May 15 00:55:40.840854 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025
May 15 00:55:40.840873 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54
May 15 00:55:40.840897 kernel: BIOS-provided physical RAM map:
May 15 00:55:40.840903 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 15 00:55:40.840909 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 15 00:55:40.840914 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 15 00:55:40.840921 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 15 00:55:40.840927 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 15 00:55:40.840934 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 15 00:55:40.840940 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 15 00:55:40.840945 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 15 00:55:40.840951 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 15 00:55:40.840956 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 15 00:55:40.840961 kernel: NX (Execute Disable) protection: active
May 15 00:55:40.840969 kernel: SMBIOS 2.8 present.
May 15 00:55:40.840976 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 15 00:55:40.840981 kernel: Hypervisor detected: KVM
May 15 00:55:40.840987 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 15 00:55:40.840993 kernel: kvm-clock: cpu 0, msr 6f196001, primary cpu clock
May 15 00:55:40.840999 kernel: kvm-clock: using sched offset of 2422935284 cycles
May 15 00:55:40.841005 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 15 00:55:40.841011 kernel: tsc: Detected 2794.748 MHz processor
May 15 00:55:40.841018 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 15 00:55:40.841026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 15 00:55:40.841032 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 15 00:55:40.841038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 15 00:55:40.841044 kernel: Using GB pages for direct mapping
May 15 00:55:40.841050 kernel: ACPI: Early table checksum verification disabled
May 15 00:55:40.841056 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 15 00:55:40.841062 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841069 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841075 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841082 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 15 00:55:40.841088 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841094 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841100 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841114 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:55:40.841120 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 15 00:55:40.841127 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 15 00:55:40.841133 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 15 00:55:40.841143 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 15 00:55:40.841149 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 15 00:55:40.841155 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 15 00:55:40.841162 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 15 00:55:40.841168 kernel: No NUMA configuration found
May 15 00:55:40.841175 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 15 00:55:40.841183 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 15 00:55:40.841190 kernel: Zone ranges:
May 15 00:55:40.841196 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 15 00:55:40.841203 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 15 00:55:40.841209 kernel: Normal empty
May 15 00:55:40.841215 kernel: Movable zone start for each node
May 15 00:55:40.841222 kernel: Early memory node ranges
May 15 00:55:40.841228 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 15 00:55:40.841235 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 15 00:55:40.841241 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 15 00:55:40.841249 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 15 00:55:40.841255 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 15 00:55:40.841262 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 15 00:55:40.841268 kernel: ACPI: PM-Timer IO Port: 0x608
May 15 00:55:40.841274 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 15 00:55:40.841281 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 15 00:55:40.841287 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 15 00:55:40.841294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 15 00:55:40.841300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 15 00:55:40.841308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 15 00:55:40.841314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 15 00:55:40.841321 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 15 00:55:40.841327 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 15 00:55:40.841333 kernel: TSC deadline timer available
May 15 00:55:40.841340 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 15 00:55:40.841346 kernel: kvm-guest: KVM setup pv remote TLB flush
May 15 00:55:40.841353 kernel: kvm-guest: setup PV sched yield
May 15 00:55:40.841359 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 15 00:55:40.841366 kernel: Booting paravirtualized kernel on KVM
May 15 00:55:40.841373 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 15 00:55:40.841380 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 15 00:55:40.841386 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 15 00:55:40.841393 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 15 00:55:40.841399 kernel: pcpu-alloc: [0] 0 1 2 3
May 15 00:55:40.841405 kernel: kvm-guest: setup async PF for cpu 0
May 15 00:55:40.841412 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 15 00:55:40.841418 kernel: kvm-guest: PV spinlocks enabled
May 15 00:55:40.841425 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 15 00:55:40.841432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 15 00:55:40.841438 kernel: Policy zone: DMA32
May 15 00:55:40.841446 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54
May 15 00:55:40.841453 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:55:40.841459 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:55:40.841466 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:55:40.841473 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:55:40.841481 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved)
May 15 00:55:40.841487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 00:55:40.841502 kernel: ftrace: allocating 34584 entries in 136 pages
May 15 00:55:40.841510 kernel: ftrace: allocated 136 pages with 2 groups
May 15 00:55:40.841516 kernel: rcu: Hierarchical RCU implementation.
May 15 00:55:40.841523 kernel: rcu: RCU event tracing is enabled.
May 15 00:55:40.841530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 00:55:40.841536 kernel: Rude variant of Tasks RCU enabled.
May 15 00:55:40.841543 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:55:40.841551 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 00:55:40.841558 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 00:55:40.841564 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 15 00:55:40.841570 kernel: random: crng init done
May 15 00:55:40.841577 kernel: Console: colour VGA+ 80x25
May 15 00:55:40.841583 kernel: printk: console [ttyS0] enabled
May 15 00:55:40.841590 kernel: ACPI: Core revision 20210730
May 15 00:55:40.841596 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 15 00:55:40.841603 kernel: APIC: Switch to symmetric I/O mode setup
May 15 00:55:40.841610 kernel: x2apic enabled
May 15 00:55:40.841617 kernel: Switched APIC routing to physical x2apic.
May 15 00:55:40.841623 kernel: kvm-guest: setup PV IPIs
May 15 00:55:40.841630 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 15 00:55:40.841636 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 15 00:55:40.841643 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
May 15 00:55:40.841649 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 15 00:55:40.841656 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 15 00:55:40.841662 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 15 00:55:40.841674 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 15 00:55:40.841681 kernel: Spectre V2 : Mitigation: Retpolines
May 15 00:55:40.841688 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 15 00:55:40.841695 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 15 00:55:40.841702 kernel: RETBleed: Mitigation: untrained return thunk
May 15 00:55:40.841709 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 15 00:55:40.841716 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 15 00:55:40.841741 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 15 00:55:40.841748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 15 00:55:40.841756 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 15 00:55:40.841767 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 15 00:55:40.841774 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 15 00:55:40.841781 kernel: Freeing SMP alternatives memory: 32K
May 15 00:55:40.841787 kernel: pid_max: default: 32768 minimum: 301
May 15 00:55:40.841794 kernel: LSM: Security Framework initializing
May 15 00:55:40.841801 kernel: SELinux: Initializing.
May 15 00:55:40.841808 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:55:40.841816 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:55:40.841823 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 15 00:55:40.841830 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 15 00:55:40.841837 kernel: ... version: 0
May 15 00:55:40.841843 kernel: ... bit width: 48
May 15 00:55:40.841850 kernel: ... generic registers: 6
May 15 00:55:40.841857 kernel: ... value mask: 0000ffffffffffff
May 15 00:55:40.841864 kernel: ... max period: 00007fffffffffff
May 15 00:55:40.841870 kernel: ... fixed-purpose events: 0
May 15 00:55:40.841878 kernel: ... event mask: 000000000000003f
May 15 00:55:40.841885 kernel: signal: max sigframe size: 1776
May 15 00:55:40.841892 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:55:40.841899 kernel: smp: Bringing up secondary CPUs ...
May 15 00:55:40.841905 kernel: x86: Booting SMP configuration:
May 15 00:55:40.841912 kernel: .... node #0, CPUs: #1
May 15 00:55:40.841919 kernel: kvm-clock: cpu 1, msr 6f196041, secondary cpu clock
May 15 00:55:40.841925 kernel: kvm-guest: setup async PF for cpu 1
May 15 00:55:40.841932 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 15 00:55:40.841940 kernel: #2
May 15 00:55:40.841947 kernel: kvm-clock: cpu 2, msr 6f196081, secondary cpu clock
May 15 00:55:40.841954 kernel: kvm-guest: setup async PF for cpu 2
May 15 00:55:40.841960 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 15 00:55:40.841967 kernel: #3
May 15 00:55:40.841974 kernel: kvm-clock: cpu 3, msr 6f1960c1, secondary cpu clock
May 15 00:55:40.841981 kernel: kvm-guest: setup async PF for cpu 3
May 15 00:55:40.841987 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 15 00:55:40.841994 kernel: smp: Brought up 1 node, 4 CPUs
May 15 00:55:40.842002 kernel: smpboot: Max logical packages: 1
May 15 00:55:40.842009 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
May 15 00:55:40.842016 kernel: devtmpfs: initialized
May 15 00:55:40.842022 kernel: x86/mm: Memory block size: 128MB
May 15 00:55:40.842029 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:55:40.842036 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 00:55:40.842043 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:55:40.842050 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:55:40.842056 kernel: audit: initializing netlink subsys (disabled)
May 15 00:55:40.842064 kernel: audit: type=2000 audit(1747270540.274:1): state=initialized audit_enabled=0 res=1
May 15 00:55:40.842071 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:55:40.842078 kernel: thermal_sys: Registered thermal governor 'user_space'
May 15 00:55:40.842085 kernel: cpuidle: using governor menu
May 15 00:55:40.842091 kernel: ACPI: bus type PCI registered
May 15 00:55:40.842098 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:55:40.842112 kernel: dca service started, version 1.12.1
May 15 00:55:40.842119 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 15 00:55:40.842126 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 15 00:55:40.842132 kernel: PCI: Using configuration type 1 for base access
May 15 00:55:40.842141 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 15 00:55:40.842148 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:55:40.842155 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:55:40.842162 kernel: ACPI: Added _OSI(Module Device)
May 15 00:55:40.842168 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:55:40.842175 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:55:40.842182 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:55:40.842188 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 15 00:55:40.842195 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 15 00:55:40.842203 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 15 00:55:40.842210 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:55:40.842217 kernel: ACPI: Interpreter enabled
May 15 00:55:40.842224 kernel: ACPI: PM: (supports S0 S3 S5)
May 15 00:55:40.842230 kernel: ACPI: Using IOAPIC for interrupt routing
May 15 00:55:40.842237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 15 00:55:40.842244 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 15 00:55:40.842251 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:55:40.842362 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:55:40.842436 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 15 00:55:40.842503 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 15 00:55:40.842513 kernel: PCI host bridge to bus 0000:00
May 15 00:55:40.842585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 15 00:55:40.842648 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 15 00:55:40.842709 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 15 00:55:40.842786 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 15 00:55:40.842847 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 15 00:55:40.842907 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 15 00:55:40.842969 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:55:40.843047 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 15 00:55:40.843132 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 15 00:55:40.843206 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 15 00:55:40.843275 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 15 00:55:40.843344 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 15 00:55:40.843412 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 15 00:55:40.843486 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 15 00:55:40.843556 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 15 00:55:40.843629 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 15 00:55:40.843713 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 15 00:55:40.843840 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 15 00:55:40.843912 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 15 00:55:40.843979 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 15 00:55:40.844047 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 15 00:55:40.844134 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 15 00:55:40.844204 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 15 00:55:40.844275 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 15 00:55:40.844343 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 15 00:55:40.844409 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 15 00:55:40.844481 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 15 00:55:40.844548 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 15 00:55:40.844622 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 15 00:55:40.844690 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 15 00:55:40.844773 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 15 00:55:40.844846 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 15 00:55:40.844913 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 15 00:55:40.844923 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 15 00:55:40.844930 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 15 00:55:40.844937 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 15 00:55:40.844944 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 15 00:55:40.844953 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 15 00:55:40.844960 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 15 00:55:40.844977 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 15 00:55:40.845004 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 15 00:55:40.845020 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 15 00:55:40.845027 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 15 00:55:40.845034 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 15 00:55:40.845041 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 15 00:55:40.845048 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 15 00:55:40.845056 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 15 00:55:40.845063 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 15 00:55:40.845070 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 15 00:55:40.845077 kernel: iommu: Default domain type: Translated
May 15 00:55:40.845084 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 15 00:55:40.845256 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 15 00:55:40.845360 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 15 00:55:40.845446 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 15 00:55:40.845461 kernel: vgaarb: loaded
May 15 00:55:40.845478 kernel: pps_core: LinuxPPS API ver. 1 registered
May 15 00:55:40.845488 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 15 00:55:40.845498 kernel: PTP clock support registered
May 15 00:55:40.845508 kernel: PCI: Using ACPI for IRQ routing
May 15 00:55:40.845517 kernel: PCI: pci_cache_line_size set to 64 bytes
May 15 00:55:40.845526 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 15 00:55:40.845534 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 15 00:55:40.845541 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 15 00:55:40.845548 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 15 00:55:40.845557 kernel: clocksource: Switched to clocksource kvm-clock
May 15 00:55:40.845565 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:55:40.845573 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:55:40.845582 kernel: pnp: PnP ACPI init
May 15 00:55:40.845691 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 15 00:55:40.845708 kernel: pnp: PnP ACPI: found 6 devices
May 15 00:55:40.845738 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 15 00:55:40.845748 kernel: NET: Registered PF_INET protocol family
May 15 00:55:40.845761 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:55:40.845770 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:55:40.845779 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:55:40.845787 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:55:40.845795 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 15 00:55:40.845802 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:55:40.845809 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:55:40.845816 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:55:40.845824 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:55:40.845833 kernel: NET: Registered PF_XDP protocol family
May 15 00:55:40.845911 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 15 00:55:40.845975 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 15 00:55:40.846035 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 15 00:55:40.846096 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 15 00:55:40.846175 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 15 00:55:40.846236 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 15 00:55:40.846245 kernel: PCI: CLS 0 bytes, default 64
May 15 00:55:40.846255 kernel: Initialise system trusted keyrings
May 15 00:55:40.846262 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:55:40.846269 kernel: Key type asymmetric registered
May 15 00:55:40.846276 kernel: Asymmetric key parser 'x509' registered
May 15 00:55:40.846284 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 15 00:55:40.846291 kernel: io scheduler mq-deadline registered
May 15 00:55:40.846298 kernel: io scheduler kyber registered
May 15 00:55:40.846305 kernel: io scheduler bfq registered
May 15 00:55:40.846312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 15 00:55:40.846321 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 15 00:55:40.846329 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 15 00:55:40.846336 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 15 00:55:40.846343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:55:40.846350 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 15 00:55:40.846358 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 15 00:55:40.846365 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 15 00:55:40.846372 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 15 00:55:40.846452 kernel: rtc_cmos 00:04: RTC can wake from S4
May 15 00:55:40.846464 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 15 00:55:40.846528 kernel: rtc_cmos 00:04: registered as rtc0
May 15 00:55:40.846591 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:55:40 UTC (1747270540)
May 15 00:55:40.846660 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 15 00:55:40.846673 kernel: NET: Registered PF_INET6 protocol family
May 15 00:55:40.846682 kernel: Segment Routing with IPv6
May 15 00:55:40.846693 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:55:40.846702 kernel: NET: Registered PF_PACKET protocol family
May 15 00:55:40.846715 kernel: Key type dns_resolver registered
May 15 00:55:40.846736 kernel: IPI shorthand broadcast: enabled
May 15 00:55:40.846744 kernel: sched_clock: Marking stable (423164688, 419388877)->(1185584568, -343031003)
May 15 00:55:40.846751 kernel: registered taskstats version 1
May 15 00:55:40.846758 kernel: Loading compiled-in X.509 certificates
May 15 00:55:40.846766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd'
May 15 00:55:40.846775 kernel: Key type .fscrypt registered
May 15 00:55:40.846785 kernel: Key type fscrypt-provisioning registered
May 15 00:55:40.846794 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:55:40.846807 kernel: ima: Allocated hash algorithm: sha1
May 15 00:55:40.846817 kernel: ima: No architecture policies found
May 15 00:55:40.846826 kernel: clk: Disabling unused clocks
May 15 00:55:40.846835 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 15 00:55:40.846843 kernel: Write protecting the kernel read-only data: 28672k
May 15 00:55:40.846852 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 15 00:55:40.846862 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 15 00:55:40.846871 kernel: Run /init as init process
May 15 00:55:40.846880 kernel: with arguments:
May 15 00:55:40.846892 kernel: /init
May 15 00:55:40.846901 kernel: with environment:
May 15 00:55:40.846911 kernel: HOME=/
May 15 00:55:40.846920 kernel: TERM=linux
May 15 00:55:40.846929 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:55:40.846943 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 00:55:40.846957 systemd[1]: Detected virtualization kvm.
May 15 00:55:40.846968 systemd[1]: Detected architecture x86-64.
May 15 00:55:40.846980 systemd[1]: Running in initrd.
May 15 00:55:40.846990 systemd[1]: No hostname configured, using default hostname.
May 15 00:55:40.847000 systemd[1]: Hostname set to .
May 15 00:55:40.847011 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:55:40.847020 systemd[1]: Queued start job for default target initrd.target.
May 15 00:55:40.847031 systemd[1]: Started systemd-ask-password-console.path.
May 15 00:55:40.847041 systemd[1]: Reached target cryptsetup.target.
May 15 00:55:40.847051 systemd[1]: Reached target paths.target.
May 15 00:55:40.847063 systemd[1]: Reached target slices.target.
May 15 00:55:40.847079 systemd[1]: Reached target swap.target.
May 15 00:55:40.847088 systemd[1]: Reached target timers.target.
May 15 00:55:40.847096 systemd[1]: Listening on iscsid.socket.
May 15 00:55:40.847114 systemd[1]: Listening on iscsiuio.socket.
May 15 00:55:40.847127 systemd[1]: Listening on systemd-journald-audit.socket.
May 15 00:55:40.847138 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 15 00:55:40.847149 systemd[1]: Listening on systemd-journald.socket.
May 15 00:55:40.847160 systemd[1]: Listening on systemd-networkd.socket.
May 15 00:55:40.847170 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 00:55:40.847180 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 00:55:40.847190 systemd[1]: Reached target sockets.target.
May 15 00:55:40.847200 systemd[1]: Starting kmod-static-nodes.service...
May 15 00:55:40.847210 systemd[1]: Finished network-cleanup.service.
May 15 00:55:40.847220 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:55:40.847228 systemd[1]: Starting systemd-journald.service...
May 15 00:55:40.847236 systemd[1]: Starting systemd-modules-load.service...
May 15 00:55:40.847244 systemd[1]: Starting systemd-resolved.service...
May 15 00:55:40.847252 systemd[1]: Starting systemd-vconsole-setup.service...
May 15 00:55:40.847259 systemd[1]: Finished kmod-static-nodes.service.
May 15 00:55:40.847267 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:55:40.847275 kernel: audit: type=1130 audit(1747270540.840:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.847283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 00:55:40.847296 systemd-journald[199]: Journal started
May 15 00:55:40.847343 systemd-journald[199]: Runtime Journal (/run/log/journal/8252eaeca39e4c34bbb2f03ef49632a3) is 6.0M, max 48.5M, 42.5M free.
May 15 00:55:40.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.848274 systemd-modules-load[200]: Inserted module 'overlay'
May 15 00:55:40.850035 systemd[1]: Started systemd-journald.service.
May 15 00:55:40.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.850743 kernel: audit: type=1130 audit(1747270540.849:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.851636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 00:55:40.889013 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:55:40.889038 kernel: Bridge firewalling registered
May 15 00:55:40.866807 systemd-resolved[201]: Positive Trust Anchors:
May 15 00:55:40.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.866822 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:55:40.897137 kernel: audit: type=1130 audit(1747270540.888:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.897154 kernel: audit: type=1130 audit(1747270540.893:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.866850 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 00:55:40.907245 kernel: audit: type=1130 audit(1747270540.898:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.868903 systemd-resolved[201]: Defaulting to hostname 'linux'.
May 15 00:55:40.908831 kernel: SCSI subsystem initialized
May 15 00:55:40.888025 systemd-modules-load[200]: Inserted module 'br_netfilter'
May 15 00:55:40.889136 systemd[1]: Started systemd-resolved.service.
May 15 00:55:40.894890 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 00:55:40.899150 systemd[1]: Reached target nss-lookup.target.
May 15 00:55:40.913280 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 00:55:40.921834 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:55:40.921873 kernel: device-mapper: uevent: version 1.0.3 May 15 00:55:40.921883 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 00:55:40.924454 systemd-modules-load[200]: Inserted module 'dm_multipath' May 15 00:55:40.925471 systemd[1]: Finished systemd-modules-load.service. May 15 00:55:40.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.927362 systemd[1]: Finished dracut-cmdline-ask.service. May 15 00:55:40.931691 kernel: audit: type=1130 audit(1747270540.926:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.931710 kernel: audit: type=1130 audit(1747270540.930:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.932311 systemd[1]: Starting dracut-cmdline.service... May 15 00:55:40.935512 systemd[1]: Starting systemd-sysctl.service... 
May 15 00:55:40.939780 dracut-cmdline[219]: dracut-dracut-053 May 15 00:55:40.941333 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:40.941441 systemd[1]: Finished systemd-sysctl.service. May 15 00:55:40.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.950746 kernel: audit: type=1130 audit(1747270540.946:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.994750 kernel: Loading iSCSI transport class v2.0-870. May 15 00:55:41.010752 kernel: iscsi: registered transport (tcp) May 15 00:55:41.031742 kernel: iscsi: registered transport (qla4xxx) May 15 00:55:41.031768 kernel: QLogic iSCSI HBA Driver May 15 00:55:41.060569 systemd[1]: Finished dracut-cmdline.service. May 15 00:55:41.065800 kernel: audit: type=1130 audit(1747270541.060:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:41.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:41.062539 systemd[1]: Starting dracut-pre-udev.service... 
May 15 00:55:41.107752 kernel: raid6: avx2x4 gen() 28382 MB/s May 15 00:55:41.124752 kernel: raid6: avx2x4 xor() 7117 MB/s May 15 00:55:41.141754 kernel: raid6: avx2x2 gen() 30900 MB/s May 15 00:55:41.158751 kernel: raid6: avx2x2 xor() 18926 MB/s May 15 00:55:41.175748 kernel: raid6: avx2x1 gen() 26217 MB/s May 15 00:55:41.192747 kernel: raid6: avx2x1 xor() 15322 MB/s May 15 00:55:41.209738 kernel: raid6: sse2x4 gen() 14762 MB/s May 15 00:55:41.226749 kernel: raid6: sse2x4 xor() 6546 MB/s May 15 00:55:41.279755 kernel: raid6: sse2x2 gen() 13412 MB/s May 15 00:55:41.296753 kernel: raid6: sse2x2 xor() 9295 MB/s May 15 00:55:41.321753 kernel: raid6: sse2x1 gen() 11145 MB/s May 15 00:55:41.339341 kernel: raid6: sse2x1 xor() 7300 MB/s May 15 00:55:41.339364 kernel: raid6: using algorithm avx2x2 gen() 30900 MB/s May 15 00:55:41.339377 kernel: raid6: .... xor() 18926 MB/s, rmw enabled May 15 00:55:41.340057 kernel: raid6: using avx2x2 recovery algorithm May 15 00:55:41.352750 kernel: xor: automatically using best checksumming function avx May 15 00:55:41.441745 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 00:55:41.450457 systemd[1]: Finished dracut-pre-udev.service. May 15 00:55:41.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:41.451000 audit: BPF prog-id=7 op=LOAD May 15 00:55:41.451000 audit: BPF prog-id=8 op=LOAD May 15 00:55:41.452488 systemd[1]: Starting systemd-udevd.service... May 15 00:55:41.463916 systemd-udevd[400]: Using default interface naming scheme 'v252'. May 15 00:55:41.467734 systemd[1]: Started systemd-udevd.service. May 15 00:55:41.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:41.469314 systemd[1]: Starting dracut-pre-trigger.service... May 15 00:55:41.479655 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation May 15 00:55:41.508250 systemd[1]: Finished dracut-pre-trigger.service. May 15 00:55:41.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:41.510700 systemd[1]: Starting systemd-udev-trigger.service... May 15 00:55:41.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:41.544029 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:55:41.583739 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:55:41.602991 kernel: cryptd: max_cpu_qlen set to 1000 May 15 00:55:41.603005 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:55:41.603014 kernel: GPT:9289727 != 19775487 May 15 00:55:41.603022 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:55:41.603031 kernel: GPT:9289727 != 19775487 May 15 00:55:41.603039 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:55:41.603051 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:55:41.606743 kernel: libata version 3.00 loaded. May 15 00:55:41.618741 kernel: AVX2 version of gcm_enc/dec engaged. 
May 15 00:55:41.618804 kernel: AES CTR mode by8 optimization enabled May 15 00:55:41.618814 kernel: ahci 0000:00:1f.2: version 3.0 May 15 00:55:41.627109 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 00:55:41.627122 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 00:55:41.627206 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 00:55:41.627288 kernel: scsi host0: ahci May 15 00:55:41.627381 kernel: scsi host1: ahci May 15 00:55:41.627464 kernel: scsi host2: ahci May 15 00:55:41.627562 kernel: scsi host3: ahci May 15 00:55:41.627642 kernel: scsi host4: ahci May 15 00:55:41.627740 kernel: scsi host5: ahci May 15 00:55:41.627825 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 15 00:55:41.627835 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 15 00:55:41.627844 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 15 00:55:41.627853 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 15 00:55:41.627861 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 15 00:55:41.627870 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 15 00:55:41.634741 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) May 15 00:55:41.640774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 00:55:41.682625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 00:55:41.683054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 00:55:41.690182 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 00:55:41.693330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 00:55:41.695357 systemd[1]: Starting disk-uuid.service... May 15 00:55:41.706860 disk-uuid[535]: Primary Header is updated. 
May 15 00:55:41.706860 disk-uuid[535]: Secondary Entries is updated. May 15 00:55:41.706860 disk-uuid[535]: Secondary Header is updated. May 15 00:55:41.710756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:55:41.713739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:55:41.717745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:55:41.933773 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 00:55:41.933833 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 00:55:41.941742 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 00:55:41.941785 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 00:55:41.942745 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 00:55:41.943751 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 00:55:41.944755 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 00:55:41.948786 kernel: ata3.00: applying bridge limits May 15 00:55:41.949742 kernel: ata3.00: configured for UDMA/100 May 15 00:55:41.953700 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 00:55:41.981755 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 00:55:41.999348 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 00:55:41.999367 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 00:55:42.729404 disk-uuid[536]: The operation has completed successfully. May 15 00:55:42.730765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:55:42.752016 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:55:42.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:42.752108 systemd[1]: Finished disk-uuid.service. May 15 00:55:42.756151 systemd[1]: Starting verity-setup.service... May 15 00:55:42.780749 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 00:55:42.802968 systemd[1]: Found device dev-mapper-usr.device. May 15 00:55:42.805182 systemd[1]: Mounting sysusr-usr.mount... May 15 00:55:42.806933 systemd[1]: Finished verity-setup.service. May 15 00:55:42.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.868529 systemd[1]: Mounted sysusr-usr.mount. May 15 00:55:42.869950 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 00:55:42.869114 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 00:55:42.869958 systemd[1]: Starting ignition-setup.service... May 15 00:55:42.870855 systemd[1]: Starting parse-ip-for-networkd.service... May 15 00:55:42.882041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:55:42.882083 kernel: BTRFS info (device vda6): using free space tree May 15 00:55:42.882093 kernel: BTRFS info (device vda6): has skinny extents May 15 00:55:42.889759 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 00:55:42.924846 systemd[1]: Finished parse-ip-for-networkd.service. May 15 00:55:42.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.926000 audit: BPF prog-id=9 op=LOAD May 15 00:55:42.926972 systemd[1]: Starting systemd-networkd.service... 
May 15 00:55:42.947698 systemd-networkd[721]: lo: Link UP May 15 00:55:42.947710 systemd-networkd[721]: lo: Gained carrier May 15 00:55:42.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.948186 systemd-networkd[721]: Enumeration completed May 15 00:55:42.948266 systemd[1]: Started systemd-networkd.service. May 15 00:55:42.948437 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:55:42.949491 systemd[1]: Reached target network.target. May 15 00:55:42.950044 systemd-networkd[721]: eth0: Link UP May 15 00:55:42.950058 systemd-networkd[721]: eth0: Gained carrier May 15 00:55:42.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.951819 systemd[1]: Starting iscsiuio.service... May 15 00:55:42.963057 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 00:55:42.963057 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 15 00:55:42.963057 iscsid[726]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 15 00:55:42.963057 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 00:55:42.963057 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 15 00:55:42.963057 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 00:55:42.963057 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 00:55:42.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.956685 systemd[1]: Started iscsiuio.service. May 15 00:55:42.957679 systemd[1]: Starting iscsid.service... May 15 00:55:42.962005 systemd[1]: Started iscsid.service. May 15 00:55:42.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.963776 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:55:42.964574 systemd[1]: Starting dracut-initqueue.service... May 15 00:55:42.998082 systemd[1]: Finished dracut-initqueue.service. May 15 00:55:42.998553 systemd[1]: Reached target remote-fs-pre.target. May 15 00:55:43.000139 systemd[1]: Reached target remote-cryptsetup.target. May 15 00:55:43.003818 systemd[1]: Reached target remote-fs.target. May 15 00:55:43.006164 systemd[1]: Starting dracut-pre-mount.service... May 15 00:55:43.020704 systemd[1]: Finished ignition-setup.service. May 15 00:55:43.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.021772 systemd[1]: Starting ignition-fetch-offline.service... May 15 00:55:43.028824 systemd[1]: Finished dracut-pre-mount.service. 
May 15 00:55:43.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.061388 ignition[737]: Ignition 2.14.0 May 15 00:55:43.061402 ignition[737]: Stage: fetch-offline May 15 00:55:43.061447 ignition[737]: no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.061469 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.061571 ignition[737]: parsed url from cmdline: "" May 15 00:55:43.061575 ignition[737]: no config URL provided May 15 00:55:43.061580 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:55:43.061587 ignition[737]: no config at "/usr/lib/ignition/user.ign" May 15 00:55:43.061606 ignition[737]: op(1): [started] loading QEMU firmware config module May 15 00:55:43.061611 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:55:43.073552 ignition[737]: op(1): [finished] loading QEMU firmware config module May 15 00:55:43.112672 ignition[737]: parsing config with SHA512: 8ac8c72dbbf2948706cdc6e3909ebd61cd719dd7f49becbc090e23c1a5473d8bfc1bd9ab71d9f7f6394a7e5d476fad190cf88307f502320765c8d1de56eb8241 May 15 00:55:43.118882 unknown[737]: fetched base config from "system" May 15 00:55:43.118893 unknown[737]: fetched user config from "qemu" May 15 00:55:43.119337 ignition[737]: fetch-offline: fetch-offline passed May 15 00:55:43.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.120446 systemd[1]: Finished ignition-fetch-offline.service. May 15 00:55:43.119384 ignition[737]: Ignition finished successfully May 15 00:55:43.121596 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
May 15 00:55:43.122265 systemd[1]: Starting ignition-kargs.service... May 15 00:55:43.131673 ignition[749]: Ignition 2.14.0 May 15 00:55:43.131682 ignition[749]: Stage: kargs May 15 00:55:43.131780 ignition[749]: no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.131788 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.132691 ignition[749]: kargs: kargs passed May 15 00:55:43.132747 ignition[749]: Ignition finished successfully May 15 00:55:43.137803 systemd[1]: Finished ignition-kargs.service. May 15 00:55:43.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.138858 systemd[1]: Starting ignition-disks.service... May 15 00:55:43.145131 ignition[755]: Ignition 2.14.0 May 15 00:55:43.145142 ignition[755]: Stage: disks May 15 00:55:43.145233 ignition[755]: no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.145246 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.146419 ignition[755]: disks: disks passed May 15 00:55:43.146462 ignition[755]: Ignition finished successfully May 15 00:55:43.151033 systemd[1]: Finished ignition-disks.service. May 15 00:55:43.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.151633 systemd[1]: Reached target initrd-root-device.target. May 15 00:55:43.153247 systemd[1]: Reached target local-fs-pre.target. May 15 00:55:43.155462 systemd[1]: Reached target local-fs.target. May 15 00:55:43.156033 systemd[1]: Reached target sysinit.target. May 15 00:55:43.157359 systemd[1]: Reached target basic.target. May 15 00:55:43.158486 systemd[1]: Starting systemd-fsck-root.service... 
May 15 00:55:43.168702 systemd-fsck[763]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 15 00:55:43.174605 systemd[1]: Finished systemd-fsck-root.service. May 15 00:55:43.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.177514 systemd[1]: Mounting sysroot.mount... May 15 00:55:43.184572 systemd[1]: Mounted sysroot.mount. May 15 00:55:43.186170 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 00:55:43.186249 systemd[1]: Reached target initrd-root-fs.target. May 15 00:55:43.188605 systemd[1]: Mounting sysroot-usr.mount... May 15 00:55:43.190376 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 00:55:43.190410 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:55:43.190428 systemd[1]: Reached target ignition-diskful.target. May 15 00:55:43.196549 systemd[1]: Mounted sysroot-usr.mount. May 15 00:55:43.199017 systemd[1]: Starting initrd-setup-root.service... May 15 00:55:43.202654 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:55:43.205887 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory May 15 00:55:43.209011 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:55:43.212588 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:55:43.236531 systemd[1]: Finished initrd-setup-root.service. May 15 00:55:43.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:43.239309 systemd[1]: Starting ignition-mount.service... May 15 00:55:43.241386 systemd[1]: Starting sysroot-boot.service... May 15 00:55:43.244312 bash[814]: umount: /sysroot/usr/share/oem: not mounted. May 15 00:55:43.252497 ignition[816]: INFO : Ignition 2.14.0 May 15 00:55:43.252497 ignition[816]: INFO : Stage: mount May 15 00:55:43.254231 ignition[816]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.254231 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.254231 ignition[816]: INFO : mount: mount passed May 15 00:55:43.254231 ignition[816]: INFO : Ignition finished successfully May 15 00:55:43.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.254500 systemd[1]: Finished ignition-mount.service. May 15 00:55:43.262309 systemd[1]: Finished sysroot-boot.service. May 15 00:55:43.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.450393 systemd-resolved[201]: Detected conflict on linux IN A 10.0.0.131 May 15 00:55:43.450415 systemd-resolved[201]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. May 15 00:55:43.816843 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 00:55:43.824751 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (824) May 15 00:55:43.824809 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:55:43.827342 kernel: BTRFS info (device vda6): using free space tree May 15 00:55:43.827355 kernel: BTRFS info (device vda6): has skinny extents May 15 00:55:43.830277 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 15 00:55:43.831359 systemd[1]: Starting ignition-files.service... May 15 00:55:43.843862 ignition[844]: INFO : Ignition 2.14.0 May 15 00:55:43.843862 ignition[844]: INFO : Stage: files May 15 00:55:43.845471 ignition[844]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.845471 ignition[844]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.848397 ignition[844]: DEBUG : files: compiled without relabeling support, skipping May 15 00:55:43.849810 ignition[844]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:55:43.849810 ignition[844]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:55:43.853510 ignition[844]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:55:43.854888 ignition[844]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:55:43.856621 unknown[844]: wrote ssh authorized keys file for user: core May 15 00:55:43.857829 ignition[844]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:55:43.859350 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 00:55:43.859350 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 15 00:55:43.910097 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:55:44.151287 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 15 00:55:44.153465 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:55:44.153465 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 15 00:55:44.254917 systemd-networkd[721]: eth0: Gained IPv6LL May 15 00:55:44.270628 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:55:44.358174 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 00:55:44.358174 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:55:44.361763 
ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:44.361763 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:44.383377 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 00:55:44.777833 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 00:55:45.244734 ignition[844]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 00:55:45.244734 ignition[844]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 00:55:45.248320 ignition[844]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:55:45.248320 ignition[844]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:55:45.248320 ignition[844]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 00:55:45.248320 ignition[844]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 00:55:45.248320 ignition[844]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
May 15 00:55:45.256575 ignition[844]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:55:45.281797 ignition[844]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:55:45.283627 ignition[844]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 00:55:45.285111 ignition[844]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:55:45.287110 ignition[844]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:55:45.287110 ignition[844]: INFO : files: files passed
May 15 00:55:45.289653 ignition[844]: INFO : Ignition finished successfully
May 15 00:55:45.291601 systemd[1]: Finished ignition-files.service.
May 15 00:55:45.297471 kernel: kauditd_printk_skb: 24 callbacks suppressed
May 15 00:55:45.297495 kernel: audit: type=1130 audit(1747270545.291:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.297505 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 00:55:45.298037 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 00:55:45.298844 systemd[1]: Starting ignition-quench.service...
May 15 00:55:45.310572 kernel: audit: type=1130 audit(1747270545.301:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.310596 kernel: audit: type=1131 audit(1747270545.301:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.301996 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:55:45.302109 systemd[1]: Finished ignition-quench.service.
May 15 00:55:45.316782 initrd-setup-root-after-ignition[870]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 15 00:55:45.319743 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:55:45.320401 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 00:55:45.326664 kernel: audit: type=1130 audit(1747270545.320:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.321806 systemd[1]: Reached target ignition-complete.target.
May 15 00:55:45.328473 systemd[1]: Starting initrd-parse-etc.service...
May 15 00:55:45.342372 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:55:45.342482 systemd[1]: Finished initrd-parse-etc.service.
May 15 00:55:45.351267 kernel: audit: type=1130 audit(1747270545.343:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.351295 kernel: audit: type=1131 audit(1747270545.343:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.344447 systemd[1]: Reached target initrd-fs.target.
May 15 00:55:45.352947 systemd[1]: Reached target initrd.target.
May 15 00:55:45.354565 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 00:55:45.356591 systemd[1]: Starting dracut-pre-pivot.service...
May 15 00:55:45.367071 systemd[1]: Finished dracut-pre-pivot.service.
May 15 00:55:45.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.369429 systemd[1]: Starting initrd-cleanup.service...
May 15 00:55:45.373235 kernel: audit: type=1130 audit(1747270545.368:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.377630 systemd[1]: Stopped target nss-lookup.target.
May 15 00:55:45.379294 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 00:55:45.381098 systemd[1]: Stopped target timers.target.
May 15 00:55:45.382616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:55:45.383602 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 00:55:45.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.385354 systemd[1]: Stopped target initrd.target.
May 15 00:55:45.389689 kernel: audit: type=1131 audit(1747270545.384:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.389775 systemd[1]: Stopped target basic.target.
May 15 00:55:45.391432 systemd[1]: Stopped target ignition-complete.target.
May 15 00:55:45.393393 systemd[1]: Stopped target ignition-diskful.target.
May 15 00:55:45.395333 systemd[1]: Stopped target initrd-root-device.target.
May 15 00:55:45.397345 systemd[1]: Stopped target remote-fs.target.
May 15 00:55:45.399113 systemd[1]: Stopped target remote-fs-pre.target.
May 15 00:55:45.401009 systemd[1]: Stopped target sysinit.target.
May 15 00:55:45.402739 systemd[1]: Stopped target local-fs.target.
May 15 00:55:45.404479 systemd[1]: Stopped target local-fs-pre.target.
May 15 00:55:45.406344 systemd[1]: Stopped target swap.target.
May 15 00:55:45.407954 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:55:45.409097 systemd[1]: Stopped dracut-pre-mount.service.
May 15 00:55:45.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.410965 systemd[1]: Stopped target cryptsetup.target.
May 15 00:55:45.415900 kernel: audit: type=1131 audit(1747270545.410:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.415946 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 00:55:45.417061 systemd[1]: Stopped dracut-initqueue.service.
May 15 00:55:45.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.418893 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 00:55:45.422807 kernel: audit: type=1131 audit(1747270545.418:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.419022 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 00:55:45.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.424728 systemd[1]: Stopped target paths.target.
May 15 00:55:45.426337 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 00:55:45.428767 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 00:55:45.430772 systemd[1]: Stopped target slices.target.
May 15 00:55:45.432461 systemd[1]: Stopped target sockets.target.
May 15 00:55:45.434178 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 00:55:45.435162 systemd[1]: Closed iscsid.socket.
May 15 00:55:45.436687 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 00:55:45.437683 systemd[1]: Closed iscsiuio.socket.
May 15 00:55:45.439280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 00:55:45.440601 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 00:55:45.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.442825 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 00:55:45.444105 systemd[1]: Stopped ignition-files.service.
May 15 00:55:45.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.446631 systemd[1]: Stopping ignition-mount.service...
May 15 00:55:45.448903 systemd[1]: Stopping sysroot-boot.service...
May 15 00:55:45.450498 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 00:55:45.451694 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 00:55:45.453693 ignition[885]: INFO : Ignition 2.14.0
May 15 00:55:45.453693 ignition[885]: INFO : Stage: umount
May 15 00:55:45.453693 ignition[885]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:55:45.453693 ignition[885]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:45.453693 ignition[885]: INFO : umount: umount passed
May 15 00:55:45.453693 ignition[885]: INFO : Ignition finished successfully
May 15 00:55:45.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.454657 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 00:55:45.458622 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 00:55:45.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.464199 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 00:55:45.465894 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 00:55:45.466897 systemd[1]: Stopped ignition-mount.service.
May 15 00:55:45.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.468858 systemd[1]: Stopped target network.target.
May 15 00:55:45.470502 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 00:55:45.470560 systemd[1]: Stopped ignition-disks.service.
May 15 00:55:45.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.473248 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 00:55:45.473289 systemd[1]: Stopped ignition-kargs.service.
May 15 00:55:45.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.475743 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 00:55:45.475785 systemd[1]: Stopped ignition-setup.service.
May 15 00:55:45.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.478347 systemd[1]: Stopping systemd-networkd.service...
May 15 00:55:45.480174 systemd[1]: Stopping systemd-resolved.service...
May 15 00:55:45.482206 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 00:55:45.483320 systemd[1]: Finished initrd-cleanup.service.
May 15 00:55:45.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.485765 systemd-networkd[721]: eth0: DHCPv6 lease lost
May 15 00:55:45.487451 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 00:55:45.488561 systemd[1]: Stopped systemd-networkd.service.
May 15 00:55:45.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.490386 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 00:55:45.491465 systemd[1]: Stopped systemd-resolved.service.
May 15 00:55:45.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.493259 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 00:55:45.493287 systemd[1]: Closed systemd-networkd.socket.
May 15 00:55:45.496256 systemd[1]: Stopping network-cleanup.service...
May 15 00:55:45.497544 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 00:55:45.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.497582 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 00:55:45.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.501000 audit: BPF prog-id=9 op=UNLOAD
May 15 00:55:45.499107 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:55:45.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.499136 systemd[1]: Stopped systemd-sysctl.service.
May 15 00:55:45.501560 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 00:55:45.501627 systemd[1]: Stopped systemd-modules-load.service.
May 15 00:55:45.505000 audit: BPF prog-id=6 op=UNLOAD
May 15 00:55:45.502298 systemd[1]: Stopping systemd-udevd.service...
May 15 00:55:45.503339 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 00:55:45.508476 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 00:55:45.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.508591 systemd[1]: Stopped network-cleanup.service.
May 15 00:55:45.510578 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:55:45.510716 systemd[1]: Stopped systemd-udevd.service.
May 15 00:55:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.512087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:55:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.512117 systemd[1]: Closed systemd-udevd-control.socket.
May 15 00:55:45.513826 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:55:45.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.513857 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 00:55:45.515528 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:55:45.515580 systemd[1]: Stopped dracut-pre-udev.service.
May 15 00:55:45.517187 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:55:45.517230 systemd[1]: Stopped dracut-cmdline.service.
May 15 00:55:45.517690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:55:45.517741 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 00:55:45.518651 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 00:55:45.519156 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 00:55:45.519207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 15 00:55:45.520456 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 00:55:45.520498 systemd[1]: Stopped kmod-static-nodes.service.
May 15 00:55:45.522224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:55:45.522257 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 00:55:45.523870 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 00:55:45.524244 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:55:45.524306 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 00:55:45.561713 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:55:45.561840 systemd[1]: Stopped sysroot-boot.service.
May 15 00:55:45.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:45.563737 systemd[1]: Reached target initrd-switch-root.target.
May 15 00:55:45.565521 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:55:45.565558 systemd[1]: Stopped initrd-setup-root.service.
May 15 00:55:45.567366 systemd[1]: Starting initrd-switch-root.service...
May 15 00:55:45.583020 systemd[1]: Switching root.
May 15 00:55:45.600264 iscsid[726]: iscsid shutting down.
May 15 00:55:45.601514 systemd-journald[199]: Received SIGTERM from PID 1 (n/a).
May 15 00:55:45.601583 systemd-journald[199]: Journal stopped
May 15 00:55:48.563999 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 00:55:48.564044 kernel: SELinux: Class anon_inode not defined in policy.
May 15 00:55:48.564054 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 00:55:48.564068 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:55:48.564077 kernel: SELinux: policy capability open_perms=1
May 15 00:55:48.564086 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:55:48.564095 kernel: SELinux: policy capability always_check_network=0
May 15 00:55:48.564104 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:55:48.564113 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:55:48.564125 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:55:48.564133 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:55:48.564145 systemd[1]: Successfully loaded SELinux policy in 37.723ms.
May 15 00:55:48.564163 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.522ms.
May 15 00:55:48.564174 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 00:55:48.564187 systemd[1]: Detected virtualization kvm.
May 15 00:55:48.564197 systemd[1]: Detected architecture x86-64.
May 15 00:55:48.564207 systemd[1]: Detected first boot.
May 15 00:55:48.564217 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:55:48.564228 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 00:55:48.564241 systemd[1]: Populated /etc with preset unit settings.
May 15 00:55:48.564251 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:55:48.564262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:55:48.564273 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:55:48.564283 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 00:55:48.564294 systemd[1]: Stopped iscsiuio.service.
May 15 00:55:48.564305 systemd[1]: iscsid.service: Deactivated successfully.
May 15 00:55:48.564314 systemd[1]: Stopped iscsid.service.
May 15 00:55:48.564324 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:55:48.564334 systemd[1]: Stopped initrd-switch-root.service.
May 15 00:55:48.564344 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:55:48.564354 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 15 00:55:48.564364 systemd[1]: Created slice system-addon\x2drun.slice.
May 15 00:55:48.564376 systemd[1]: Created slice system-getty.slice.
May 15 00:55:48.564387 systemd[1]: Created slice system-modprobe.slice.
May 15 00:55:48.564397 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 15 00:55:48.564408 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 15 00:55:48.564419 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 15 00:55:48.564429 systemd[1]: Created slice user.slice.
May 15 00:55:48.564439 systemd[1]: Started systemd-ask-password-console.path.
May 15 00:55:48.564449 systemd[1]: Started systemd-ask-password-wall.path.
May 15 00:55:48.564458 systemd[1]: Set up automount boot.automount.
May 15 00:55:48.564470 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 15 00:55:48.564479 systemd[1]: Stopped target initrd-switch-root.target.
May 15 00:55:48.564489 systemd[1]: Stopped target initrd-fs.target.
May 15 00:55:48.564498 systemd[1]: Stopped target initrd-root-fs.target.
May 15 00:55:48.564508 systemd[1]: Reached target integritysetup.target.
May 15 00:55:48.564518 systemd[1]: Reached target remote-cryptsetup.target.
May 15 00:55:48.564527 systemd[1]: Reached target remote-fs.target.
May 15 00:55:48.564537 systemd[1]: Reached target slices.target.
May 15 00:55:48.564547 systemd[1]: Reached target swap.target.
May 15 00:55:48.564558 systemd[1]: Reached target torcx.target.
May 15 00:55:48.564568 systemd[1]: Reached target veritysetup.target.
May 15 00:55:48.564578 systemd[1]: Listening on systemd-coredump.socket.
May 15 00:55:48.564588 systemd[1]: Listening on systemd-initctl.socket.
May 15 00:55:48.564598 systemd[1]: Listening on systemd-networkd.socket.
May 15 00:55:48.564608 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 00:55:48.564619 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 00:55:48.564629 systemd[1]: Listening on systemd-userdbd.socket.
May 15 00:55:48.564639 systemd[1]: Mounting dev-hugepages.mount...
May 15 00:55:48.564650 systemd[1]: Mounting dev-mqueue.mount...
May 15 00:55:48.564660 systemd[1]: Mounting media.mount...
May 15 00:55:48.564670 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 00:55:48.564680 systemd[1]: Mounting sys-kernel-debug.mount...
May 15 00:55:48.564690 systemd[1]: Mounting sys-kernel-tracing.mount...
May 15 00:55:48.564702 systemd[1]: Mounting tmp.mount...
May 15 00:55:48.564712 systemd[1]: Starting flatcar-tmpfiles.service...
May 15 00:55:48.564733 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 00:55:48.564743 systemd[1]: Starting kmod-static-nodes.service...
May 15 00:55:48.564754 systemd[1]: Starting modprobe@configfs.service...
May 15 00:55:48.564764 systemd[1]: Starting modprobe@dm_mod.service...
May 15 00:55:48.564774 systemd[1]: Starting modprobe@drm.service...
May 15 00:55:48.564784 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 00:55:48.564794 systemd[1]: Starting modprobe@fuse.service...
May 15 00:55:48.564804 systemd[1]: Starting modprobe@loop.service...
May 15 00:55:48.564815 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 00:55:48.564825 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 00:55:48.564836 systemd[1]: Stopped systemd-fsck-root.service.
May 15 00:55:48.564848 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 00:55:48.564860 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 00:55:48.564870 kernel: loop: module loaded
May 15 00:55:48.564879 systemd[1]: Stopped systemd-journald.service.
May 15 00:55:48.564889 kernel: fuse: init (API version 7.34)
May 15 00:55:48.564905 systemd[1]: Starting systemd-journald.service...
May 15 00:55:48.564914 systemd[1]: Starting systemd-modules-load.service...
May 15 00:55:48.564925 systemd[1]: Starting systemd-network-generator.service...
May 15 00:55:48.564936 systemd[1]: Starting systemd-remount-fs.service...
May 15 00:55:48.564946 systemd[1]: Starting systemd-udev-trigger.service...
May 15 00:55:48.564958 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 00:55:48.564971 systemd-journald[996]: Journal started
May 15 00:55:48.565008 systemd-journald[996]: Runtime Journal (/run/log/journal/8252eaeca39e4c34bbb2f03ef49632a3) is 6.0M, max 48.5M, 42.5M free.
May 15 00:55:45.656000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 00:55:46.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 15 00:55:46.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 15 00:55:46.036000 audit: BPF prog-id=10 op=LOAD
May 15 00:55:46.036000 audit: BPF prog-id=10 op=UNLOAD
May 15 00:55:46.036000 audit: BPF prog-id=11 op=LOAD
May 15 00:55:46.036000 audit: BPF prog-id=11 op=UNLOAD
May 15 00:55:46.065000 audit[919]: AVC avc: denied { associate } for pid=919 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 15 00:55:46.065000 audit[919]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=902 pid=919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:55:46.065000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 15 00:55:46.067000 audit[919]: AVC avc: denied { associate } for pid=919 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 15 00:55:46.067000 audit[919]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879a9 a2=1ed a3=0 items=2 ppid=902 pid=919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:55:46.067000 audit: CWD cwd="/"
May 15 00:55:46.067000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 00:55:46.067000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 15 00:55:46.067000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 15 00:55:48.382000 audit: BPF prog-id=12 op=LOAD
May 15 00:55:48.382000 audit: BPF prog-id=3 op=UNLOAD
May 15 00:55:48.382000 audit: BPF prog-id=13 op=LOAD
May 15 00:55:48.382000 audit: BPF prog-id=14 op=LOAD
May 15 00:55:48.382000 audit: BPF prog-id=4 op=UNLOAD
May 15 00:55:48.382000 audit: BPF prog-id=5 op=UNLOAD
May 15 00:55:48.383000 audit: BPF prog-id=15 op=LOAD
May 15 00:55:48.383000 audit: BPF prog-id=12 op=UNLOAD
May 15 00:55:48.383000 audit: BPF prog-id=16 op=LOAD
May 15 00:55:48.383000 audit: BPF prog-id=17 op=LOAD
May 15 00:55:48.383000 audit: BPF prog-id=13 op=UNLOAD
May 15 00:55:48.383000 audit: BPF prog-id=14 op=UNLOAD
May 15 00:55:48.384000 audit: BPF prog-id=18 op=LOAD
May 15 00:55:48.384000 audit: BPF prog-id=15 op=UNLOAD
May 15 00:55:48.384000 audit: BPF prog-id=19 op=LOAD
May 15 00:55:48.384000 audit: BPF prog-id=20 op=LOAD
May 15 00:55:48.384000
audit: BPF prog-id=16 op=UNLOAD May 15 00:55:48.384000 audit: BPF prog-id=17 op=UNLOAD May 15 00:55:48.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.400000 audit: BPF prog-id=18 op=UNLOAD May 15 00:55:48.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:48.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.542000 audit: BPF prog-id=21 op=LOAD May 15 00:55:48.542000 audit: BPF prog-id=22 op=LOAD May 15 00:55:48.542000 audit: BPF prog-id=23 op=LOAD May 15 00:55:48.542000 audit: BPF prog-id=19 op=UNLOAD May 15 00:55:48.542000 audit: BPF prog-id=20 op=UNLOAD May 15 00:55:48.561000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 00:55:48.561000 audit[996]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca9def7c0 a2=4000 a3=7ffca9def85c items=0 ppid=1 pid=996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:48.561000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 00:55:46.064825 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:48.381418 systemd[1]: Queued start job for default target multi-user.target. 
May 15 00:55:46.065101 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:48.381429 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 00:55:46.065130 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:48.386070 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:55:46.065162 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 00:55:46.065173 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 00:55:46.065205 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 00:55:46.065219 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 00:55:48.567157 systemd[1]: Stopped verity-setup.service. 
May 15 00:55:46.065443 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 00:55:46.065482 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:46.065496 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:46.065846 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 00:55:46.065885 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 00:55:48.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:46.065906 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 15 00:55:46.065924 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 00:55:46.065943 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 15 00:55:46.065969 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 00:55:48.027965 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:48.028294 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:48.028455 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:48.028692 /usr/lib/systemd/system-generators/torcx-generator[919]: 
time="2025-05-15T00:55:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:48.028777 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 00:55:48.028861 /usr/lib/systemd/system-generators/torcx-generator[919]: time="2025-05-15T00:55:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 00:55:48.569756 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.572740 systemd[1]: Started systemd-journald.service. May 15 00:55:48.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.573369 systemd[1]: Mounted dev-hugepages.mount. May 15 00:55:48.574259 systemd[1]: Mounted dev-mqueue.mount. May 15 00:55:48.575104 systemd[1]: Mounted media.mount. May 15 00:55:48.575907 systemd[1]: Mounted sys-kernel-debug.mount. May 15 00:55:48.576818 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 00:55:48.577744 systemd[1]: Mounted tmp.mount. May 15 00:55:48.578751 systemd[1]: Finished flatcar-tmpfiles.service. May 15 00:55:48.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 15 00:55:48.579889 systemd[1]: Finished kmod-static-nodes.service. May 15 00:55:48.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.581048 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:55:48.581219 systemd[1]: Finished modprobe@configfs.service. May 15 00:55:48.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.582407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:48.582599 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:48.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.583681 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:55:48.583889 systemd[1]: Finished modprobe@drm.service. May 15 00:55:48.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:48.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.585097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:48.585279 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:48.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.586346 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:55:48.586515 systemd[1]: Finished modprobe@fuse.service. May 15 00:55:48.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.587546 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:48.587757 systemd[1]: Finished modprobe@loop.service. May 15 00:55:48.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:48.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.588847 systemd[1]: Finished systemd-modules-load.service. May 15 00:55:48.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.590108 systemd[1]: Finished systemd-network-generator.service. May 15 00:55:48.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.591314 systemd[1]: Finished systemd-remount-fs.service. May 15 00:55:48.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.592587 systemd[1]: Reached target network-pre.target. May 15 00:55:48.594598 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 00:55:48.596570 systemd[1]: Mounting sys-kernel-config.mount... May 15 00:55:48.597436 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:55:48.598989 systemd[1]: Starting systemd-hwdb-update.service... May 15 00:55:48.601007 systemd[1]: Starting systemd-journal-flush.service... May 15 00:55:48.604593 systemd-journald[996]: Time spent on flushing to /var/log/journal/8252eaeca39e4c34bbb2f03ef49632a3 is 14.120ms for 1108 entries. 
May 15 00:55:48.604593 systemd-journald[996]: System Journal (/var/log/journal/8252eaeca39e4c34bbb2f03ef49632a3) is 8.0M, max 195.6M, 187.6M free. May 15 00:55:48.974351 systemd-journald[996]: Received client request to flush runtime journal. May 15 00:55:48.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.601967 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:48.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.603067 systemd[1]: Starting systemd-random-seed.service... May 15 00:55:48.604032 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:48.612582 systemd[1]: Starting systemd-sysctl.service... May 15 00:55:48.975429 udevadm[1022]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 00:55:48.642551 systemd[1]: Starting systemd-sysusers.service... May 15 00:55:48.647340 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 00:55:48.648328 systemd[1]: Mounted sys-kernel-config.mount. May 15 00:55:48.676024 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:55:48.684532 systemd[1]: Starting systemd-udev-settle.service... May 15 00:55:48.694189 systemd[1]: Finished systemd-sysctl.service. May 15 00:55:48.838269 systemd[1]: Finished systemd-sysusers.service. May 15 00:55:48.840183 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 00:55:48.855784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 00:55:48.973859 systemd[1]: Finished systemd-random-seed.service. May 15 00:55:48.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.975228 systemd[1]: Finished systemd-journal-flush.service. May 15 00:55:48.976419 systemd[1]: Reached target first-boot-complete.target. May 15 00:55:49.443328 systemd[1]: Finished systemd-hwdb-update.service. May 15 00:55:49.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.444000 audit: BPF prog-id=24 op=LOAD May 15 00:55:49.446000 audit: BPF prog-id=25 op=LOAD May 15 00:55:49.447000 audit: BPF prog-id=7 op=UNLOAD May 15 00:55:49.448000 audit: BPF prog-id=8 op=UNLOAD May 15 00:55:49.450389 systemd[1]: Starting systemd-udevd.service... May 15 00:55:49.468785 systemd-udevd[1027]: Using default interface naming scheme 'v252'. May 15 00:55:49.489037 systemd[1]: Started systemd-udevd.service. 
May 15 00:55:49.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.490000 audit: BPF prog-id=26 op=LOAD May 15 00:55:49.492185 systemd[1]: Starting systemd-networkd.service... May 15 00:55:49.495000 audit: BPF prog-id=27 op=LOAD May 15 00:55:49.495000 audit: BPF prog-id=28 op=LOAD May 15 00:55:49.495000 audit: BPF prog-id=29 op=LOAD May 15 00:55:49.496624 systemd[1]: Starting systemd-userdbd.service... May 15 00:55:49.521525 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 15 00:55:49.528065 systemd[1]: Started systemd-userdbd.service. May 15 00:55:49.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.544822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 15 00:55:49.554735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 00:55:49.560796 kernel: ACPI: button: Power Button [PWRF] May 15 00:55:49.562000 audit[1030]: AVC avc: denied { confidentiality } for pid=1030 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 00:55:49.562000 audit[1030]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5648c7d163a0 a1=338ac a2=7f779fdbabc5 a3=5 items=110 ppid=1027 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:49.562000 audit: CWD cwd="/" May 15 00:55:49.562000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=1 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=2 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=3 name=(null) inode=14757 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=4 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=5 name=(null) inode=14758 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=6 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=7 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=8 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=9 name=(null) inode=14760 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=10 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=11 name=(null) inode=14761 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=12 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=13 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=14 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=15 name=(null) inode=14763 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=16 name=(null) inode=14759 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=17 name=(null) inode=14764 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=18 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=19 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=20 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=21 name=(null) inode=14766 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=22 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=23 name=(null) inode=14767 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=24 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=25 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=26 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=27 name=(null) inode=14769 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=28 name=(null) inode=14765 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=29 name=(null) inode=14770 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=30 name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=31 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=32 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:49.562000 audit: PATH item=33 name=(null) inode=14772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=34 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=35 name=(null) inode=14773 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=36 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=37 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=38 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=39 name=(null) inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=40 name=(null) inode=14771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=41 name=(null) inode=14776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=42 
name=(null) inode=14756 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=43 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=44 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=45 name=(null) inode=14778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=46 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=47 name=(null) inode=14779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=48 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=49 name=(null) inode=14780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=50 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=51 name=(null) inode=14781 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=52 name=(null) inode=14777 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=53 name=(null) inode=14782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=55 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=56 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=57 name=(null) inode=14784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=58 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=59 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=60 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=61 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=62 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=63 name=(null) inode=14787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=64 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=65 name=(null) inode=14788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=66 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=67 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=68 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=69 name=(null) inode=14790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=70 name=(null) inode=14786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=71 name=(null) inode=14791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=72 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=73 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=74 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=75 name=(null) inode=14793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=76 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=77 name=(null) inode=14794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=78 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=79 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=80 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=81 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=82 name=(null) inode=14792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=83 name=(null) inode=14797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=84 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=85 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=86 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=87 name=(null) inode=14799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:49.562000 audit: PATH item=88 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=89 name=(null) inode=14800 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=90 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=91 name=(null) inode=14801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=92 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=93 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=94 name=(null) inode=14798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=95 name=(null) inode=14803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=96 name=(null) inode=14783 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=97 
name=(null) inode=14804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=98 name=(null) inode=14804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=99 name=(null) inode=14805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=100 name=(null) inode=14804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=101 name=(null) inode=14806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=102 name=(null) inode=14804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=103 name=(null) inode=14807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=104 name=(null) inode=14804 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=105 name=(null) inode=14808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=106 name=(null) inode=14804 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=107 name=(null) inode=14809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PATH item=109 name=(null) inode=14810 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:49.562000 audit: PROCTITLE proctitle="(udev-worker)" May 15 00:55:49.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.570578 systemd-networkd[1035]: lo: Link UP May 15 00:55:49.570584 systemd-networkd[1035]: lo: Gained carrier May 15 00:55:49.570931 systemd-networkd[1035]: Enumeration completed May 15 00:55:49.571001 systemd[1]: Started systemd-networkd.service. May 15 00:55:49.572946 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 15 00:55:49.575826 systemd-networkd[1035]: eth0: Link UP May 15 00:55:49.575832 systemd-networkd[1035]: eth0: Gained carrier May 15 00:55:49.589756 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 00:55:49.602040 systemd-networkd[1035]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:55:49.604738 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:55:49.612505 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 00:55:49.634128 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 00:55:49.634257 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 00:55:49.685073 kernel: kvm: Nested Virtualization enabled May 15 00:55:49.685270 kernel: SVM: kvm: Nested Paging enabled May 15 00:55:49.685300 kernel: SVM: Virtual VMLOAD VMSAVE supported May 15 00:55:49.685323 kernel: SVM: Virtual GIF supported May 15 00:55:49.700744 kernel: EDAC MC: Ver: 3.0.0 May 15 00:55:49.728114 systemd[1]: Finished systemd-udev-settle.service. May 15 00:55:49.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.730144 systemd[1]: Starting lvm2-activation-early.service... May 15 00:55:49.737670 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:49.772473 systemd[1]: Finished lvm2-activation-early.service. May 15 00:55:49.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.773534 systemd[1]: Reached target cryptsetup.target. May 15 00:55:49.775424 systemd[1]: Starting lvm2-activation.service... 
May 15 00:55:49.778711 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:49.801314 systemd[1]: Finished lvm2-activation.service. May 15 00:55:49.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.802262 systemd[1]: Reached target local-fs-pre.target. May 15 00:55:49.803153 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:55:49.803181 systemd[1]: Reached target local-fs.target. May 15 00:55:49.804031 systemd[1]: Reached target machines.target. May 15 00:55:49.805659 systemd[1]: Starting ldconfig.service... May 15 00:55:49.806709 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:49.806775 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:49.807732 systemd[1]: Starting systemd-boot-update.service... May 15 00:55:49.809405 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 00:55:49.811573 systemd[1]: Starting systemd-machine-id-commit.service... May 15 00:55:49.813323 systemd[1]: Starting systemd-sysext.service... May 15 00:55:49.814374 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) May 15 00:55:49.815285 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 00:55:49.819304 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
May 15 00:55:49.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.828642 systemd[1]: Unmounting usr-share-oem.mount... May 15 00:55:49.832513 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 00:55:49.832655 systemd[1]: Unmounted usr-share-oem.mount. May 15 00:55:49.841748 kernel: loop0: detected capacity change from 0 to 218376 May 15 00:55:49.850153 systemd-fsck[1072]: fsck.fat 4.2 (2021-01-31) May 15 00:55:49.850153 systemd-fsck[1072]: /dev/vda1: 790 files, 120690/258078 clusters May 15 00:55:49.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:49.851458 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 00:55:49.855660 systemd[1]: Mounting boot.mount... May 15 00:55:50.096439 systemd[1]: Mounted boot.mount. May 15 00:55:50.102743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:55:50.107023 systemd[1]: Finished systemd-boot-update.service. May 15 00:55:50.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.118742 kernel: loop1: detected capacity change from 0 to 218376 May 15 00:55:50.122953 (sd-sysext)[1078]: Using extensions 'kubernetes'. May 15 00:55:50.123387 (sd-sysext)[1078]: Merged extensions into '/usr'. May 15 00:55:50.141716 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 15 00:55:50.143274 systemd[1]: Mounting usr-share-oem.mount... May 15 00:55:50.144427 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:50.160090 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:50.162780 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:50.164799 systemd[1]: Starting modprobe@loop.service... May 15 00:55:50.165858 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:50.165982 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:50.166103 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.167933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:55:50.168954 systemd[1]: Finished systemd-machine-id-commit.service. May 15 00:55:50.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.170449 systemd[1]: Mounted usr-share-oem.mount. May 15 00:55:50.171630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:50.171771 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:50.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:50.173198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:50.173303 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:50.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.174820 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:50.174939 systemd[1]: Finished modprobe@loop.service. May 15 00:55:50.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.176375 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:50.176474 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:50.177365 systemd[1]: Finished systemd-sysext.service. May 15 00:55:50.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.179448 systemd[1]: Starting ensure-sysext.service... 
May 15 00:55:50.181228 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 00:55:50.187441 systemd[1]: Reloading. May 15 00:55:50.194290 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 00:55:50.195183 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:55:50.196771 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:55:50.208349 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:55:50.258607 /usr/lib/systemd/system-generators/torcx-generator[1106]: time="2025-05-15T00:55:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:50.259023 /usr/lib/systemd/system-generators/torcx-generator[1106]: time="2025-05-15T00:55:50Z" level=info msg="torcx already run" May 15 00:55:50.324881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:55:50.324899 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:55:50.341596 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 15 00:55:50.390000 audit: BPF prog-id=30 op=LOAD May 15 00:55:50.391854 kernel: kauditd_printk_skb: 247 callbacks suppressed May 15 00:55:50.391901 kernel: audit: type=1334 audit(1747270550.390:170): prog-id=30 op=LOAD May 15 00:55:50.390000 audit: BPF prog-id=27 op=UNLOAD May 15 00:55:50.393853 kernel: audit: type=1334 audit(1747270550.390:171): prog-id=27 op=UNLOAD May 15 00:55:50.393875 kernel: audit: type=1334 audit(1747270550.391:172): prog-id=31 op=LOAD May 15 00:55:50.391000 audit: BPF prog-id=31 op=LOAD May 15 00:55:50.394840 kernel: audit: type=1334 audit(1747270550.393:173): prog-id=32 op=LOAD May 15 00:55:50.393000 audit: BPF prog-id=32 op=LOAD May 15 00:55:50.395814 kernel: audit: type=1334 audit(1747270550.393:174): prog-id=28 op=UNLOAD May 15 00:55:50.393000 audit: BPF prog-id=28 op=UNLOAD May 15 00:55:50.396829 kernel: audit: type=1334 audit(1747270550.393:175): prog-id=29 op=UNLOAD May 15 00:55:50.393000 audit: BPF prog-id=29 op=UNLOAD May 15 00:55:50.397924 kernel: audit: type=1334 audit(1747270550.395:176): prog-id=33 op=LOAD May 15 00:55:50.395000 audit: BPF prog-id=33 op=LOAD May 15 00:55:50.397000 audit: BPF prog-id=34 op=LOAD May 15 00:55:50.399894 kernel: audit: type=1334 audit(1747270550.397:177): prog-id=34 op=LOAD May 15 00:55:50.399924 kernel: audit: type=1334 audit(1747270550.397:178): prog-id=24 op=UNLOAD May 15 00:55:50.397000 audit: BPF prog-id=24 op=UNLOAD May 15 00:55:50.400911 kernel: audit: type=1334 audit(1747270550.397:179): prog-id=25 op=UNLOAD May 15 00:55:50.397000 audit: BPF prog-id=25 op=UNLOAD May 15 00:55:50.399000 audit: BPF prog-id=35 op=LOAD May 15 00:55:50.399000 audit: BPF prog-id=26 op=UNLOAD May 15 00:55:50.401000 audit: BPF prog-id=36 op=LOAD May 15 00:55:50.401000 audit: BPF prog-id=21 op=UNLOAD May 15 00:55:50.401000 audit: BPF prog-id=37 op=LOAD May 15 00:55:50.401000 audit: BPF prog-id=38 op=LOAD May 15 00:55:50.401000 audit: BPF prog-id=22 op=UNLOAD May 15 00:55:50.401000 audit: BPF prog-id=23 op=UNLOAD 
May 15 00:55:50.403775 systemd[1]: Finished ldconfig.service. May 15 00:55:50.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.405504 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 00:55:50.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.409054 systemd[1]: Starting audit-rules.service... May 15 00:55:50.410656 systemd[1]: Starting clean-ca-certificates.service... May 15 00:55:50.412528 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 00:55:50.413000 audit: BPF prog-id=39 op=LOAD May 15 00:55:50.415115 systemd[1]: Starting systemd-resolved.service... May 15 00:55:50.415000 audit: BPF prog-id=40 op=LOAD May 15 00:55:50.417317 systemd[1]: Starting systemd-timesyncd.service... May 15 00:55:50.419127 systemd[1]: Starting systemd-update-utmp.service... May 15 00:55:50.420556 systemd[1]: Finished clean-ca-certificates.service. May 15 00:55:50.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.422000 audit[1159]: SYSTEM_BOOT pid=1159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 00:55:50.426335 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.426574 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 15 00:55:50.427956 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:50.429814 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:50.432029 systemd[1]: Starting modprobe@loop.service... May 15 00:55:50.432995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:50.433284 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:50.433586 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:50.433849 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.435544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:50.435778 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:50.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.437484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:50.437576 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:50.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:50.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.439144 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:50.439357 systemd[1]: Finished modprobe@loop.service. May 15 00:55:50.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.440771 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:50.441643 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:50.442040 systemd[1]: Finished systemd-update-utmp.service. May 15 00:55:50.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.445339 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.445507 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:50.446641 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:50.448438 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:50.450154 systemd[1]: Starting modprobe@loop.service... 
May 15 00:55:50.450928 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:50.451022 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:50.451103 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:50.451166 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:50.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:50.455000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 00:55:50.455000 audit[1174]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc4440f470 a2=420 a3=0 items=0 ppid=1148 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:50.455000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 00:55:50.457030 augenrules[1174]: No rules May 15 00:55:50.452043 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 00:55:50.453359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:50.453452 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:50.454702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:50.454816 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:50.456172 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:50.456265 systemd[1]: Finished modprobe@loop.service. May 15 00:55:50.457735 systemd[1]: Finished audit-rules.service. May 15 00:55:50.458923 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:50.459006 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:50.460013 systemd[1]: Starting systemd-update-done.service... May 15 00:55:50.463090 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.463273 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:50.464920 systemd[1]: Starting modprobe@dm_mod.service... 
May 15 00:55:50.466874 systemd[1]: Starting modprobe@drm.service... May 15 00:55:50.468518 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:50.470331 systemd[1]: Starting modprobe@loop.service... May 15 00:55:50.471921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:50.472079 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:50.473621 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 00:55:50.474619 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:50.474737 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:50.475811 systemd[1]: Finished systemd-update-done.service. May 15 00:55:50.477026 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:50.477138 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:50.478335 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:55:50.478447 systemd[1]: Finished modprobe@drm.service. May 15 00:55:50.479516 systemd[1]: Started systemd-timesyncd.service. May 15 00:55:51.329174 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:55:51.329433 systemd-timesyncd[1158]: Initial clock synchronization to Thu 2025-05-15 00:55:51.329103 UTC. May 15 00:55:51.330355 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:51.330465 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:51.331632 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:51.331742 systemd[1]: Finished modprobe@loop.service. May 15 00:55:51.333195 systemd[1]: Reached target time-set.target. 
May 15 00:55:51.334096 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:51.334136 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:51.334407 systemd[1]: Finished ensure-sysext.service. May 15 00:55:51.334759 systemd-resolved[1154]: Positive Trust Anchors: May 15 00:55:51.334781 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:55:51.334814 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 00:55:51.346083 systemd-resolved[1154]: Defaulting to hostname 'linux'. May 15 00:55:51.347561 systemd[1]: Started systemd-resolved.service. May 15 00:55:51.348532 systemd[1]: Reached target network.target. May 15 00:55:51.349423 systemd[1]: Reached target nss-lookup.target. May 15 00:55:51.350310 systemd[1]: Reached target sysinit.target. May 15 00:55:51.351203 systemd[1]: Started motdgen.path. May 15 00:55:51.351956 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 00:55:51.353255 systemd[1]: Started logrotate.timer. May 15 00:55:51.354096 systemd[1]: Started mdadm.timer. May 15 00:55:51.354828 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 00:55:51.355751 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:55:51.355782 systemd[1]: Reached target paths.target. 
May 15 00:55:51.356599 systemd[1]: Reached target timers.target. May 15 00:55:51.357732 systemd[1]: Listening on dbus.socket. May 15 00:55:51.359497 systemd[1]: Starting docker.socket... May 15 00:55:51.362231 systemd[1]: Listening on sshd.socket. May 15 00:55:51.363089 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:51.363461 systemd[1]: Listening on docker.socket. May 15 00:55:51.364335 systemd[1]: Reached target sockets.target. May 15 00:55:51.365170 systemd[1]: Reached target basic.target. May 15 00:55:51.366006 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 00:55:51.366031 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 00:55:51.366887 systemd[1]: Starting containerd.service... May 15 00:55:51.368429 systemd[1]: Starting dbus.service... May 15 00:55:51.370028 systemd[1]: Starting enable-oem-cloudinit.service... May 15 00:55:51.371682 systemd[1]: Starting extend-filesystems.service... May 15 00:55:51.372819 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 00:55:51.374018 jq[1190]: false May 15 00:55:51.373858 systemd[1]: Starting motdgen.service... May 15 00:55:51.375396 systemd[1]: Starting prepare-helm.service... May 15 00:55:51.377311 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 00:55:51.379784 systemd[1]: Starting sshd-keygen.service... 
May 15 00:55:51.383244 extend-filesystems[1191]: Found loop1 May 15 00:55:51.384389 extend-filesystems[1191]: Found sr0 May 15 00:55:51.385230 extend-filesystems[1191]: Found vda May 15 00:55:51.385230 extend-filesystems[1191]: Found vda1 May 15 00:55:51.385230 extend-filesystems[1191]: Found vda2 May 15 00:55:51.385230 extend-filesystems[1191]: Found vda3 May 15 00:55:51.385230 extend-filesystems[1191]: Found usr May 15 00:55:51.385230 extend-filesystems[1191]: Found vda4 May 15 00:55:51.385230 extend-filesystems[1191]: Found vda6 May 15 00:55:51.385230 extend-filesystems[1191]: Found vda7 May 15 00:55:51.394094 extend-filesystems[1191]: Found vda9 May 15 00:55:51.394094 extend-filesystems[1191]: Checking size of /dev/vda9 May 15 00:55:51.392183 dbus-daemon[1189]: [system] SELinux support is enabled May 15 00:55:51.392061 systemd[1]: Starting systemd-logind.service... May 15 00:55:51.394306 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:51.394362 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:55:51.394819 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:55:51.395621 systemd[1]: Starting update-engine.service... May 15 00:55:51.398025 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 00:55:51.400321 systemd[1]: Started dbus.service. May 15 00:55:51.406020 jq[1211]: true May 15 00:55:51.403657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:55:51.403835 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 00:55:51.404156 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:55:51.404800 systemd[1]: Finished motdgen.service. 
May 15 00:55:51.409069 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:55:51.409256 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 00:55:51.418745 extend-filesystems[1191]: Resized partition /dev/vda9 May 15 00:55:51.414488 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:55:51.460008 jq[1217]: true May 15 00:55:51.414513 systemd[1]: Reached target system-config.target. May 15 00:55:51.415578 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:55:51.415601 systemd[1]: Reached target user-config.target. May 15 00:55:51.464091 extend-filesystems[1216]: resize2fs 1.46.5 (30-Dec-2021) May 15 00:55:51.469021 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:55:51.479372 tar[1215]: linux-amd64/LICENSE May 15 00:55:51.479657 tar[1215]: linux-amd64/helm May 15 00:55:51.525702 update_engine[1210]: I0515 00:55:51.498773 1210 main.cc:92] Flatcar Update Engine starting May 15 00:55:51.525702 update_engine[1210]: I0515 00:55:51.501053 1210 update_check_scheduler.cc:74] Next update check in 7m19s May 15 00:55:51.501016 systemd[1]: Started update-engine.service. May 15 00:55:51.516144 systemd[1]: Started locksmithd.service. May 15 00:55:51.536981 env[1218]: time="2025-05-15T00:55:51.536914801Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 00:55:51.539015 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:55:51.556547 env[1218]: time="2025-05-15T00:55:51.556503574Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 00:55:51.566822 env[1218]: time="2025-05-15T00:55:51.566803213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.567125 systemd-logind[1205]: Watching system buttons on /dev/input/event1 (Power Button) May 15 00:55:51.567149 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 00:55:51.567591 extend-filesystems[1216]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:55:51.567591 extend-filesystems[1216]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:55:51.567591 extend-filesystems[1216]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:55:51.575523 extend-filesystems[1191]: Resized filesystem in /dev/vda9 May 15 00:55:51.576566 bash[1236]: Updated "/home/core/.ssh/authorized_keys" May 15 00:55:51.568298 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568626884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568649166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568930113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568956272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568969947Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.568978824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.569084562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.569289346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.569399613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:51.576692 env[1218]: time="2025-05-15T00:55:51.569412387Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:55:51.569042 systemd[1]: Finished extend-filesystems.service. 
May 15 00:55:51.577038 env[1218]: time="2025-05-15T00:55:51.569450649Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 00:55:51.577038 env[1218]: time="2025-05-15T00:55:51.569460167Z" level=info msg="metadata content store policy set" policy=shared May 15 00:55:51.569796 systemd-logind[1205]: New seat seat0. May 15 00:55:51.573853 systemd[1]: Started systemd-logind.service. May 15 00:55:51.577584 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 00:55:51.580185 env[1218]: time="2025-05-15T00:55:51.580076981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:55:51.580185 env[1218]: time="2025-05-15T00:55:51.580144327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:55:51.580185 env[1218]: time="2025-05-15T00:55:51.580164906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:55:51.580272 env[1218]: time="2025-05-15T00:55:51.580216483Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580272 env[1218]: time="2025-05-15T00:55:51.580239285Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580272 env[1218]: time="2025-05-15T00:55:51.580256989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580341 env[1218]: time="2025-05-15T00:55:51.580273680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580341 env[1218]: time="2025-05-15T00:55:51.580292555Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 May 15 00:55:51.580341 env[1218]: time="2025-05-15T00:55:51.580304999Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580341 env[1218]: time="2025-05-15T00:55:51.580320648Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580341 env[1218]: time="2025-05-15T00:55:51.580332660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:55:51.580441 env[1218]: time="2025-05-15T00:55:51.580343411Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:55:51.580490 env[1218]: time="2025-05-15T00:55:51.580461412Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:55:51.580579 env[1218]: time="2025-05-15T00:55:51.580557542Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580784839Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580814955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580827759Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580871070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580882372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580893412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580904203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580915734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580932125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580944318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580954307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.580965978Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.581076666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.581091093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585022 env[1218]: time="2025-05-15T00:55:51.581101963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:55:51.582847 systemd[1]: Started containerd.service. May 15 00:55:51.585420 env[1218]: time="2025-05-15T00:55:51.581121159Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 15 00:55:51.585420 env[1218]: time="2025-05-15T00:55:51.581135536Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 00:55:51.585420 env[1218]: time="2025-05-15T00:55:51.581144894Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:55:51.585420 env[1218]: time="2025-05-15T00:55:51.581164841Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 00:55:51.585420 env[1218]: time="2025-05-15T00:55:51.581201210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.581380877Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.581430680Z" level=info msg="Connect containerd service" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.581462460Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.581953080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582209401Z" level=info msg="Start subscribing containerd event" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582252031Z" level=info msg="Start recovering state" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582302295Z" level=info msg="Start event monitor" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582320930Z" level=info msg="Start snapshots syncer" May 15 
00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582328444Z" level=info msg="Start cni network conf syncer for default" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582335407Z" level=info msg="Start streaming server" May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582588321Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:55:51.585516 env[1218]: time="2025-05-15T00:55:51.582655748Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:55:51.639716 env[1218]: time="2025-05-15T00:55:51.600144131Z" level=info msg="containerd successfully booted in 0.080199s" May 15 00:55:51.670363 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:55:51.888217 systemd-networkd[1035]: eth0: Gained IPv6LL May 15 00:55:51.895919 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 00:55:51.897338 systemd[1]: Reached target network-online.target. May 15 00:55:51.899864 systemd[1]: Starting kubelet.service... May 15 00:55:52.023042 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:55:52.073111 systemd[1]: Finished sshd-keygen.service. May 15 00:55:52.076893 systemd[1]: Starting issuegen.service... May 15 00:55:52.082289 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:55:52.082399 systemd[1]: Finished issuegen.service. May 15 00:55:52.084293 systemd[1]: Starting systemd-user-sessions.service... May 15 00:55:52.093564 systemd[1]: Finished systemd-user-sessions.service. May 15 00:55:52.096104 systemd[1]: Started getty@tty1.service. May 15 00:55:52.100621 systemd[1]: Started serial-getty@ttyS0.service. May 15 00:55:52.101685 systemd[1]: Reached target getty.target. May 15 00:55:52.139575 tar[1215]: linux-amd64/README.md May 15 00:55:52.143719 systemd[1]: Finished prepare-helm.service. May 15 00:55:53.020775 systemd[1]: Started kubelet.service. 
May 15 00:55:53.021936 systemd[1]: Reached target multi-user.target. May 15 00:55:53.023760 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 00:55:53.029717 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 00:55:53.029829 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 00:55:53.030934 systemd[1]: Startup finished in 612ms (kernel) + 4.912s (initrd) + 6.564s (userspace) = 12.088s. May 15 00:55:53.564496 kubelet[1270]: E0515 00:55:53.564414 1270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:55:53.566570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:55:53.566687 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:55:53.566884 systemd[1]: kubelet.service: Consumed 1.562s CPU time. May 15 00:55:53.870564 systemd[1]: Created slice system-sshd.slice. May 15 00:55:53.871618 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:36846.service. May 15 00:55:53.905699 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.907365 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.916904 systemd-logind[1205]: New session 1 of user core. May 15 00:55:53.917934 systemd[1]: Created slice user-500.slice. May 15 00:55:53.918927 systemd[1]: Starting user-runtime-dir@500.service... May 15 00:55:53.928090 systemd[1]: Finished user-runtime-dir@500.service. May 15 00:55:53.929221 systemd[1]: Starting user@500.service... 
May 15 00:55:53.931859 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:54.006806 systemd[1282]: Queued start job for default target default.target.
May 15 00:55:54.007285 systemd[1282]: Reached target paths.target.
May 15 00:55:54.007304 systemd[1282]: Reached target sockets.target.
May 15 00:55:54.007316 systemd[1282]: Reached target timers.target.
May 15 00:55:54.007326 systemd[1282]: Reached target basic.target.
May 15 00:55:54.007357 systemd[1282]: Reached target default.target.
May 15 00:55:54.007377 systemd[1282]: Startup finished in 69ms.
May 15 00:55:54.007441 systemd[1]: Started user@500.service.
May 15 00:55:54.008293 systemd[1]: Started session-1.scope.
May 15 00:55:54.059648 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:36854.service.
May 15 00:55:54.089971 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 36854 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:55:54.091371 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:54.094856 systemd-logind[1205]: New session 2 of user core.
May 15 00:55:54.095850 systemd[1]: Started session-2.scope.
May 15 00:55:54.147741 sshd[1291]: pam_unix(sshd:session): session closed for user core
May 15 00:55:54.150121 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:36854.service: Deactivated successfully.
May 15 00:55:54.150611 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:55:54.151065 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit.
May 15 00:55:54.151860 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:36868.service.
May 15 00:55:54.152490 systemd-logind[1205]: Removed session 2.
May 15 00:55:54.182340 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 36868 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:55:54.183405 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:54.186568 systemd-logind[1205]: New session 3 of user core.
May 15 00:55:54.187235 systemd[1]: Started session-3.scope.
May 15 00:55:54.235096 sshd[1297]: pam_unix(sshd:session): session closed for user core
May 15 00:55:54.237099 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:36868.service: Deactivated successfully.
May 15 00:55:54.237526 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:55:54.237980 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit.
May 15 00:55:54.238894 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:36870.service.
May 15 00:55:54.239501 systemd-logind[1205]: Removed session 3.
May 15 00:55:54.267129 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 36870 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:55:54.268176 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:54.271283 systemd-logind[1205]: New session 4 of user core.
May 15 00:55:54.272052 systemd[1]: Started session-4.scope.
May 15 00:55:54.326845 sshd[1303]: pam_unix(sshd:session): session closed for user core
May 15 00:55:54.330795 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:36870.service: Deactivated successfully.
May 15 00:55:54.331635 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:55:54.332230 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit.
May 15 00:55:54.333836 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:36880.service.
May 15 00:55:54.334873 systemd-logind[1205]: Removed session 4.
May 15 00:55:54.365023 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 36880 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:55:54.366003 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:55:54.369481 systemd-logind[1205]: New session 5 of user core.
May 15 00:55:54.370505 systemd[1]: Started session-5.scope.
May 15 00:55:54.425543 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:55:54.425730 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 00:55:54.460178 systemd[1]: Starting docker.service...
May 15 00:55:54.542015 env[1324]: time="2025-05-15T00:55:54.541932395Z" level=info msg="Starting up"
May 15 00:55:54.543076 env[1324]: time="2025-05-15T00:55:54.543040383Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 00:55:54.543076 env[1324]: time="2025-05-15T00:55:54.543058177Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 00:55:54.543165 env[1324]: time="2025-05-15T00:55:54.543079116Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 00:55:54.543165 env[1324]: time="2025-05-15T00:55:54.543091048Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 00:55:54.544884 env[1324]: time="2025-05-15T00:55:54.544844127Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 00:55:54.544884 env[1324]: time="2025-05-15T00:55:54.544875566Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 00:55:54.544957 env[1324]: time="2025-05-15T00:55:54.544899891Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 00:55:54.544957 env[1324]: time="2025-05-15T00:55:54.544909379Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 00:55:56.735023 env[1324]: time="2025-05-15T00:55:56.734951863Z" level=info msg="Loading containers: start."
May 15 00:55:56.853026 kernel: Initializing XFRM netlink socket
May 15 00:55:56.882582 env[1324]: time="2025-05-15T00:55:56.882537141Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 15 00:55:56.933267 systemd-networkd[1035]: docker0: Link UP
May 15 00:55:57.164581 env[1324]: time="2025-05-15T00:55:57.164465167Z" level=info msg="Loading containers: done."
May 15 00:55:57.212694 env[1324]: time="2025-05-15T00:55:57.212637404Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:55:57.212887 env[1324]: time="2025-05-15T00:55:57.212861495Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 15 00:55:57.213007 env[1324]: time="2025-05-15T00:55:57.212969637Z" level=info msg="Daemon has completed initialization"
May 15 00:55:57.236222 systemd[1]: Started docker.service.
May 15 00:55:57.240082 env[1324]: time="2025-05-15T00:55:57.240029905Z" level=info msg="API listen on /run/docker.sock"
May 15 00:55:57.988418 env[1218]: time="2025-05-15T00:55:57.988366591Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
May 15 00:55:58.632619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3792421889.mount: Deactivated successfully.
May 15 00:56:00.489632 env[1218]: time="2025-05-15T00:56:00.489564282Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:00.491600 env[1218]: time="2025-05-15T00:56:00.491540048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:00.493343 env[1218]: time="2025-05-15T00:56:00.493299970Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:00.495067 env[1218]: time="2025-05-15T00:56:00.495039913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:00.495725 env[1218]: time="2025-05-15T00:56:00.495682659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
May 15 00:56:00.496498 env[1218]: time="2025-05-15T00:56:00.496471669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
May 15 00:56:02.387381 env[1218]: time="2025-05-15T00:56:02.387313608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:02.389383 env[1218]: time="2025-05-15T00:56:02.389341211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:02.393215 env[1218]: time="2025-05-15T00:56:02.393174772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:02.394357 env[1218]: time="2025-05-15T00:56:02.394323146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:02.394942 env[1218]: time="2025-05-15T00:56:02.394906049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
May 15 00:56:02.395553 env[1218]: time="2025-05-15T00:56:02.395528156Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
May 15 00:56:03.696826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:56:03.697091 systemd[1]: Stopped kubelet.service.
May 15 00:56:03.697139 systemd[1]: kubelet.service: Consumed 1.562s CPU time.
May 15 00:56:03.698927 systemd[1]: Starting kubelet.service...
May 15 00:56:03.831918 systemd[1]: Started kubelet.service.
May 15 00:56:03.912281 kubelet[1460]: E0515 00:56:03.912220 1460 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:56:03.915085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:56:03.915200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:56:04.595017 env[1218]: time="2025-05-15T00:56:04.594944472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:04.596958 env[1218]: time="2025-05-15T00:56:04.596927051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:04.601187 env[1218]: time="2025-05-15T00:56:04.601146025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:04.601973 env[1218]: time="2025-05-15T00:56:04.601925737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:04.602762 env[1218]: time="2025-05-15T00:56:04.602719436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
May 15 00:56:04.603424 env[1218]: time="2025-05-15T00:56:04.603397518Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
May 15 00:56:06.701202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184798175.mount: Deactivated successfully.
May 15 00:56:07.924894 env[1218]: time="2025-05-15T00:56:07.924831649Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:07.926925 env[1218]: time="2025-05-15T00:56:07.926876454Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:07.929002 env[1218]: time="2025-05-15T00:56:07.928946357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:07.930734 env[1218]: time="2025-05-15T00:56:07.930694346Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:07.931193 env[1218]: time="2025-05-15T00:56:07.931156502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
May 15 00:56:07.931691 env[1218]: time="2025-05-15T00:56:07.931667992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 00:56:09.186847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577641039.mount: Deactivated successfully.
May 15 00:56:10.759507 env[1218]: time="2025-05-15T00:56:10.759438591Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:10.761733 env[1218]: time="2025-05-15T00:56:10.761683682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:10.763469 env[1218]: time="2025-05-15T00:56:10.763443493Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:10.766188 env[1218]: time="2025-05-15T00:56:10.766155280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:10.766900 env[1218]: time="2025-05-15T00:56:10.766858920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 15 00:56:10.767510 env[1218]: time="2025-05-15T00:56:10.767469054Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 00:56:11.398972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3763273117.mount: Deactivated successfully.
May 15 00:56:11.404589 env[1218]: time="2025-05-15T00:56:11.404548303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:11.406548 env[1218]: time="2025-05-15T00:56:11.406525862Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:11.408197 env[1218]: time="2025-05-15T00:56:11.408153585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:11.409605 env[1218]: time="2025-05-15T00:56:11.409543833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:11.409967 env[1218]: time="2025-05-15T00:56:11.409935227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 15 00:56:11.410405 env[1218]: time="2025-05-15T00:56:11.410378078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 15 00:56:11.994089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921491307.mount: Deactivated successfully.
May 15 00:56:13.946648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 00:56:13.946830 systemd[1]: Stopped kubelet.service.
May 15 00:56:13.948113 systemd[1]: Starting kubelet.service...
May 15 00:56:14.027076 systemd[1]: Started kubelet.service.
May 15 00:56:14.079050 kubelet[1471]: E0515 00:56:14.078982 1471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:56:14.080950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:56:14.081104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:56:15.675792 env[1218]: time="2025-05-15T00:56:15.675723396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:15.678011 env[1218]: time="2025-05-15T00:56:15.677959481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:15.680495 env[1218]: time="2025-05-15T00:56:15.680429914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:15.682471 env[1218]: time="2025-05-15T00:56:15.682417863Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:56:15.683652 env[1218]: time="2025-05-15T00:56:15.683606312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 15 00:56:18.640116 systemd[1]: Stopped kubelet.service.
May 15 00:56:18.642105 systemd[1]: Starting kubelet.service...
May 15 00:56:18.665445 systemd[1]: Reloading.
May 15 00:56:18.727141 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-05-15T00:56:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 15 00:56:18.727179 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-05-15T00:56:18Z" level=info msg="torcx already run"
May 15 00:56:19.132836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:56:19.132851 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:56:19.149943 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:56:19.224392 systemd[1]: Started kubelet.service.
May 15 00:56:19.225735 systemd[1]: Stopping kubelet.service...
May 15 00:56:19.226100 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:56:19.226250 systemd[1]: Stopped kubelet.service.
May 15 00:56:19.227557 systemd[1]: Starting kubelet.service...
May 15 00:56:19.303174 systemd[1]: Started kubelet.service.
May 15 00:56:19.355353 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:19.355353 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:56:19.355353 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:19.355759 kubelet[1573]: I0515 00:56:19.355401 1573 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:56:19.548083 kubelet[1573]: I0515 00:56:19.548043 1573 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:56:19.548083 kubelet[1573]: I0515 00:56:19.548069 1573 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:56:19.548367 kubelet[1573]: I0515 00:56:19.548333 1573 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:56:19.576254 kubelet[1573]: E0515 00:56:19.576219 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.578253 kubelet[1573]: I0515 00:56:19.578221 1573 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:56:19.586302 kubelet[1573]: E0515 00:56:19.586266 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:56:19.586302 kubelet[1573]: I0515 00:56:19.586290 1573 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:56:19.589580 kubelet[1573]: I0515 00:56:19.589566 1573 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:56:19.589811 kubelet[1573]: I0515 00:56:19.589782 1573 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:56:19.589981 kubelet[1573]: I0515 00:56:19.589807 1573 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:56:19.590089 kubelet[1573]: I0515 00:56:19.590002 1573 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:56:19.590089 kubelet[1573]: I0515 00:56:19.590013 1573 container_manager_linux.go:304] "Creating device plugin manager"
May 15 00:56:19.590156 kubelet[1573]: I0515 00:56:19.590144 1573 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:56:19.593841 kubelet[1573]: I0515 00:56:19.593821 1573 kubelet.go:446] "Attempting to sync node with API server"
May 15 00:56:19.593841 kubelet[1573]: I0515 00:56:19.593839 1573 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:56:19.593920 kubelet[1573]: I0515 00:56:19.593857 1573 kubelet.go:352] "Adding apiserver pod source"
May 15 00:56:19.593920 kubelet[1573]: I0515 00:56:19.593866 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:56:19.611048 kubelet[1573]: I0515 00:56:19.611023 1573 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 00:56:19.611403 kubelet[1573]: I0515 00:56:19.611385 1573 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:56:19.618910 kubelet[1573]: W0515 00:56:19.618856 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
May 15 00:56:19.618979 kubelet[1573]: E0515 00:56:19.618909 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.618979 kubelet[1573]: W0515 00:56:19.618959 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
May 15 00:56:19.619138 kubelet[1573]: E0515 00:56:19.618981 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.621971 kubelet[1573]: W0515 00:56:19.621945 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:56:19.623678 kubelet[1573]: I0515 00:56:19.623657 1573 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 00:56:19.623730 kubelet[1573]: I0515 00:56:19.623703 1573 server.go:1287] "Started kubelet"
May 15 00:56:19.624668 kubelet[1573]: I0515 00:56:19.624165 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:56:19.624668 kubelet[1573]: I0515 00:56:19.624512 1573 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:56:19.624668 kubelet[1573]: I0515 00:56:19.624569 1573 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:56:19.625525 kubelet[1573]: I0515 00:56:19.625499 1573 server.go:490] "Adding debug handlers to kubelet server"
May 15 00:56:19.626651 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 00:56:19.626793 kubelet[1573]: I0515 00:56:19.626778 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:56:19.627131 kubelet[1573]: I0515 00:56:19.626953 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:56:19.628577 kubelet[1573]: I0515 00:56:19.628564 1573 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 00:56:19.628803 kubelet[1573]: E0515 00:56:19.628788 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.629320 kubelet[1573]: I0515 00:56:19.629306 1573 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:56:19.629601 kubelet[1573]: I0515 00:56:19.629588 1573 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:56:19.630254 kubelet[1573]: W0515 00:56:19.630203 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
May 15 00:56:19.630254 kubelet[1573]: E0515 00:56:19.630253 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.631719 kubelet[1573]: E0515 00:56:19.630326 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms"
May 15 00:56:19.631719 kubelet[1573]: E0515 00:56:19.630188 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8d55b0f5c170 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:56:19.623674224 +0000 UTC m=+0.317165603,LastTimestamp:2025-05-15 00:56:19.623674224 +0000 UTC m=+0.317165603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:56:19.631719 kubelet[1573]: I0515 00:56:19.631389 1573 factory.go:221] Registration of the systemd container factory successfully
May 15 00:56:19.632095 kubelet[1573]: E0515 00:56:19.632077 1573 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:56:19.632773 kubelet[1573]: I0515 00:56:19.632749 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:56:19.633662 kubelet[1573]: I0515 00:56:19.633649 1573 factory.go:221] Registration of the containerd container factory successfully
May 15 00:56:19.641252 kubelet[1573]: I0515 00:56:19.641212 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:56:19.642308 kubelet[1573]: I0515 00:56:19.642290 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:56:19.642373 kubelet[1573]: I0515 00:56:19.642312 1573 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 00:56:19.642373 kubelet[1573]: I0515 00:56:19.642348 1573 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 00:56:19.642373 kubelet[1573]: I0515 00:56:19.642354 1573 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:56:19.642447 kubelet[1573]: E0515 00:56:19.642392 1573 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:56:19.642962 kubelet[1573]: I0515 00:56:19.642947 1573 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 00:56:19.643051 kubelet[1573]: I0515 00:56:19.642962 1573 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 00:56:19.643051 kubelet[1573]: I0515 00:56:19.642982 1573 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:56:19.654650 kubelet[1573]: W0515 00:56:19.654620 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
May 15 00:56:19.654783 kubelet[1573]: E0515 00:56:19.654661 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError"
May 15 00:56:19.730049 kubelet[1573]: E0515 00:56:19.729980 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.743206 kubelet[1573]: E0515 00:56:19.743166 1573 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:56:19.830936 kubelet[1573]: E0515 00:56:19.830830 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.832325 kubelet[1573]: E0515 00:56:19.832268 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms"
May 15 00:56:19.931652 kubelet[1573]: E0515 00:56:19.931621 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:19.943834 kubelet[1573]: E0515 00:56:19.943800 1573 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 15 00:56:20.032292 kubelet[1573]: E0515 00:56:20.032233 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:20.106834 kubelet[1573]: I0515 00:56:20.106759 1573 policy_none.go:49] "None policy: Start"
May 15 00:56:20.106834 kubelet[1573]: I0515 00:56:20.106789 1573 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:56:20.106834 kubelet[1573]: I0515 00:56:20.106802 1573 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:56:20.133174 kubelet[1573]: E0515 00:56:20.133150 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:56:20.151918 systemd[1]: Created slice kubepods.slice.
May 15 00:56:20.153426 kubelet[1573]: E0515 00:56:20.153332 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8d55b0f5c170 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:56:19.623674224 +0000 UTC m=+0.317165603,LastTimestamp:2025-05-15 00:56:19.623674224 +0000 UTC m=+0.317165603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:56:20.155554 systemd[1]: Created slice kubepods-burstable.slice. May 15 00:56:20.157733 systemd[1]: Created slice kubepods-besteffort.slice. May 15 00:56:20.168560 kubelet[1573]: I0515 00:56:20.168527 1573 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:56:20.168666 kubelet[1573]: I0515 00:56:20.168653 1573 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:56:20.168740 kubelet[1573]: I0515 00:56:20.168670 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:56:20.169069 kubelet[1573]: I0515 00:56:20.168850 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:56:20.169459 kubelet[1573]: E0515 00:56:20.169425 1573 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 00:56:20.169519 kubelet[1573]: E0515 00:56:20.169478 1573 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:56:20.233628 kubelet[1573]: E0515 00:56:20.233598 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" May 15 00:56:20.270681 kubelet[1573]: I0515 00:56:20.270663 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:56:20.270940 kubelet[1573]: E0515 00:56:20.270917 1573 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 15 00:56:20.349894 systemd[1]: Created slice kubepods-burstable-pod2912a870ca8bd8708d5701cfa27ed384.slice. May 15 00:56:20.372437 kubelet[1573]: E0515 00:56:20.372353 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:20.374897 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 15 00:56:20.376418 kubelet[1573]: E0515 00:56:20.376384 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:20.377935 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. 
May 15 00:56:20.379158 kubelet[1573]: E0515 00:56:20.379130 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:20.434636 kubelet[1573]: I0515 00:56:20.434573 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:20.434636 kubelet[1573]: I0515 00:56:20.434623 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:20.434636 kubelet[1573]: I0515 00:56:20.434643 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:20.434841 kubelet[1573]: I0515 00:56:20.434660 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:20.434841 kubelet[1573]: I0515 00:56:20.434716 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost" May 15 00:56:20.434841 kubelet[1573]: I0515 00:56:20.434783 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:20.434841 kubelet[1573]: I0515 00:56:20.434823 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:20.434973 kubelet[1573]: I0515 00:56:20.434845 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:56:20.434973 kubelet[1573]: I0515 00:56:20.434866 1573 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 00:56:20.472821 kubelet[1573]: I0515 00:56:20.472786 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:56:20.473249 
kubelet[1573]: E0515 00:56:20.473210 1573 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 15 00:56:20.558370 kubelet[1573]: W0515 00:56:20.558275 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 15 00:56:20.558370 kubelet[1573]: E0515 00:56:20.558362 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:20.673669 kubelet[1573]: E0515 00:56:20.673537 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:20.674432 env[1218]: time="2025-05-15T00:56:20.674357946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2912a870ca8bd8708d5701cfa27ed384,Namespace:kube-system,Attempt:0,}" May 15 00:56:20.677530 kubelet[1573]: E0515 00:56:20.677496 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:20.677869 env[1218]: time="2025-05-15T00:56:20.677841710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 15 00:56:20.680100 kubelet[1573]: E0515 00:56:20.680071 1573 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:20.680546 env[1218]: time="2025-05-15T00:56:20.680492362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 15 00:56:20.700872 kubelet[1573]: W0515 00:56:20.700820 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 15 00:56:20.700978 kubelet[1573]: E0515 00:56:20.700885 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:20.758111 kubelet[1573]: W0515 00:56:20.757983 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 15 00:56:20.758111 kubelet[1573]: E0515 00:56:20.758105 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:20.874802 kubelet[1573]: I0515 00:56:20.874782 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:56:20.875191 kubelet[1573]: E0515 00:56:20.875158 1573 
kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" May 15 00:56:21.016220 kubelet[1573]: W0515 00:56:21.016174 1573 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused May 15 00:56:21.016220 kubelet[1573]: E0515 00:56:21.016226 1573 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" May 15 00:56:21.034076 kubelet[1573]: E0515 00:56:21.034021 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s" May 15 00:56:21.112827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount784965443.mount: Deactivated successfully. 
May 15 00:56:21.119175 env[1218]: time="2025-05-15T00:56:21.119127617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.123619 env[1218]: time="2025-05-15T00:56:21.123585719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.125131 env[1218]: time="2025-05-15T00:56:21.125101602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.126030 env[1218]: time="2025-05-15T00:56:21.126001630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.127768 env[1218]: time="2025-05-15T00:56:21.127719442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.128697 env[1218]: time="2025-05-15T00:56:21.128672170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.130693 env[1218]: time="2025-05-15T00:56:21.130646954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.131985 env[1218]: time="2025-05-15T00:56:21.131957732Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.133588 env[1218]: time="2025-05-15T00:56:21.133553736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.134170 env[1218]: time="2025-05-15T00:56:21.134139875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.135379 env[1218]: time="2025-05-15T00:56:21.135348262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.136433 env[1218]: time="2025-05-15T00:56:21.136382772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:21.161154 env[1218]: time="2025-05-15T00:56:21.161090567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:21.161304 env[1218]: time="2025-05-15T00:56:21.161155228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:21.161304 env[1218]: time="2025-05-15T00:56:21.161176578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:21.161374 env[1218]: time="2025-05-15T00:56:21.161278850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f383eac0dd3664273ba98aba10cf916332338bb831858ef1978a09a4a516982 pid=1615 runtime=io.containerd.runc.v2 May 15 00:56:21.181898 env[1218]: time="2025-05-15T00:56:21.181114446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:21.181898 env[1218]: time="2025-05-15T00:56:21.181194436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:21.181898 env[1218]: time="2025-05-15T00:56:21.181216418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:21.182121 env[1218]: time="2025-05-15T00:56:21.181944323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff5752e46a0370b0efd17bc066de80274e4211f94e51cf044cbdd0f26405fea pid=1645 runtime=io.containerd.runc.v2 May 15 00:56:21.183348 env[1218]: time="2025-05-15T00:56:21.183103487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:21.183348 env[1218]: time="2025-05-15T00:56:21.183155124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:21.183348 env[1218]: time="2025-05-15T00:56:21.183179199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:21.183466 env[1218]: time="2025-05-15T00:56:21.183398030Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b0248b004fe507d81b9759f2d8f17c523323aa8410dd797cfe3590047076eae pid=1631 runtime=io.containerd.runc.v2 May 15 00:56:21.189102 systemd[1]: Started cri-containerd-7f383eac0dd3664273ba98aba10cf916332338bb831858ef1978a09a4a516982.scope. May 15 00:56:21.203674 systemd[1]: Started cri-containerd-4ff5752e46a0370b0efd17bc066de80274e4211f94e51cf044cbdd0f26405fea.scope. May 15 00:56:21.309683 systemd[1]: Started cri-containerd-4b0248b004fe507d81b9759f2d8f17c523323aa8410dd797cfe3590047076eae.scope. May 15 00:56:21.350781 env[1218]: time="2025-05-15T00:56:21.350736358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff5752e46a0370b0efd17bc066de80274e4211f94e51cf044cbdd0f26405fea\"" May 15 00:56:21.351927 kubelet[1573]: E0515 00:56:21.351896 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.353463 env[1218]: time="2025-05-15T00:56:21.353427136Z" level=info msg="CreateContainer within sandbox \"4ff5752e46a0370b0efd17bc066de80274e4211f94e51cf044cbdd0f26405fea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:56:21.358148 env[1218]: time="2025-05-15T00:56:21.358109969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f383eac0dd3664273ba98aba10cf916332338bb831858ef1978a09a4a516982\"" May 15 00:56:21.358778 kubelet[1573]: E0515 00:56:21.358744 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.360276 env[1218]: time="2025-05-15T00:56:21.360243120Z" level=info msg="CreateContainer within sandbox \"7f383eac0dd3664273ba98aba10cf916332338bb831858ef1978a09a4a516982\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:56:21.367222 env[1218]: time="2025-05-15T00:56:21.367173370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2912a870ca8bd8708d5701cfa27ed384,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b0248b004fe507d81b9759f2d8f17c523323aa8410dd797cfe3590047076eae\"" May 15 00:56:21.367857 kubelet[1573]: E0515 00:56:21.367831 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.369193 env[1218]: time="2025-05-15T00:56:21.369145709Z" level=info msg="CreateContainer within sandbox \"4b0248b004fe507d81b9759f2d8f17c523323aa8410dd797cfe3590047076eae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:56:21.372204 env[1218]: time="2025-05-15T00:56:21.372165764Z" level=info msg="CreateContainer within sandbox \"4ff5752e46a0370b0efd17bc066de80274e4211f94e51cf044cbdd0f26405fea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc0b9162621c97889e1c1d3346a92bf19b5b90f2c0cdd25eefd8463b274efd9f\"" May 15 00:56:21.372738 env[1218]: time="2025-05-15T00:56:21.372715114Z" level=info msg="StartContainer for \"dc0b9162621c97889e1c1d3346a92bf19b5b90f2c0cdd25eefd8463b274efd9f\"" May 15 00:56:21.381076 env[1218]: time="2025-05-15T00:56:21.381029770Z" level=info msg="CreateContainer within sandbox \"7f383eac0dd3664273ba98aba10cf916332338bb831858ef1978a09a4a516982\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"7933f3ffa2ff7a711f453074d9bf3896112153b203498579ad3fe3c1af5a5f52\"" May 15 00:56:21.381611 env[1218]: time="2025-05-15T00:56:21.381581896Z" level=info msg="StartContainer for \"7933f3ffa2ff7a711f453074d9bf3896112153b203498579ad3fe3c1af5a5f52\"" May 15 00:56:21.387346 env[1218]: time="2025-05-15T00:56:21.387311303Z" level=info msg="CreateContainer within sandbox \"4b0248b004fe507d81b9759f2d8f17c523323aa8410dd797cfe3590047076eae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f69443cf23ff5efa7af2720cb0f493f27be45bc06ef1bb30b0ba3bd510b4651\"" May 15 00:56:21.388297 systemd[1]: Started cri-containerd-dc0b9162621c97889e1c1d3346a92bf19b5b90f2c0cdd25eefd8463b274efd9f.scope. May 15 00:56:21.389266 env[1218]: time="2025-05-15T00:56:21.389245951Z" level=info msg="StartContainer for \"0f69443cf23ff5efa7af2720cb0f493f27be45bc06ef1bb30b0ba3bd510b4651\"" May 15 00:56:21.396553 systemd[1]: Started cri-containerd-7933f3ffa2ff7a711f453074d9bf3896112153b203498579ad3fe3c1af5a5f52.scope. May 15 00:56:21.407535 systemd[1]: Started cri-containerd-0f69443cf23ff5efa7af2720cb0f493f27be45bc06ef1bb30b0ba3bd510b4651.scope. 
May 15 00:56:21.520121 env[1218]: time="2025-05-15T00:56:21.520065802Z" level=info msg="StartContainer for \"dc0b9162621c97889e1c1d3346a92bf19b5b90f2c0cdd25eefd8463b274efd9f\" returns successfully" May 15 00:56:21.527195 env[1218]: time="2025-05-15T00:56:21.527144029Z" level=info msg="StartContainer for \"7933f3ffa2ff7a711f453074d9bf3896112153b203498579ad3fe3c1af5a5f52\" returns successfully" May 15 00:56:21.534271 env[1218]: time="2025-05-15T00:56:21.533150806Z" level=info msg="StartContainer for \"0f69443cf23ff5efa7af2720cb0f493f27be45bc06ef1bb30b0ba3bd510b4651\" returns successfully" May 15 00:56:21.650829 kubelet[1573]: E0515 00:56:21.650640 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:21.652149 kubelet[1573]: E0515 00:56:21.652055 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.654848 kubelet[1573]: E0515 00:56:21.654814 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:21.655103 kubelet[1573]: E0515 00:56:21.655021 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.657167 kubelet[1573]: E0515 00:56:21.657139 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:21.657252 kubelet[1573]: E0515 00:56:21.657227 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:21.676440 kubelet[1573]: I0515 
00:56:21.676403 1573 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 00:56:22.659166 kubelet[1573]: E0515 00:56:22.659136 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:22.659818 kubelet[1573]: E0515 00:56:22.659718 1573 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 00:56:22.660032 kubelet[1573]: E0515 00:56:22.660020 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:22.660198 kubelet[1573]: E0515 00:56:22.660187 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:23.178676 kubelet[1573]: E0515 00:56:23.178633 1573 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:56:23.305766 kubelet[1573]: I0515 00:56:23.305720 1573 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 00:56:23.305766 kubelet[1573]: E0515 00:56:23.305759 1573 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 00:56:23.309013 kubelet[1573]: E0515 00:56:23.308957 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:56:23.409488 kubelet[1573]: E0515 00:56:23.409436 1573 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:56:23.510189 kubelet[1573]: E0515 00:56:23.510148 1573 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 15 00:56:23.611018 kubelet[1573]: I0515 00:56:23.610974 1573 apiserver.go:52] "Watching apiserver" May 15 00:56:23.629267 kubelet[1573]: I0515 00:56:23.629216 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 00:56:23.630050 kubelet[1573]: I0515 00:56:23.630032 1573 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:56:23.633458 kubelet[1573]: E0515 00:56:23.633433 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 00:56:23.633551 kubelet[1573]: I0515 00:56:23.633460 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 00:56:23.634632 kubelet[1573]: E0515 00:56:23.634613 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 00:56:23.634632 kubelet[1573]: I0515 00:56:23.634629 1573 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 00:56:23.635899 kubelet[1573]: E0515 00:56:23.635871 1573 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 00:56:25.249333 systemd[1]: Reloading. 
May 15 00:56:25.358593 /usr/lib/systemd/system-generators/torcx-generator[1879]: time="2025-05-15T00:56:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 15 00:56:25.358616 /usr/lib/systemd/system-generators/torcx-generator[1879]: time="2025-05-15T00:56:25Z" level=info msg="torcx already run"
May 15 00:56:25.402985 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:56:25.403031 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:56:25.419720 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:56:25.505542 kubelet[1573]: I0515 00:56:25.505139 1573 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:56:25.505251 systemd[1]: Stopping kubelet.service...
May 15 00:56:25.529486 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:56:25.529722 systemd[1]: Stopped kubelet.service.
May 15 00:56:25.531689 systemd[1]: Starting kubelet.service...
May 15 00:56:25.617791 systemd[1]: Started kubelet.service.
May 15 00:56:25.659845 kubelet[1916]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:25.659845 kubelet[1916]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 00:56:25.659845 kubelet[1916]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:56:25.660374 kubelet[1916]: I0515 00:56:25.659872 1916 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:56:25.667515 kubelet[1916]: I0515 00:56:25.667470 1916 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
May 15 00:56:25.667515 kubelet[1916]: I0515 00:56:25.667500 1916 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:56:25.667785 kubelet[1916]: I0515 00:56:25.667764 1916 server.go:954] "Client rotation is on, will bootstrap in background"
May 15 00:56:25.668875 kubelet[1916]: I0515 00:56:25.668858 1916 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 00:56:25.671657 kubelet[1916]: I0515 00:56:25.671639 1916 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:56:25.674568 kubelet[1916]: E0515 00:56:25.674519 1916 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 00:56:25.674568 kubelet[1916]: I0515 00:56:25.674547 1916 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 00:56:25.678570 kubelet[1916]: I0515 00:56:25.678539 1916 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:56:25.678813 kubelet[1916]: I0515 00:56:25.678773 1916 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:56:25.678962 kubelet[1916]: I0515 00:56:25.678803 1916 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 15 00:56:25.679070 kubelet[1916]: I0515 00:56:25.678971 1916 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:56:25.679070 kubelet[1916]: I0515 00:56:25.678979 1916 container_manager_linux.go:304] "Creating device plugin manager"
May 15 00:56:25.679070 kubelet[1916]: I0515 00:56:25.679033 1916 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:56:25.679179 kubelet[1916]: I0515 00:56:25.679167 1916 kubelet.go:446] "Attempting to sync node with API server"
May 15 00:56:25.679179 kubelet[1916]: I0515 00:56:25.679180 1916 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:56:25.679245 kubelet[1916]: I0515 00:56:25.679201 1916 kubelet.go:352] "Adding apiserver pod source"
May 15 00:56:25.679245 kubelet[1916]: I0515 00:56:25.679210 1916 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:56:25.679981 kubelet[1916]: I0515 00:56:25.679938 1916 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 00:56:25.680599 kubelet[1916]: I0515 00:56:25.680503 1916 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:56:25.681114 kubelet[1916]: I0515 00:56:25.681094 1916 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 15 00:56:25.681160 kubelet[1916]: I0515 00:56:25.681129 1916 server.go:1287] "Started kubelet"
May 15 00:56:25.683541 kubelet[1916]: I0515 00:56:25.683474 1916 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:56:25.687056 kubelet[1916]: I0515 00:56:25.687036 1916 server.go:490] "Adding debug handlers to kubelet server"
May 15 00:56:25.687233 kubelet[1916]: I0515 00:56:25.687161 1916 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 00:56:25.688456 kubelet[1916]: I0515 00:56:25.687066 1916 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:56:25.711491 kubelet[1916]: I0515 00:56:25.711374 1916 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:56:25.712125 kubelet[1916]: I0515 00:56:25.712103 1916 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:56:25.712544 kubelet[1916]: I0515 00:56:25.712507 1916 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 00:56:25.712699 kubelet[1916]: I0515 00:56:25.712667 1916 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:56:25.712809 kubelet[1916]: I0515 00:56:25.712791 1916 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:56:25.713508 kubelet[1916]: I0515 00:56:25.713493 1916 factory.go:221] Registration of the systemd container factory successfully
May 15 00:56:25.713681 kubelet[1916]: I0515 00:56:25.713662 1916 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:56:25.714118 kubelet[1916]: E0515 00:56:25.714091 1916 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:56:25.714799 kubelet[1916]: I0515 00:56:25.714772 1916 factory.go:221] Registration of the containerd container factory successfully
May 15 00:56:25.722676 kubelet[1916]: I0515 00:56:25.722650 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:56:25.723508 kubelet[1916]: I0515 00:56:25.723491 1916 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:56:25.723633 kubelet[1916]: I0515 00:56:25.723619 1916 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 00:56:25.723722 kubelet[1916]: I0515 00:56:25.723708 1916 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 00:56:25.723795 kubelet[1916]: I0515 00:56:25.723781 1916 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 00:56:25.724028 kubelet[1916]: E0515 00:56:25.724003 1916 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:56:25.751370 kubelet[1916]: I0515 00:56:25.751326 1916 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 00:56:25.751370 kubelet[1916]: I0515 00:56:25.751355 1916 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 00:56:25.751370 kubelet[1916]: I0515 00:56:25.751383 1916 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:56:25.751748 kubelet[1916]: I0515 00:56:25.751577 1916 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 00:56:25.751748 kubelet[1916]: I0515 00:56:25.751591 1916 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 00:56:25.751748 kubelet[1916]: I0515 00:56:25.751612 1916 policy_none.go:49] "None policy: Start"
May 15 00:56:25.751748 kubelet[1916]: I0515 00:56:25.751637 1916 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 00:56:25.751748 kubelet[1916]: I0515 00:56:25.751649 1916 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:56:25.751980 kubelet[1916]: I0515 00:56:25.751826 1916 state_mem.go:75] "Updated machine memory state"
May 15 00:56:25.756176 kubelet[1916]: I0515 00:56:25.756085 1916 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:56:25.756256 kubelet[1916]: I0515 00:56:25.756228 1916 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 00:56:25.756307 kubelet[1916]: I0515 00:56:25.756238 1916 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:56:25.757233 kubelet[1916]: I0515 00:56:25.757202 1916 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:56:25.759255 kubelet[1916]: E0515 00:56:25.758834 1916 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 00:56:25.824771 kubelet[1916]: I0515 00:56:25.824737 1916 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:25.824970 kubelet[1916]: I0515 00:56:25.824932 1916 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.825189 kubelet[1916]: I0515 00:56:25.824818 1916 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:25.862743 kubelet[1916]: I0515 00:56:25.862702 1916 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
May 15 00:56:25.868811 kubelet[1916]: I0515 00:56:25.868779 1916 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
May 15 00:56:25.868980 kubelet[1916]: I0515 00:56:25.868857 1916 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
May 15 00:56:25.914185 kubelet[1916]: I0515 00:56:25.914137 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.914185 kubelet[1916]: I0515 00:56:25.914176 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:25.914396 kubelet[1916]: I0515 00:56:25.914203 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:25.914396 kubelet[1916]: I0515 00:56:25.914219 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.914396 kubelet[1916]: I0515 00:56:25.914244 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.914396 kubelet[1916]: I0515 00:56:25.914256 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.914509 kubelet[1916]: I0515 00:56:25.914269 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2912a870ca8bd8708d5701cfa27ed384-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2912a870ca8bd8708d5701cfa27ed384\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:56:25.914509 kubelet[1916]: I0515 00:56:25.914283 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:56:25.914509 kubelet[1916]: I0515 00:56:25.914297 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:56:26.143240 kubelet[1916]: E0515 00:56:26.143099 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.143240 kubelet[1916]: E0515 00:56:26.143122 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.145321 kubelet[1916]: E0515 00:56:26.145287 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.247964 sudo[1951]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 00:56:26.248237 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
May 15 00:56:26.680640 kubelet[1916]: I0515 00:56:26.680591 1916 apiserver.go:52] "Watching apiserver"
May 15 00:56:26.733591 kubelet[1916]: I0515 00:56:26.733543 1916 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:26.733831 kubelet[1916]: I0515 00:56:26.733810 1916 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:26.733925 kubelet[1916]: E0515 00:56:26.733898 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.814231 kubelet[1916]: I0515 00:56:26.814179 1916 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:56:26.840437 kubelet[1916]: I0515 00:56:26.840150 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.840110186 podStartE2EDuration="1.840110186s" podCreationTimestamp="2025-05-15 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.838725081 +0000 UTC m=+1.216536520" watchObservedRunningTime="2025-05-15 00:56:26.840110186 +0000 UTC m=+1.217921605"
May 15 00:56:26.840658 kubelet[1916]: E0515 00:56:26.840498 1916 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 00:56:26.840689 kubelet[1916]: E0515 00:56:26.840674 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.840799 kubelet[1916]: E0515 00:56:26.840411 1916 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 00:56:26.840953 kubelet[1916]: E0515 00:56:26.840938 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:26.847753 kubelet[1916]: I0515 00:56:26.847689 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8476701709999999 podStartE2EDuration="1.847670171s" podCreationTimestamp="2025-05-15 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.847403347 +0000 UTC m=+1.225214786" watchObservedRunningTime="2025-05-15 00:56:26.847670171 +0000 UTC m=+1.225481590"
May 15 00:56:26.865624 kubelet[1916]: I0515 00:56:26.865562 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.865516795 podStartE2EDuration="1.865516795s" podCreationTimestamp="2025-05-15 00:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:26.856620089 +0000 UTC m=+1.234431509" watchObservedRunningTime="2025-05-15 00:56:26.865516795 +0000 UTC m=+1.243328214"
May 15 00:56:26.900510 sudo[1951]: pam_unix(sudo:session): session closed for user root
May 15 00:56:27.734685 kubelet[1916]: E0515 00:56:27.734645 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:27.734685 kubelet[1916]: E0515 00:56:27.734666 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:27.761480 kubelet[1916]: E0515 00:56:27.761426 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:28.528756 sudo[1312]: pam_unix(sudo:session): session closed for user root
May 15 00:56:28.530014 sshd[1309]: pam_unix(sshd:session): session closed for user core
May 15 00:56:28.532350 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:36880.service: Deactivated successfully.
May 15 00:56:28.533015 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:56:28.533136 systemd[1]: session-5.scope: Consumed 5.127s CPU time.
May 15 00:56:28.533658 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit.
May 15 00:56:28.534415 systemd-logind[1205]: Removed session 5.
May 15 00:56:30.101099 kubelet[1916]: I0515 00:56:30.101067 1916 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 00:56:30.101499 env[1218]: time="2025-05-15T00:56:30.101388262Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 00:56:30.101706 kubelet[1916]: I0515 00:56:30.101531 1916 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 00:56:31.139022 systemd[1]: Created slice kubepods-besteffort-pod35e69eb2_f795_4cc7_ba91_b253d8dcc909.slice.
May 15 00:56:31.182334 systemd[1]: Created slice kubepods-burstable-pod8635ed43_f669_4c03_ae8c_3852cb34dd88.slice.
May 15 00:56:31.277958 kubelet[1916]: I0515 00:56:31.277883 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-etc-cni-netd\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.277958 kubelet[1916]: I0515 00:56:31.277939 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-lib-modules\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.277958 kubelet[1916]: I0515 00:56:31.277962 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-config-path\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.277985 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-bpf-maps\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.278029 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8635ed43-f669-4c03-ae8c-3852cb34dd88-clustermesh-secrets\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.278054 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-run\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.278111 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cni-path\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.278154 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-xtables-lock\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278475 kubelet[1916]: I0515 00:56:31.278174 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kpd\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-kube-api-access-l2kpd\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278631 kubelet[1916]: I0515 00:56:31.278216 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35e69eb2-f795-4cc7-ba91-b253d8dcc909-lib-modules\") pod \"kube-proxy-4twjb\" (UID: \"35e69eb2-f795-4cc7-ba91-b253d8dcc909\") " pod="kube-system/kube-proxy-4twjb"
May 15 00:56:31.278631 kubelet[1916]: I0515 00:56:31.278249 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-kernel\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278631 kubelet[1916]: I0515 00:56:31.278287 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-hubble-tls\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278631 kubelet[1916]: I0515 00:56:31.278381 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35e69eb2-f795-4cc7-ba91-b253d8dcc909-kube-proxy\") pod \"kube-proxy-4twjb\" (UID: \"35e69eb2-f795-4cc7-ba91-b253d8dcc909\") " pod="kube-system/kube-proxy-4twjb"
May 15 00:56:31.278631 kubelet[1916]: I0515 00:56:31.278449 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35e69eb2-f795-4cc7-ba91-b253d8dcc909-xtables-lock\") pod \"kube-proxy-4twjb\" (UID: \"35e69eb2-f795-4cc7-ba91-b253d8dcc909\") " pod="kube-system/kube-proxy-4twjb"
May 15 00:56:31.278759 kubelet[1916]: I0515 00:56:31.278496 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b8gh\" (UniqueName: \"kubernetes.io/projected/35e69eb2-f795-4cc7-ba91-b253d8dcc909-kube-api-access-5b8gh\") pod \"kube-proxy-4twjb\" (UID: \"35e69eb2-f795-4cc7-ba91-b253d8dcc909\") " pod="kube-system/kube-proxy-4twjb"
May 15 00:56:31.278759 kubelet[1916]: I0515 00:56:31.278523 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-cgroup\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278759 kubelet[1916]: I0515 00:56:31.278557 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-hostproc\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.278759 kubelet[1916]: I0515 00:56:31.278591 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-net\") pod \"cilium-46v9w\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " pod="kube-system/cilium-46v9w"
May 15 00:56:31.367137 systemd[1]: Created slice kubepods-besteffort-pod915bd244_03af_48ed_b4fc_e27f093c226f.slice.
May 15 00:56:31.379436 kubelet[1916]: I0515 00:56:31.379400 1916 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 15 00:56:31.480055 kubelet[1916]: I0515 00:56:31.479977 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/915bd244-03af-48ed-b4fc-e27f093c226f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jshvs\" (UID: \"915bd244-03af-48ed-b4fc-e27f093c226f\") " pod="kube-system/cilium-operator-6c4d7847fc-jshvs"
May 15 00:56:31.480195 kubelet[1916]: I0515 00:56:31.480062 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htktg\" (UniqueName: \"kubernetes.io/projected/915bd244-03af-48ed-b4fc-e27f093c226f-kube-api-access-htktg\") pod \"cilium-operator-6c4d7847fc-jshvs\" (UID: \"915bd244-03af-48ed-b4fc-e27f093c226f\") " pod="kube-system/cilium-operator-6c4d7847fc-jshvs"
May 15 00:56:31.484405 kubelet[1916]: E0515 00:56:31.484367 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:31.485138 env[1218]: time="2025-05-15T00:56:31.485102208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46v9w,Uid:8635ed43-f669-4c03-ae8c-3852cb34dd88,Namespace:kube-system,Attempt:0,}"
May 15 00:56:31.880289 kubelet[1916]: E0515 00:56:31.751069 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:31.880803 env[1218]: time="2025-05-15T00:56:31.880757717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4twjb,Uid:35e69eb2-f795-4cc7-ba91-b253d8dcc909,Namespace:kube-system,Attempt:0,}"
May 15 00:56:31.913239 env[1218]: time="2025-05-15T00:56:31.913150256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:31.913464 env[1218]: time="2025-05-15T00:56:31.913211813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:31.913464 env[1218]: time="2025-05-15T00:56:31.913224397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:31.913578 env[1218]: time="2025-05-15T00:56:31.913290854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:31.913578 env[1218]: time="2025-05-15T00:56:31.913312776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:31.913578 env[1218]: time="2025-05-15T00:56:31.913321543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:56:31.913578 env[1218]: time="2025-05-15T00:56:31.913398780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3 pid=2020 runtime=io.containerd.runc.v2
May 15 00:56:31.913856 env[1218]: time="2025-05-15T00:56:31.913815857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4006f81eca1c2667869472dfaa813c3b854a2c19220e4d7d974d501f6a623c58 pid=2019 runtime=io.containerd.runc.v2
May 15 00:56:31.923898 systemd[1]: Started cri-containerd-59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3.scope.
May 15 00:56:31.928222 systemd[1]: Started cri-containerd-4006f81eca1c2667869472dfaa813c3b854a2c19220e4d7d974d501f6a623c58.scope.
May 15 00:56:31.949778 env[1218]: time="2025-05-15T00:56:31.949721579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4twjb,Uid:35e69eb2-f795-4cc7-ba91-b253d8dcc909,Namespace:kube-system,Attempt:0,} returns sandbox id \"4006f81eca1c2667869472dfaa813c3b854a2c19220e4d7d974d501f6a623c58\""
May 15 00:56:31.950686 kubelet[1916]: E0515 00:56:31.950645 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:31.952154 env[1218]: time="2025-05-15T00:56:31.952130130Z" level=info msg="CreateContainer within sandbox \"4006f81eca1c2667869472dfaa813c3b854a2c19220e4d7d974d501f6a623c58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 00:56:31.969475 kubelet[1916]: E0515 00:56:31.969442 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:56:31.970862 env[1218]: time="2025-05-15T00:56:31.970825129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jshvs,Uid:915bd244-03af-48ed-b4fc-e27f093c226f,Namespace:kube-system,Attempt:0,}"
May 15 00:56:31.998613 env[1218]: time="2025-05-15T00:56:31.998529612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:56:31.998613 env[1218]: time="2025-05-15T00:56:31.998570209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:56:31.998613 env[1218]: time="2025-05-15T00:56:31.998581400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:31.999058 env[1218]: time="2025-05-15T00:56:31.999011772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb pid=2089 runtime=io.containerd.runc.v2 May 15 00:56:32.008082 systemd[1]: Started cri-containerd-874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb.scope. May 15 00:56:32.051487 env[1218]: time="2025-05-15T00:56:32.051441615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-46v9w,Uid:8635ed43-f669-4c03-ae8c-3852cb34dd88,Namespace:kube-system,Attempt:0,} returns sandbox id \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\"" May 15 00:56:32.053920 kubelet[1916]: E0515 00:56:32.053893 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.054817 env[1218]: time="2025-05-15T00:56:32.054242510Z" level=info msg="CreateContainer within sandbox \"4006f81eca1c2667869472dfaa813c3b854a2c19220e4d7d974d501f6a623c58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8baabe3a526f55c72102bc0ca2286abf90f4260e5642548fa305e51f1a368b2d\"" May 15 00:56:32.054817 env[1218]: time="2025-05-15T00:56:32.054593941Z" level=info msg="StartContainer for \"8baabe3a526f55c72102bc0ca2286abf90f4260e5642548fa305e51f1a368b2d\"" May 15 00:56:32.055050 env[1218]: time="2025-05-15T00:56:32.054987241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:56:32.078764 systemd[1]: Started cri-containerd-8baabe3a526f55c72102bc0ca2286abf90f4260e5642548fa305e51f1a368b2d.scope. 
May 15 00:56:32.097471 env[1218]: time="2025-05-15T00:56:32.097426863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jshvs,Uid:915bd244-03af-48ed-b4fc-e27f093c226f,Namespace:kube-system,Attempt:0,} returns sandbox id \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\"" May 15 00:56:32.099042 kubelet[1916]: E0515 00:56:32.098979 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.112939 env[1218]: time="2025-05-15T00:56:32.112889142Z" level=info msg="StartContainer for \"8baabe3a526f55c72102bc0ca2286abf90f4260e5642548fa305e51f1a368b2d\" returns successfully" May 15 00:56:32.630199 kubelet[1916]: E0515 00:56:32.630161 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.743761 kubelet[1916]: E0515 00:56:32.743721 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.744380 kubelet[1916]: E0515 00:56:32.744365 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.905676 kubelet[1916]: E0515 00:56:32.905585 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:32.983780 kubelet[1916]: I0515 00:56:32.983706 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4twjb" podStartSLOduration=1.9836876989999999 podStartE2EDuration="1.983687699s" podCreationTimestamp="2025-05-15 00:56:31 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:32.983622785 +0000 UTC m=+7.361434234" watchObservedRunningTime="2025-05-15 00:56:32.983687699 +0000 UTC m=+7.361499108" May 15 00:56:33.746332 kubelet[1916]: E0515 00:56:33.746295 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:33.746332 kubelet[1916]: E0515 00:56:33.746325 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:36.947106 update_engine[1210]: I0515 00:56:36.947045 1210 update_attempter.cc:509] Updating boot flags... May 15 00:56:37.759422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988596959.mount: Deactivated successfully. May 15 00:56:37.767893 kubelet[1916]: E0515 00:56:37.767856 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:41.810347 env[1218]: time="2025-05-15T00:56:41.810289229Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:41.831105 env[1218]: time="2025-05-15T00:56:41.831076711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:41.836270 env[1218]: time="2025-05-15T00:56:41.836233647Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:41.836723 env[1218]: time="2025-05-15T00:56:41.836694240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:56:41.838029 env[1218]: time="2025-05-15T00:56:41.837953816Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:56:41.840139 env[1218]: time="2025-05-15T00:56:41.840102194Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:56:41.853521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210352880.mount: Deactivated successfully. May 15 00:56:41.854551 env[1218]: time="2025-05-15T00:56:41.854513861Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\"" May 15 00:56:41.854964 env[1218]: time="2025-05-15T00:56:41.854934798Z" level=info msg="StartContainer for \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\"" May 15 00:56:41.872821 systemd[1]: Started cri-containerd-2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471.scope. 
May 15 00:56:41.892330 env[1218]: time="2025-05-15T00:56:41.892281323Z" level=info msg="StartContainer for \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\" returns successfully" May 15 00:56:41.898628 systemd[1]: cri-containerd-2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471.scope: Deactivated successfully. May 15 00:56:42.096949 env[1218]: time="2025-05-15T00:56:42.096804569Z" level=info msg="shim disconnected" id=2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471 May 15 00:56:42.096949 env[1218]: time="2025-05-15T00:56:42.096859443Z" level=warning msg="cleaning up after shim disconnected" id=2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471 namespace=k8s.io May 15 00:56:42.096949 env[1218]: time="2025-05-15T00:56:42.096869592Z" level=info msg="cleaning up dead shim" May 15 00:56:42.104027 env[1218]: time="2025-05-15T00:56:42.103954293Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2361 runtime=io.containerd.runc.v2\n" May 15 00:56:42.768404 kubelet[1916]: E0515 00:56:42.767454 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:42.770271 env[1218]: time="2025-05-15T00:56:42.770202541Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:56:42.851375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471-rootfs.mount: Deactivated successfully. 
May 15 00:56:42.880284 env[1218]: time="2025-05-15T00:56:42.880215811Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\"" May 15 00:56:42.880772 env[1218]: time="2025-05-15T00:56:42.880701642Z" level=info msg="StartContainer for \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\"" May 15 00:56:42.897460 systemd[1]: Started cri-containerd-3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1.scope. May 15 00:56:42.921773 env[1218]: time="2025-05-15T00:56:42.921714653Z" level=info msg="StartContainer for \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\" returns successfully" May 15 00:56:42.931254 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:56:42.931440 systemd[1]: Stopped systemd-sysctl.service. May 15 00:56:42.931604 systemd[1]: Stopping systemd-sysctl.service... May 15 00:56:42.932897 systemd[1]: Starting systemd-sysctl.service... May 15 00:56:42.935767 systemd[1]: cri-containerd-3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1.scope: Deactivated successfully. May 15 00:56:42.938930 systemd[1]: Finished systemd-sysctl.service. 
May 15 00:56:43.110077 env[1218]: time="2025-05-15T00:56:43.109944809Z" level=info msg="shim disconnected" id=3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1 May 15 00:56:43.110291 env[1218]: time="2025-05-15T00:56:43.110270245Z" level=warning msg="cleaning up after shim disconnected" id=3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1 namespace=k8s.io May 15 00:56:43.110381 env[1218]: time="2025-05-15T00:56:43.110348853Z" level=info msg="cleaning up dead shim" May 15 00:56:43.120835 env[1218]: time="2025-05-15T00:56:43.120800095Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2424 runtime=io.containerd.runc.v2\n" May 15 00:56:43.770892 kubelet[1916]: E0515 00:56:43.770858 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:43.773772 env[1218]: time="2025-05-15T00:56:43.773714615Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:56:43.851667 systemd[1]: run-containerd-runc-k8s.io-3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1-runc.DnP6Ib.mount: Deactivated successfully. May 15 00:56:43.851879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1-rootfs.mount: Deactivated successfully. May 15 00:56:43.890832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837406686.mount: Deactivated successfully. 
May 15 00:56:44.110422 env[1218]: time="2025-05-15T00:56:44.110306147Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\"" May 15 00:56:44.110931 env[1218]: time="2025-05-15T00:56:44.110861938Z" level=info msg="StartContainer for \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\"" May 15 00:56:44.126563 systemd[1]: Started cri-containerd-419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d.scope. May 15 00:56:44.151192 env[1218]: time="2025-05-15T00:56:44.151132100Z" level=info msg="StartContainer for \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\" returns successfully" May 15 00:56:44.151884 systemd[1]: cri-containerd-419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d.scope: Deactivated successfully. May 15 00:56:44.184774 env[1218]: time="2025-05-15T00:56:44.184700641Z" level=info msg="shim disconnected" id=419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d May 15 00:56:44.184774 env[1218]: time="2025-05-15T00:56:44.184753952Z" level=warning msg="cleaning up after shim disconnected" id=419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d namespace=k8s.io May 15 00:56:44.184774 env[1218]: time="2025-05-15T00:56:44.184768760Z" level=info msg="cleaning up dead shim" May 15 00:56:44.191053 env[1218]: time="2025-05-15T00:56:44.191015862Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2483 runtime=io.containerd.runc.v2\n" May 15 00:56:44.775333 kubelet[1916]: E0515 00:56:44.775295 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:44.777307 env[1218]: 
time="2025-05-15T00:56:44.777271057Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:56:44.823157 env[1218]: time="2025-05-15T00:56:44.823073740Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\"" May 15 00:56:44.823928 env[1218]: time="2025-05-15T00:56:44.823877309Z" level=info msg="StartContainer for \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\"" May 15 00:56:44.832741 env[1218]: time="2025-05-15T00:56:44.832700452Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:44.835074 env[1218]: time="2025-05-15T00:56:44.835040747Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:44.838070 systemd[1]: Started cri-containerd-ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e.scope. 
May 15 00:56:44.839756 env[1218]: time="2025-05-15T00:56:44.839703373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:44.840160 env[1218]: time="2025-05-15T00:56:44.840127174Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:56:44.846864 env[1218]: time="2025-05-15T00:56:44.846798929Z" level=info msg="CreateContainer within sandbox \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:56:44.863494 env[1218]: time="2025-05-15T00:56:44.863442509Z" level=info msg="CreateContainer within sandbox \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\"" May 15 00:56:44.866649 env[1218]: time="2025-05-15T00:56:44.866590150Z" level=info msg="StartContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\"" May 15 00:56:44.871286 systemd[1]: cri-containerd-ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e.scope: Deactivated successfully. May 15 00:56:44.873823 env[1218]: time="2025-05-15T00:56:44.873757422Z" level=info msg="StartContainer for \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\" returns successfully" May 15 00:56:44.888654 systemd[1]: run-containerd-runc-k8s.io-555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1-runc.ZUKnVe.mount: Deactivated successfully. 
May 15 00:56:44.893946 systemd[1]: Started cri-containerd-555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1.scope. May 15 00:56:45.156980 env[1218]: time="2025-05-15T00:56:45.156811761Z" level=info msg="StartContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" returns successfully" May 15 00:56:45.158246 env[1218]: time="2025-05-15T00:56:45.158201398Z" level=info msg="shim disconnected" id=ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e May 15 00:56:45.158321 env[1218]: time="2025-05-15T00:56:45.158246483Z" level=warning msg="cleaning up after shim disconnected" id=ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e namespace=k8s.io May 15 00:56:45.158321 env[1218]: time="2025-05-15T00:56:45.158256692Z" level=info msg="cleaning up dead shim" May 15 00:56:45.178944 env[1218]: time="2025-05-15T00:56:45.178878845Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2572 runtime=io.containerd.runc.v2\n" May 15 00:56:45.779242 kubelet[1916]: E0515 00:56:45.779194 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:45.781451 env[1218]: time="2025-05-15T00:56:45.781416329Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:56:45.782819 kubelet[1916]: E0515 00:56:45.781664 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:45.800459 env[1218]: time="2025-05-15T00:56:45.800388182Z" level=info msg="CreateContainer within sandbox \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\"" May 15 00:56:45.802088 env[1218]: time="2025-05-15T00:56:45.802021008Z" level=info msg="StartContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\"" May 15 00:56:45.820013 systemd[1]: Started cri-containerd-5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382.scope. May 15 00:56:45.851960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e-rootfs.mount: Deactivated successfully. May 15 00:56:45.852902 env[1218]: time="2025-05-15T00:56:45.852860457Z" level=info msg="StartContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" returns successfully" May 15 00:56:45.868344 systemd[1]: run-containerd-runc-k8s.io-5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382-runc.ieBud7.mount: Deactivated successfully. 
May 15 00:56:45.933521 kubelet[1916]: I0515 00:56:45.931333 1916 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 00:56:45.957059 kubelet[1916]: I0515 00:56:45.956966 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jshvs" podStartSLOduration=2.212060867 podStartE2EDuration="14.956946434s" podCreationTimestamp="2025-05-15 00:56:31 +0000 UTC" firstStartedPulling="2025-05-15 00:56:32.099485112 +0000 UTC m=+6.477296541" lastFinishedPulling="2025-05-15 00:56:44.844370689 +0000 UTC m=+19.222182108" observedRunningTime="2025-05-15 00:56:45.808197832 +0000 UTC m=+20.186009261" watchObservedRunningTime="2025-05-15 00:56:45.956946434 +0000 UTC m=+20.334757853" May 15 00:56:45.959358 kubelet[1916]: I0515 00:56:45.959264 1916 status_manager.go:890] "Failed to get status for pod" podUID="150852ec-d24f-4495-897e-35656ec3eafe" pod="kube-system/coredns-668d6bf9bc-j6g4j" err="pods \"coredns-668d6bf9bc-j6g4j\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 15 00:56:45.962256 kubelet[1916]: I0515 00:56:45.961960 1916 status_manager.go:890] "Failed to get status for pod" podUID="33a94d52-9c8d-45aa-b1bf-bacbfe6788a8" pod="kube-system/coredns-668d6bf9bc-qrrzg" err="pods \"coredns-668d6bf9bc-qrrzg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 15 00:56:45.962256 kubelet[1916]: W0515 00:56:45.962202 1916 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 
15 00:56:45.962256 kubelet[1916]: E0515 00:56:45.962230 1916 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 15 00:56:45.963350 systemd[1]: Created slice kubepods-burstable-pod150852ec_d24f_4495_897e_35656ec3eafe.slice. May 15 00:56:45.965262 kubelet[1916]: I0515 00:56:45.965219 1916 status_manager.go:890] "Failed to get status for pod" podUID="150852ec-d24f-4495-897e-35656ec3eafe" pod="kube-system/coredns-668d6bf9bc-j6g4j" err="pods \"coredns-668d6bf9bc-j6g4j\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 15 00:56:45.967958 systemd[1]: Created slice kubepods-burstable-pod33a94d52_9c8d_45aa_b1bf_bacbfe6788a8.slice. 
May 15 00:56:46.087567 kubelet[1916]: I0515 00:56:46.087456 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/150852ec-d24f-4495-897e-35656ec3eafe-config-volume\") pod \"coredns-668d6bf9bc-j6g4j\" (UID: \"150852ec-d24f-4495-897e-35656ec3eafe\") " pod="kube-system/coredns-668d6bf9bc-j6g4j" May 15 00:56:46.087567 kubelet[1916]: I0515 00:56:46.087499 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klpsp\" (UniqueName: \"kubernetes.io/projected/150852ec-d24f-4495-897e-35656ec3eafe-kube-api-access-klpsp\") pod \"coredns-668d6bf9bc-j6g4j\" (UID: \"150852ec-d24f-4495-897e-35656ec3eafe\") " pod="kube-system/coredns-668d6bf9bc-j6g4j" May 15 00:56:46.087567 kubelet[1916]: I0515 00:56:46.087522 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33a94d52-9c8d-45aa-b1bf-bacbfe6788a8-config-volume\") pod \"coredns-668d6bf9bc-qrrzg\" (UID: \"33a94d52-9c8d-45aa-b1bf-bacbfe6788a8\") " pod="kube-system/coredns-668d6bf9bc-qrrzg" May 15 00:56:46.087567 kubelet[1916]: I0515 00:56:46.087538 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwfj\" (UniqueName: \"kubernetes.io/projected/33a94d52-9c8d-45aa-b1bf-bacbfe6788a8-kube-api-access-shwfj\") pod \"coredns-668d6bf9bc-qrrzg\" (UID: \"33a94d52-9c8d-45aa-b1bf-bacbfe6788a8\") " pod="kube-system/coredns-668d6bf9bc-qrrzg" May 15 00:56:46.784881 kubelet[1916]: E0515 00:56:46.784843 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:46.785391 kubelet[1916]: E0515 00:56:46.785372 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:47.167119 kubelet[1916]: E0515 00:56:47.166962 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:47.167702 env[1218]: time="2025-05-15T00:56:47.167661773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6g4j,Uid:150852ec-d24f-4495-897e-35656ec3eafe,Namespace:kube-system,Attempt:0,}" May 15 00:56:47.170730 kubelet[1916]: E0515 00:56:47.170687 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:47.171490 env[1218]: time="2025-05-15T00:56:47.171445667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrrzg,Uid:33a94d52-9c8d-45aa-b1bf-bacbfe6788a8,Namespace:kube-system,Attempt:0,}" May 15 00:56:47.786986 kubelet[1916]: E0515 00:56:47.786956 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:48.579175 systemd-networkd[1035]: cilium_host: Link UP May 15 00:56:48.579296 systemd-networkd[1035]: cilium_net: Link UP May 15 00:56:48.582107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 00:56:48.582168 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 00:56:48.582343 systemd-networkd[1035]: cilium_net: Gained carrier May 15 00:56:48.582498 systemd-networkd[1035]: cilium_host: Gained carrier May 15 00:56:48.582592 systemd-networkd[1035]: cilium_net: Gained IPv6LL May 15 00:56:48.582709 systemd-networkd[1035]: cilium_host: Gained IPv6LL May 15 00:56:48.664227 systemd-networkd[1035]: cilium_vxlan: Link UP May 15 00:56:48.664237 
systemd-networkd[1035]: cilium_vxlan: Gained carrier May 15 00:56:48.788548 kubelet[1916]: E0515 00:56:48.788505 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:48.873030 kernel: NET: Registered PF_ALG protocol family May 15 00:56:49.407815 systemd-networkd[1035]: lxc_health: Link UP May 15 00:56:49.414287 systemd-networkd[1035]: lxc_health: Gained carrier May 15 00:56:49.415025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 00:56:49.571403 kubelet[1916]: I0515 00:56:49.571333 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-46v9w" podStartSLOduration=8.788411379 podStartE2EDuration="18.571317817s" podCreationTimestamp="2025-05-15 00:56:31 +0000 UTC" firstStartedPulling="2025-05-15 00:56:32.054543955 +0000 UTC m=+6.432355364" lastFinishedPulling="2025-05-15 00:56:41.837450362 +0000 UTC m=+16.215261802" observedRunningTime="2025-05-15 00:56:46.810593912 +0000 UTC m=+21.188405331" watchObservedRunningTime="2025-05-15 00:56:49.571317817 +0000 UTC m=+23.949129226" May 15 00:56:49.783902 systemd-networkd[1035]: lxc6b35a30ed91b: Link UP May 15 00:56:49.790235 kubelet[1916]: E0515 00:56:49.790211 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:49.792264 systemd-networkd[1035]: lxcdf3e3c0c92b3: Link UP May 15 00:56:49.800026 kernel: eth0: renamed from tmp67bc0 May 15 00:56:49.810065 kernel: eth0: renamed from tmp385eb May 15 00:56:49.816210 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:56:49.816313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6b35a30ed91b: link becomes ready May 15 00:56:49.816377 systemd-networkd[1035]: lxc6b35a30ed91b: Gained carrier May 15 00:56:49.818543 
systemd-networkd[1035]: lxcdf3e3c0c92b3: Gained carrier May 15 00:56:49.819015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcdf3e3c0c92b3: link becomes ready May 15 00:56:50.178658 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:51754.service. May 15 00:56:50.210923 sshd[3126]: Accepted publickey for core from 10.0.0.1 port 51754 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:56:50.212307 sshd[3126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:56:50.216481 systemd-logind[1205]: New session 6 of user core. May 15 00:56:50.217197 systemd[1]: Started session-6.scope. May 15 00:56:50.340940 sshd[3126]: pam_unix(sshd:session): session closed for user core May 15 00:56:50.343540 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:51754.service: Deactivated successfully. May 15 00:56:50.344265 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:56:50.344926 systemd-logind[1205]: Session 6 logged out. Waiting for processes to exit. May 15 00:56:50.345624 systemd-logind[1205]: Removed session 6. May 15 00:56:50.576181 systemd-networkd[1035]: cilium_vxlan: Gained IPv6LL May 15 00:56:50.640150 systemd-networkd[1035]: lxc_health: Gained IPv6LL May 15 00:56:51.216165 systemd-networkd[1035]: lxcdf3e3c0c92b3: Gained IPv6LL May 15 00:56:51.536148 systemd-networkd[1035]: lxc6b35a30ed91b: Gained IPv6LL May 15 00:56:53.129018 env[1218]: time="2025-05-15T00:56:53.128903267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:53.129018 env[1218]: time="2025-05-15T00:56:53.128955566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:53.129018 env[1218]: time="2025-05-15T00:56:53.128969232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:53.129503 env[1218]: time="2025-05-15T00:56:53.129138420Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5 pid=3166 runtime=io.containerd.runc.v2 May 15 00:56:53.141343 systemd[1]: run-containerd-runc-k8s.io-385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5-runc.ym2CxB.mount: Deactivated successfully. May 15 00:56:53.142256 env[1218]: time="2025-05-15T00:56:53.141304633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:53.142256 env[1218]: time="2025-05-15T00:56:53.141367632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:53.142256 env[1218]: time="2025-05-15T00:56:53.141388091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:53.142256 env[1218]: time="2025-05-15T00:56:53.141527733Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67bc05c2be40f9909f73601281598996816d56b14b600b2e2f9c2628789f1922 pid=3175 runtime=io.containerd.runc.v2 May 15 00:56:53.147566 systemd[1]: Started cri-containerd-385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5.scope. May 15 00:56:53.158372 systemd[1]: Started cri-containerd-67bc05c2be40f9909f73601281598996816d56b14b600b2e2f9c2628789f1922.scope. 
May 15 00:56:53.162401 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:56:53.170251 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:56:53.187147 env[1218]: time="2025-05-15T00:56:53.187105919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j6g4j,Uid:150852ec-d24f-4495-897e-35656ec3eafe,Namespace:kube-system,Attempt:0,} returns sandbox id \"385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5\"" May 15 00:56:53.190581 kubelet[1916]: E0515 00:56:53.190430 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:53.195770 env[1218]: time="2025-05-15T00:56:53.195737793Z" level=info msg="CreateContainer within sandbox \"385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:56:53.205055 env[1218]: time="2025-05-15T00:56:53.205018629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qrrzg,Uid:33a94d52-9c8d-45aa-b1bf-bacbfe6788a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"67bc05c2be40f9909f73601281598996816d56b14b600b2e2f9c2628789f1922\"" May 15 00:56:53.206105 kubelet[1916]: E0515 00:56:53.206073 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:53.208353 env[1218]: time="2025-05-15T00:56:53.208326532Z" level=info msg="CreateContainer within sandbox \"67bc05c2be40f9909f73601281598996816d56b14b600b2e2f9c2628789f1922\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:56:53.221475 env[1218]: time="2025-05-15T00:56:53.221426745Z" level=info msg="CreateContainer 
within sandbox \"385eb46b3ecef9d7c363dfe7f2bea24536777809cd3ca2b294a0ee4c17d028d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"853de4f61053707007e8be4657e64dba657946255a189377a3670fc0eff9745d\"" May 15 00:56:53.222282 env[1218]: time="2025-05-15T00:56:53.222239557Z" level=info msg="StartContainer for \"853de4f61053707007e8be4657e64dba657946255a189377a3670fc0eff9745d\"" May 15 00:56:53.226290 env[1218]: time="2025-05-15T00:56:53.226245514Z" level=info msg="CreateContainer within sandbox \"67bc05c2be40f9909f73601281598996816d56b14b600b2e2f9c2628789f1922\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7b7710a45120259f489f8bd5f881029410037e56034ccf3070cc6c56aed163b\"" May 15 00:56:53.227015 env[1218]: time="2025-05-15T00:56:53.226924243Z" level=info msg="StartContainer for \"f7b7710a45120259f489f8bd5f881029410037e56034ccf3070cc6c56aed163b\"" May 15 00:56:53.237349 systemd[1]: Started cri-containerd-853de4f61053707007e8be4657e64dba657946255a189377a3670fc0eff9745d.scope. May 15 00:56:53.249529 systemd[1]: Started cri-containerd-f7b7710a45120259f489f8bd5f881029410037e56034ccf3070cc6c56aed163b.scope. 
May 15 00:56:53.336978 env[1218]: time="2025-05-15T00:56:53.336883071Z" level=info msg="StartContainer for \"853de4f61053707007e8be4657e64dba657946255a189377a3670fc0eff9745d\" returns successfully" May 15 00:56:53.336978 env[1218]: time="2025-05-15T00:56:53.336882931Z" level=info msg="StartContainer for \"f7b7710a45120259f489f8bd5f881029410037e56034ccf3070cc6c56aed163b\" returns successfully" May 15 00:56:53.805640 kubelet[1916]: E0515 00:56:53.805609 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:53.805855 kubelet[1916]: E0515 00:56:53.805610 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:53.824235 kubelet[1916]: I0515 00:56:53.824137 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qrrzg" podStartSLOduration=22.824114267 podStartE2EDuration="22.824114267s" podCreationTimestamp="2025-05-15 00:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:53.815461644 +0000 UTC m=+28.193273083" watchObservedRunningTime="2025-05-15 00:56:53.824114267 +0000 UTC m=+28.201925686" May 15 00:56:53.824467 kubelet[1916]: I0515 00:56:53.824248 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j6g4j" podStartSLOduration=22.824240105 podStartE2EDuration="22.824240105s" podCreationTimestamp="2025-05-15 00:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:56:53.82369617 +0000 UTC m=+28.201507619" watchObservedRunningTime="2025-05-15 00:56:53.824240105 +0000 UTC m=+28.202051544" May 
15 00:56:55.345499 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:51762.service. May 15 00:56:55.377202 sshd[3320]: Accepted publickey for core from 10.0.0.1 port 51762 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:56:55.378232 sshd[3320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:56:55.381466 systemd-logind[1205]: New session 7 of user core. May 15 00:56:55.382213 systemd[1]: Started session-7.scope. May 15 00:56:55.492347 sshd[3320]: pam_unix(sshd:session): session closed for user core May 15 00:56:55.494759 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:51762.service: Deactivated successfully. May 15 00:56:55.495498 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:56:55.496399 systemd-logind[1205]: Session 7 logged out. Waiting for processes to exit. May 15 00:56:55.497189 systemd-logind[1205]: Removed session 7. May 15 00:56:55.774969 kubelet[1916]: I0515 00:56:55.774926 1916 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:56:55.775977 kubelet[1916]: E0515 00:56:55.775957 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:55.810587 kubelet[1916]: E0515 00:56:55.810558 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:00.495955 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:45762.service. May 15 00:57:00.525198 sshd[3335]: Accepted publickey for core from 10.0.0.1 port 45762 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:00.526099 sshd[3335]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:00.529101 systemd-logind[1205]: New session 8 of user core. May 15 00:57:00.529811 systemd[1]: Started session-8.scope. 
May 15 00:57:00.640838 sshd[3335]: pam_unix(sshd:session): session closed for user core May 15 00:57:00.643599 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:45762.service: Deactivated successfully. May 15 00:57:00.644542 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:57:00.645199 systemd-logind[1205]: Session 8 logged out. Waiting for processes to exit. May 15 00:57:00.645943 systemd-logind[1205]: Removed session 8. May 15 00:57:03.804348 kubelet[1916]: E0515 00:57:03.804295 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:03.804945 kubelet[1916]: E0515 00:57:03.804553 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:03.838648 kubelet[1916]: E0515 00:57:03.838606 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:03.838816 kubelet[1916]: E0515 00:57:03.838778 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:05.644666 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:45778.service. May 15 00:57:05.676069 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 45778 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:05.677178 sshd[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:05.680723 systemd-logind[1205]: New session 9 of user core. May 15 00:57:05.681455 systemd[1]: Started session-9.scope. 
May 15 00:57:05.784608 sshd[3357]: pam_unix(sshd:session): session closed for user core May 15 00:57:05.786839 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:45778.service: Deactivated successfully. May 15 00:57:05.787606 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:57:05.788106 systemd-logind[1205]: Session 9 logged out. Waiting for processes to exit. May 15 00:57:05.788728 systemd-logind[1205]: Removed session 9. May 15 00:57:10.789143 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:59006.service. May 15 00:57:10.818833 sshd[3371]: Accepted publickey for core from 10.0.0.1 port 59006 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:10.819859 sshd[3371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:10.823136 systemd-logind[1205]: New session 10 of user core. May 15 00:57:10.823944 systemd[1]: Started session-10.scope. May 15 00:57:10.939737 sshd[3371]: pam_unix(sshd:session): session closed for user core May 15 00:57:10.943742 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:59006.service: Deactivated successfully. May 15 00:57:10.944309 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:57:10.944830 systemd-logind[1205]: Session 10 logged out. Waiting for processes to exit. May 15 00:57:10.945868 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:59014.service. May 15 00:57:10.946633 systemd-logind[1205]: Removed session 10. May 15 00:57:10.975672 sshd[3385]: Accepted publickey for core from 10.0.0.1 port 59014 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:10.976898 sshd[3385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:10.980333 systemd-logind[1205]: New session 11 of user core. May 15 00:57:10.981370 systemd[1]: Started session-11.scope. 
May 15 00:57:11.127941 sshd[3385]: pam_unix(sshd:session): session closed for user core May 15 00:57:11.131275 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:59030.service. May 15 00:57:11.133582 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:59014.service: Deactivated successfully. May 15 00:57:11.134393 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:57:11.135083 systemd-logind[1205]: Session 11 logged out. Waiting for processes to exit. May 15 00:57:11.139220 systemd-logind[1205]: Removed session 11. May 15 00:57:11.165050 sshd[3395]: Accepted publickey for core from 10.0.0.1 port 59030 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:11.166607 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:11.170163 systemd-logind[1205]: New session 12 of user core. May 15 00:57:11.170982 systemd[1]: Started session-12.scope. May 15 00:57:11.280154 sshd[3395]: pam_unix(sshd:session): session closed for user core May 15 00:57:11.282438 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:59030.service: Deactivated successfully. May 15 00:57:11.283172 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:57:11.283812 systemd-logind[1205]: Session 12 logged out. Waiting for processes to exit. May 15 00:57:11.284550 systemd-logind[1205]: Removed session 12. May 15 00:57:16.283905 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:59044.service. May 15 00:57:16.313836 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 59044 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:16.314983 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:16.318215 systemd-logind[1205]: New session 13 of user core. May 15 00:57:16.318924 systemd[1]: Started session-13.scope. 
May 15 00:57:16.417593 sshd[3410]: pam_unix(sshd:session): session closed for user core May 15 00:57:16.419521 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:59044.service: Deactivated successfully. May 15 00:57:16.420268 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:57:16.420757 systemd-logind[1205]: Session 13 logged out. Waiting for processes to exit. May 15 00:57:16.421369 systemd-logind[1205]: Removed session 13. May 15 00:57:21.421508 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:42040.service. May 15 00:57:21.450239 sshd[3424]: Accepted publickey for core from 10.0.0.1 port 42040 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:21.451081 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:21.454259 systemd-logind[1205]: New session 14 of user core. May 15 00:57:21.455039 systemd[1]: Started session-14.scope. May 15 00:57:21.554671 sshd[3424]: pam_unix(sshd:session): session closed for user core May 15 00:57:21.557309 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:42040.service: Deactivated successfully. May 15 00:57:21.558092 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:57:21.558627 systemd-logind[1205]: Session 14 logged out. Waiting for processes to exit. May 15 00:57:21.559355 systemd-logind[1205]: Removed session 14. May 15 00:57:26.559244 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:42054.service. May 15 00:57:26.588029 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 42054 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:26.589055 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:26.592386 systemd-logind[1205]: New session 15 of user core. May 15 00:57:26.593328 systemd[1]: Started session-15.scope. 
May 15 00:57:26.694166 sshd[3439]: pam_unix(sshd:session): session closed for user core May 15 00:57:26.697149 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:42054.service: Deactivated successfully. May 15 00:57:26.697710 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:57:26.698284 systemd-logind[1205]: Session 15 logged out. Waiting for processes to exit. May 15 00:57:26.699483 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:42062.service. May 15 00:57:26.700250 systemd-logind[1205]: Removed session 15. May 15 00:57:26.728922 sshd[3452]: Accepted publickey for core from 10.0.0.1 port 42062 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:26.729935 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:26.733360 systemd-logind[1205]: New session 16 of user core. May 15 00:57:26.734193 systemd[1]: Started session-16.scope. May 15 00:57:27.554889 sshd[3452]: pam_unix(sshd:session): session closed for user core May 15 00:57:27.557601 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:42062.service: Deactivated successfully. May 15 00:57:27.558144 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:57:27.558594 systemd-logind[1205]: Session 16 logged out. Waiting for processes to exit. May 15 00:57:27.559595 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:42070.service. May 15 00:57:27.560256 systemd-logind[1205]: Removed session 16. May 15 00:57:27.591156 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 42070 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:27.592262 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:27.595489 systemd-logind[1205]: New session 17 of user core. May 15 00:57:27.596255 systemd[1]: Started session-17.scope. 
May 15 00:57:28.438622 sshd[3463]: pam_unix(sshd:session): session closed for user core May 15 00:57:28.441588 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:42238.service. May 15 00:57:28.442442 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:42070.service: Deactivated successfully. May 15 00:57:28.442978 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:57:28.443838 systemd-logind[1205]: Session 17 logged out. Waiting for processes to exit. May 15 00:57:28.444490 systemd-logind[1205]: Removed session 17. May 15 00:57:28.476238 sshd[3481]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:28.477299 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:28.480460 systemd-logind[1205]: New session 18 of user core. May 15 00:57:28.481219 systemd[1]: Started session-18.scope. May 15 00:57:28.706857 sshd[3481]: pam_unix(sshd:session): session closed for user core May 15 00:57:28.710061 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:42238.service: Deactivated successfully. May 15 00:57:28.710600 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:57:28.712114 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:42254.service. May 15 00:57:28.712397 systemd-logind[1205]: Session 18 logged out. Waiting for processes to exit. May 15 00:57:28.713595 systemd-logind[1205]: Removed session 18. May 15 00:57:28.743049 sshd[3495]: Accepted publickey for core from 10.0.0.1 port 42254 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:28.744134 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:28.747402 systemd-logind[1205]: New session 19 of user core. May 15 00:57:28.748305 systemd[1]: Started session-19.scope. 
May 15 00:57:28.857438 sshd[3495]: pam_unix(sshd:session): session closed for user core May 15 00:57:28.859975 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:42254.service: Deactivated successfully. May 15 00:57:28.860648 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:57:28.861962 systemd-logind[1205]: Session 19 logged out. Waiting for processes to exit. May 15 00:57:28.862723 systemd-logind[1205]: Removed session 19. May 15 00:57:33.861194 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:42256.service. May 15 00:57:33.891538 sshd[3511]: Accepted publickey for core from 10.0.0.1 port 42256 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:33.892544 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:33.895799 systemd-logind[1205]: New session 20 of user core. May 15 00:57:33.896790 systemd[1]: Started session-20.scope. May 15 00:57:34.003596 sshd[3511]: pam_unix(sshd:session): session closed for user core May 15 00:57:34.006158 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:42256.service: Deactivated successfully. May 15 00:57:34.006817 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:57:34.007335 systemd-logind[1205]: Session 20 logged out. Waiting for processes to exit. May 15 00:57:34.008144 systemd-logind[1205]: Removed session 20. May 15 00:57:39.008389 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:41724.service. May 15 00:57:39.037620 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 41724 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:39.038594 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:39.041353 systemd-logind[1205]: New session 21 of user core. May 15 00:57:39.042058 systemd[1]: Started session-21.scope. 
May 15 00:57:39.140941 sshd[3527]: pam_unix(sshd:session): session closed for user core May 15 00:57:39.143170 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:41724.service: Deactivated successfully. May 15 00:57:39.143860 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:57:39.144456 systemd-logind[1205]: Session 21 logged out. Waiting for processes to exit. May 15 00:57:39.145074 systemd-logind[1205]: Removed session 21. May 15 00:57:44.146045 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:41726.service. May 15 00:57:44.175679 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 41726 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:44.176796 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:44.179919 systemd-logind[1205]: New session 22 of user core. May 15 00:57:44.180707 systemd[1]: Started session-22.scope. May 15 00:57:44.284937 sshd[3540]: pam_unix(sshd:session): session closed for user core May 15 00:57:44.287574 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:41726.service: Deactivated successfully. May 15 00:57:44.288252 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:57:44.288952 systemd-logind[1205]: Session 22 logged out. Waiting for processes to exit. May 15 00:57:44.289654 systemd-logind[1205]: Removed session 22. May 15 00:57:49.290301 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:38208.service. May 15 00:57:49.324837 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:49.325974 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:49.329482 systemd-logind[1205]: New session 23 of user core. May 15 00:57:49.330480 systemd[1]: Started session-23.scope. 
May 15 00:57:49.433932 sshd[3553]: pam_unix(sshd:session): session closed for user core May 15 00:57:49.436740 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:38208.service: Deactivated successfully. May 15 00:57:49.437280 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:57:49.439071 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:38214.service. May 15 00:57:49.439558 systemd-logind[1205]: Session 23 logged out. Waiting for processes to exit. May 15 00:57:49.440612 systemd-logind[1205]: Removed session 23. May 15 00:57:49.468207 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:57:49.469348 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:57:49.472622 systemd-logind[1205]: New session 24 of user core. May 15 00:57:49.473392 systemd[1]: Started session-24.scope. May 15 00:57:51.039562 env[1218]: time="2025-05-15T00:57:51.039478617Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:57:51.044126 env[1218]: time="2025-05-15T00:57:51.044089869Z" level=info msg="StopContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" with timeout 2 (s)" May 15 00:57:51.044362 env[1218]: time="2025-05-15T00:57:51.044320992Z" level=info msg="Stop container \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" with signal terminated" May 15 00:57:51.049674 systemd-networkd[1035]: lxc_health: Link DOWN May 15 00:57:51.049682 systemd-networkd[1035]: lxc_health: Lost carrier May 15 00:57:51.081905 env[1218]: time="2025-05-15T00:57:51.081846086Z" level=info msg="StopContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" with timeout 30 (s)" May 15 00:57:51.082377 
env[1218]: time="2025-05-15T00:57:51.082354491Z" level=info msg="Stop container \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" with signal terminated" May 15 00:57:51.091319 systemd[1]: cri-containerd-555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1.scope: Deactivated successfully. May 15 00:57:51.092305 systemd[1]: cri-containerd-5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382.scope: Deactivated successfully. May 15 00:57:51.092533 systemd[1]: cri-containerd-5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382.scope: Consumed 6.011s CPU time. May 15 00:57:51.108791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1-rootfs.mount: Deactivated successfully. May 15 00:57:51.112297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382-rootfs.mount: Deactivated successfully. May 15 00:57:51.126495 env[1218]: time="2025-05-15T00:57:51.126437353Z" level=info msg="shim disconnected" id=555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1 May 15 00:57:51.126495 env[1218]: time="2025-05-15T00:57:51.126487449Z" level=warning msg="cleaning up after shim disconnected" id=555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1 namespace=k8s.io May 15 00:57:51.126495 env[1218]: time="2025-05-15T00:57:51.126498822Z" level=info msg="cleaning up dead shim" May 15 00:57:51.126749 env[1218]: time="2025-05-15T00:57:51.126723111Z" level=info msg="shim disconnected" id=5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382 May 15 00:57:51.126779 env[1218]: time="2025-05-15T00:57:51.126749132Z" level=warning msg="cleaning up after shim disconnected" id=5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382 namespace=k8s.io May 15 00:57:51.126779 env[1218]: time="2025-05-15T00:57:51.126757488Z" level=info msg="cleaning up dead shim" 
May 15 00:57:51.134130 env[1218]: time="2025-05-15T00:57:51.133846396Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3635 runtime=io.containerd.runc.v2\n" May 15 00:57:51.135748 env[1218]: time="2025-05-15T00:57:51.135680586Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3636 runtime=io.containerd.runc.v2\n" May 15 00:57:51.215184 env[1218]: time="2025-05-15T00:57:51.215124818Z" level=info msg="StopContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" returns successfully" May 15 00:57:51.215705 env[1218]: time="2025-05-15T00:57:51.215681336Z" level=info msg="StopPodSandbox for \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\"" May 15 00:57:51.215758 env[1218]: time="2025-05-15T00:57:51.215737605Z" level=info msg="Container to stop \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.217622 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb-shm.mount: Deactivated successfully. May 15 00:57:51.221308 systemd[1]: cri-containerd-874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb.scope: Deactivated successfully. May 15 00:57:51.236616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb-rootfs.mount: Deactivated successfully. 
May 15 00:57:51.269001 env[1218]: time="2025-05-15T00:57:51.268940433Z" level=info msg="StopContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" returns successfully" May 15 00:57:51.269548 env[1218]: time="2025-05-15T00:57:51.269522120Z" level=info msg="StopPodSandbox for \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\"" May 15 00:57:51.269596 env[1218]: time="2025-05-15T00:57:51.269579930Z" level=info msg="Container to stop \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.269596 env[1218]: time="2025-05-15T00:57:51.269592164Z" level=info msg="Container to stop \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.269650 env[1218]: time="2025-05-15T00:57:51.269602784Z" level=info msg="Container to stop \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.269650 env[1218]: time="2025-05-15T00:57:51.269612093Z" level=info msg="Container to stop \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.269650 env[1218]: time="2025-05-15T00:57:51.269621330Z" level=info msg="Container to stop \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:57:51.274281 systemd[1]: cri-containerd-59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3.scope: Deactivated successfully. 
May 15 00:57:51.276408 env[1218]: time="2025-05-15T00:57:51.276356479Z" level=info msg="shim disconnected" id=874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb May 15 00:57:51.276408 env[1218]: time="2025-05-15T00:57:51.276402647Z" level=warning msg="cleaning up after shim disconnected" id=874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb namespace=k8s.io May 15 00:57:51.276504 env[1218]: time="2025-05-15T00:57:51.276411665Z" level=info msg="cleaning up dead shim" May 15 00:57:51.284399 env[1218]: time="2025-05-15T00:57:51.284359611Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n" May 15 00:57:51.284786 env[1218]: time="2025-05-15T00:57:51.284762806Z" level=info msg="TearDown network for sandbox \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\" successfully" May 15 00:57:51.284869 env[1218]: time="2025-05-15T00:57:51.284848540Z" level=info msg="StopPodSandbox for \"874da55baa43da2d57b208376ac2d72c7c9990ce7063c75872ed25b7d0d1b2cb\" returns successfully" May 15 00:57:51.318546 env[1218]: time="2025-05-15T00:57:51.317809702Z" level=info msg="shim disconnected" id=59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3 May 15 00:57:51.318546 env[1218]: time="2025-05-15T00:57:51.317852084Z" level=warning msg="cleaning up after shim disconnected" id=59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3 namespace=k8s.io May 15 00:57:51.318546 env[1218]: time="2025-05-15T00:57:51.317860320Z" level=info msg="cleaning up dead shim" May 15 00:57:51.324371 env[1218]: time="2025-05-15T00:57:51.324340659Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n" May 15 00:57:51.324620 env[1218]: time="2025-05-15T00:57:51.324591792Z" level=info msg="TearDown network for sandbox 
\"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" successfully" May 15 00:57:51.324620 env[1218]: time="2025-05-15T00:57:51.324615007Z" level=info msg="StopPodSandbox for \"59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3\" returns successfully" May 15 00:57:51.367711 kubelet[1916]: I0515 00:57:51.367653 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-cgroup\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.367711 kubelet[1916]: I0515 00:57:51.367696 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-bpf-maps\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.367711 kubelet[1916]: I0515 00:57:51.367710 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-run\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367732 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/915bd244-03af-48ed-b4fc-e27f093c226f-cilium-config-path\") pod \"915bd244-03af-48ed-b4fc-e27f093c226f\" (UID: \"915bd244-03af-48ed-b4fc-e27f093c226f\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367749 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-lib-modules\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: 
\"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367763 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-hubble-tls\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367774 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-hostproc\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367787 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-etc-cni-netd\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368183 kubelet[1916]: I0515 00:57:51.367805 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8635ed43-f669-4c03-ae8c-3852cb34dd88-clustermesh-secrets\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367787 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367819 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kpd\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-kube-api-access-l2kpd\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367893 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-xtables-lock\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367914 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cni-path\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367939 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htktg\" (UniqueName: \"kubernetes.io/projected/915bd244-03af-48ed-b4fc-e27f093c226f-kube-api-access-htktg\") pod \"915bd244-03af-48ed-b4fc-e27f093c226f\" (UID: \"915bd244-03af-48ed-b4fc-e27f093c226f\") " May 15 00:57:51.368321 kubelet[1916]: I0515 00:57:51.367974 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-config-path\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368468 kubelet[1916]: I0515 00:57:51.368039 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-kernel\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368468 kubelet[1916]: I0515 00:57:51.368061 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-net\") pod \"8635ed43-f669-4c03-ae8c-3852cb34dd88\" (UID: \"8635ed43-f669-4c03-ae8c-3852cb34dd88\") " May 15 00:57:51.368468 kubelet[1916]: I0515 00:57:51.368130 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.368468 kubelet[1916]: I0515 00:57:51.368136 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368468 kubelet[1916]: I0515 00:57:51.368159 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368579 kubelet[1916]: I0515 00:57:51.368185 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368579 kubelet[1916]: I0515 00:57:51.368190 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368579 kubelet[1916]: I0515 00:57:51.368205 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.368579 kubelet[1916]: I0515 00:57:51.368206 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cni-path" (OuterVolumeSpecName: "cni-path") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.370444 kubelet[1916]: I0515 00:57:51.369166 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.370444 kubelet[1916]: I0515 00:57:51.369221 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-hostproc" (OuterVolumeSpecName: "hostproc") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.370444 kubelet[1916]: I0515 00:57:51.369540 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 00:57:51.370545 kubelet[1916]: I0515 00:57:51.370466 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-kube-api-access-l2kpd" (OuterVolumeSpecName: "kube-api-access-l2kpd") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "kube-api-access-l2kpd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:51.370680 kubelet[1916]: I0515 00:57:51.370651 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915bd244-03af-48ed-b4fc-e27f093c226f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "915bd244-03af-48ed-b4fc-e27f093c226f" (UID: "915bd244-03af-48ed-b4fc-e27f093c226f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 00:57:51.371561 kubelet[1916]: I0515 00:57:51.371519 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 00:57:51.373552 kubelet[1916]: I0515 00:57:51.373524 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8635ed43-f669-4c03-ae8c-3852cb34dd88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 00:57:51.373773 kubelet[1916]: I0515 00:57:51.373756 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8635ed43-f669-4c03-ae8c-3852cb34dd88" (UID: "8635ed43-f669-4c03-ae8c-3852cb34dd88"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:51.374605 kubelet[1916]: I0515 00:57:51.374560 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915bd244-03af-48ed-b4fc-e27f093c226f-kube-api-access-htktg" (OuterVolumeSpecName: "kube-api-access-htktg") pod "915bd244-03af-48ed-b4fc-e27f093c226f" (UID: "915bd244-03af-48ed-b4fc-e27f093c226f"). InnerVolumeSpecName "kube-api-access-htktg". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 00:57:51.468569 kubelet[1916]: I0515 00:57:51.468532 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/915bd244-03af-48ed-b4fc-e27f093c226f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468569 kubelet[1916]: I0515 00:57:51.468556 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468569 kubelet[1916]: I0515 00:57:51.468563 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468569 kubelet[1916]: I0515 00:57:51.468571 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468569 kubelet[1916]: I0515 00:57:51.468579 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468586 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/8635ed43-f669-4c03-ae8c-3852cb34dd88-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468593 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l2kpd\" (UniqueName: \"kubernetes.io/projected/8635ed43-f669-4c03-ae8c-3852cb34dd88-kube-api-access-l2kpd\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468602 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468609 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468617 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-htktg\" (UniqueName: \"kubernetes.io/projected/915bd244-03af-48ed-b4fc-e27f093c226f-kube-api-access-htktg\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468624 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468630 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.468823 kubelet[1916]: I0515 00:57:51.468637 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.469027 kubelet[1916]: I0515 00:57:51.468643 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.469027 kubelet[1916]: I0515 00:57:51.468650 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635ed43-f669-4c03-ae8c-3852cb34dd88-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 00:57:51.725470 kubelet[1916]: E0515 00:57:51.725431 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:51.731643 systemd[1]: Removed slice kubepods-burstable-pod8635ed43_f669_4c03_ae8c_3852cb34dd88.slice. May 15 00:57:51.731716 systemd[1]: kubepods-burstable-pod8635ed43_f669_4c03_ae8c_3852cb34dd88.slice: Consumed 6.098s CPU time. May 15 00:57:51.733041 systemd[1]: Removed slice kubepods-besteffort-pod915bd244_03af_48ed_b4fc_e27f093c226f.slice. 
May 15 00:57:51.931740 kubelet[1916]: I0515 00:57:51.931695 1916 scope.go:117] "RemoveContainer" containerID="555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1" May 15 00:57:51.933788 env[1218]: time="2025-05-15T00:57:51.933744154Z" level=info msg="RemoveContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\"" May 15 00:57:51.939870 env[1218]: time="2025-05-15T00:57:51.939813745Z" level=info msg="RemoveContainer for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" returns successfully" May 15 00:57:51.940093 kubelet[1916]: I0515 00:57:51.940060 1916 scope.go:117] "RemoveContainer" containerID="555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1" May 15 00:57:51.940326 env[1218]: time="2025-05-15T00:57:51.940236797Z" level=error msg="ContainerStatus for \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\": not found" May 15 00:57:51.940454 kubelet[1916]: E0515 00:57:51.940424 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\": not found" containerID="555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1" May 15 00:57:51.940590 kubelet[1916]: I0515 00:57:51.940463 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1"} err="failed to get container status \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\": rpc error: code = NotFound desc = an error occurred when try to find container \"555f70afe796d8c5f74aa93ae98019bee339cf6c2e959e516efa1ba27ff6adf1\": not found" May 15 00:57:51.940590 kubelet[1916]: I0515 
00:57:51.940565 1916 scope.go:117] "RemoveContainer" containerID="5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382" May 15 00:57:51.941956 env[1218]: time="2025-05-15T00:57:51.941922844Z" level=info msg="RemoveContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\"" May 15 00:57:51.945320 env[1218]: time="2025-05-15T00:57:51.945279678Z" level=info msg="RemoveContainer for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" returns successfully" May 15 00:57:51.945437 kubelet[1916]: I0515 00:57:51.945421 1916 scope.go:117] "RemoveContainer" containerID="ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e" May 15 00:57:51.946439 env[1218]: time="2025-05-15T00:57:51.946411120Z" level=info msg="RemoveContainer for \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\"" May 15 00:57:51.949459 env[1218]: time="2025-05-15T00:57:51.949432940Z" level=info msg="RemoveContainer for \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\" returns successfully" May 15 00:57:51.949682 kubelet[1916]: I0515 00:57:51.949644 1916 scope.go:117] "RemoveContainer" containerID="419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d" May 15 00:57:51.950856 env[1218]: time="2025-05-15T00:57:51.950818790Z" level=info msg="RemoveContainer for \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\"" May 15 00:57:51.954940 env[1218]: time="2025-05-15T00:57:51.954861170Z" level=info msg="RemoveContainer for \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\" returns successfully" May 15 00:57:51.955137 kubelet[1916]: I0515 00:57:51.955118 1916 scope.go:117] "RemoveContainer" containerID="3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1" May 15 00:57:51.957419 env[1218]: time="2025-05-15T00:57:51.957273261Z" level=info msg="RemoveContainer for \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\"" May 15 00:57:51.960008 
env[1218]: time="2025-05-15T00:57:51.959958796Z" level=info msg="RemoveContainer for \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\" returns successfully" May 15 00:57:51.960161 kubelet[1916]: I0515 00:57:51.960139 1916 scope.go:117] "RemoveContainer" containerID="2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471" May 15 00:57:51.961212 env[1218]: time="2025-05-15T00:57:51.961179850Z" level=info msg="RemoveContainer for \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\"" May 15 00:57:51.964140 env[1218]: time="2025-05-15T00:57:51.964097581Z" level=info msg="RemoveContainer for \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\" returns successfully" May 15 00:57:51.964282 kubelet[1916]: I0515 00:57:51.964244 1916 scope.go:117] "RemoveContainer" containerID="5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382" May 15 00:57:51.964517 env[1218]: time="2025-05-15T00:57:51.964443425Z" level=error msg="ContainerStatus for \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\": not found" May 15 00:57:51.964686 kubelet[1916]: E0515 00:57:51.964605 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\": not found" containerID="5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382" May 15 00:57:51.964686 kubelet[1916]: I0515 00:57:51.964638 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382"} err="failed to get container status \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"5e4276ea29b1882a7089bb5c69833807dd67fe38ffa2dc7c7966c35e80b6f382\": not found" May 15 00:57:51.964686 kubelet[1916]: I0515 00:57:51.964659 1916 scope.go:117] "RemoveContainer" containerID="ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e" May 15 00:57:51.964872 env[1218]: time="2025-05-15T00:57:51.964813846Z" level=error msg="ContainerStatus for \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\": not found" May 15 00:57:51.965017 kubelet[1916]: E0515 00:57:51.964968 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\": not found" containerID="ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e" May 15 00:57:51.965017 kubelet[1916]: I0515 00:57:51.965005 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e"} err="failed to get container status \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffe65d224a0545f97ad3516b28949d62cf1359a1f53bfce075633fc5d764a86e\": not found" May 15 00:57:51.965103 kubelet[1916]: I0515 00:57:51.965022 1916 scope.go:117] "RemoveContainer" containerID="419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d" May 15 00:57:51.965226 env[1218]: time="2025-05-15T00:57:51.965168987Z" level=error msg="ContainerStatus for \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\": not found" May 15 00:57:51.965317 kubelet[1916]: E0515 00:57:51.965297 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\": not found" containerID="419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d" May 15 00:57:51.965317 kubelet[1916]: I0515 00:57:51.965315 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d"} err="failed to get container status \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"419c1efe0a59812c84eea77250dc5e599c9cb276c24cc5d10841ad2e561a6e0d\": not found" May 15 00:57:51.965417 kubelet[1916]: I0515 00:57:51.965326 1916 scope.go:117] "RemoveContainer" containerID="3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1" May 15 00:57:51.965505 env[1218]: time="2025-05-15T00:57:51.965460156Z" level=error msg="ContainerStatus for \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\": not found" May 15 00:57:51.965677 kubelet[1916]: E0515 00:57:51.965652 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\": not found" containerID="3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1" May 15 00:57:51.965731 kubelet[1916]: I0515 00:57:51.965682 1916 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1"} err="failed to get container status \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f1c1beb78a2aec82318dfa9b791e0cc7ebbdf48e459c9e7f2df49ff0bc7d8e1\": not found"
May 15 00:57:51.965731 kubelet[1916]: I0515 00:57:51.965706 1916 scope.go:117] "RemoveContainer" containerID="2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471"
May 15 00:57:51.965896 env[1218]: time="2025-05-15T00:57:51.965858441Z" level=error msg="ContainerStatus for \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\": not found"
May 15 00:57:51.966049 kubelet[1916]: E0515 00:57:51.966023 1916 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\": not found" containerID="2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471"
May 15 00:57:51.966085 kubelet[1916]: I0515 00:57:51.966060 1916 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471"} err="failed to get container status \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bc5ccbf28f53a47ed770fd998a387aea0c2a715fa69e67661ff25360e851471\": not found"
May 15 00:57:52.029474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3-rootfs.mount: Deactivated successfully.
May 15 00:57:52.029571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59105058c9484a7103d3ae9b9a82a6720e1ff84482bcf0dc5adc8e1f8d26b5c3-shm.mount: Deactivated successfully.
May 15 00:57:52.029623 systemd[1]: var-lib-kubelet-pods-915bd244\x2d03af\x2d48ed\x2db4fc\x2de27f093c226f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhtktg.mount: Deactivated successfully.
May 15 00:57:52.029673 systemd[1]: var-lib-kubelet-pods-8635ed43\x2df669\x2d4c03\x2dae8c\x2d3852cb34dd88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl2kpd.mount: Deactivated successfully.
May 15 00:57:52.029733 systemd[1]: var-lib-kubelet-pods-8635ed43\x2df669\x2d4c03\x2dae8c\x2d3852cb34dd88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 00:57:52.029782 systemd[1]: var-lib-kubelet-pods-8635ed43\x2df669\x2d4c03\x2dae8c\x2d3852cb34dd88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 00:57:52.761806 sshd[3567]: pam_unix(sshd:session): session closed for user core
May 15 00:57:52.765105 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:38228.service.
May 15 00:57:52.765559 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:38214.service: Deactivated successfully.
May 15 00:57:52.766140 systemd[1]: session-24.scope: Deactivated successfully.
May 15 00:57:52.766741 systemd-logind[1205]: Session 24 logged out. Waiting for processes to exit.
May 15 00:57:52.767474 systemd-logind[1205]: Removed session 24.
May 15 00:57:52.796761 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 38228 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:57:52.797725 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:52.800658 systemd-logind[1205]: New session 25 of user core.
May 15 00:57:52.801364 systemd[1]: Started session-25.scope.
May 15 00:57:53.264360 sshd[3726]: pam_unix(sshd:session): session closed for user core
May 15 00:57:53.268568 systemd[1]: Started sshd@25-10.0.0.131:22-10.0.0.1:38230.service.
May 15 00:57:53.272543 systemd-logind[1205]: Session 25 logged out. Waiting for processes to exit.
May 15 00:57:53.273829 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:38228.service: Deactivated successfully.
May 15 00:57:53.274505 systemd[1]: session-25.scope: Deactivated successfully.
May 15 00:57:53.275572 kubelet[1916]: I0515 00:57:53.275527 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="915bd244-03af-48ed-b4fc-e27f093c226f" containerName="cilium-operator"
May 15 00:57:53.275572 kubelet[1916]: I0515 00:57:53.275560 1916 memory_manager.go:355] "RemoveStaleState removing state" podUID="8635ed43-f669-4c03-ae8c-3852cb34dd88" containerName="cilium-agent"
May 15 00:57:53.275911 systemd-logind[1205]: Removed session 25.
May 15 00:57:53.282179 systemd[1]: Created slice kubepods-burstable-pod906e07c9_ee7a_4a74_9631_d8899dcedc9f.slice.
May 15 00:57:53.304656 sshd[3739]: Accepted publickey for core from 10.0.0.1 port 38230 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:57:53.306030 sshd[3739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:53.310349 systemd-logind[1205]: New session 26 of user core.
May 15 00:57:53.310818 systemd[1]: Started session-26.scope.
May 15 00:57:53.382244 kubelet[1916]: I0515 00:57:53.382207 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hostproc\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382443 kubelet[1916]: I0515 00:57:53.382425 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-cgroup\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382540 kubelet[1916]: I0515 00:57:53.382523 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-net\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382648 kubelet[1916]: I0515 00:57:53.382630 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-kernel\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382755 kubelet[1916]: I0515 00:57:53.382738 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-bpf-maps\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382857 kubelet[1916]: I0515 00:57:53.382841 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-etc-cni-netd\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.382946 kubelet[1916]: I0515 00:57:53.382927 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-clustermesh-secrets\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383048 kubelet[1916]: I0515 00:57:53.383031 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-xtables-lock\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383146 kubelet[1916]: I0515 00:57:53.383129 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvkc\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-kube-api-access-wfvkc\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383254 kubelet[1916]: I0515 00:57:53.383238 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-run\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383355 kubelet[1916]: I0515 00:57:53.383338 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-lib-modules\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383459 kubelet[1916]: I0515 00:57:53.383443 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cni-path\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383557 kubelet[1916]: I0515 00:57:53.383540 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-config-path\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383706 kubelet[1916]: I0515 00:57:53.383673 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-ipsec-secrets\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.383765 kubelet[1916]: I0515 00:57:53.383711 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hubble-tls\") pod \"cilium-4b7cb\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") " pod="kube-system/cilium-4b7cb"
May 15 00:57:53.430948 sshd[3739]: pam_unix(sshd:session): session closed for user core
May 15 00:57:53.434094 systemd[1]: sshd@25-10.0.0.131:22-10.0.0.1:38230.service: Deactivated successfully.
May 15 00:57:53.434587 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:57:53.436203 systemd[1]: Started sshd@26-10.0.0.131:22-10.0.0.1:38236.service.
May 15 00:57:53.437079 systemd-logind[1205]: Session 26 logged out. Waiting for processes to exit.
May 15 00:57:53.446009 systemd-logind[1205]: Removed session 26.
May 15 00:57:53.447440 kubelet[1916]: E0515 00:57:53.447385 1916 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-wfvkc lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-4b7cb" podUID="906e07c9-ee7a-4a74-9631-d8899dcedc9f"
May 15 00:57:53.470609 sshd[3753]: Accepted publickey for core from 10.0.0.1 port 38236 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4
May 15 00:57:53.471687 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:53.475835 systemd-logind[1205]: New session 27 of user core.
May 15 00:57:53.476564 systemd[1]: Started session-27.scope.
May 15 00:57:53.727208 kubelet[1916]: I0515 00:57:53.727167 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8635ed43-f669-4c03-ae8c-3852cb34dd88" path="/var/lib/kubelet/pods/8635ed43-f669-4c03-ae8c-3852cb34dd88/volumes"
May 15 00:57:53.727673 kubelet[1916]: I0515 00:57:53.727647 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915bd244-03af-48ed-b4fc-e27f093c226f" path="/var/lib/kubelet/pods/915bd244-03af-48ed-b4fc-e27f093c226f/volumes"
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986725 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-net\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986754 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-run\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986769 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-kernel\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986787 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-etc-cni-netd\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986804 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-cgroup\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.986833 kubelet[1916]: I0515 00:57:53.986831 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-bpf-maps\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986845 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-xtables-lock\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986867 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hubble-tls\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986885 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-clustermesh-secrets\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986839 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986901 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-ipsec-secrets\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987106 kubelet[1916]: I0515 00:57:53.986920 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfvkc\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-kube-api-access-wfvkc\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987241 kubelet[1916]: I0515 00:57:53.986936 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hostproc\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987241 kubelet[1916]: I0515 00:57:53.986948 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cni-path\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987241 kubelet[1916]: I0515 00:57:53.986963 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-config-path\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987241 kubelet[1916]: I0515 00:57:53.986975 1916 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-lib-modules\") pod \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\" (UID: \"906e07c9-ee7a-4a74-9631-d8899dcedc9f\") "
May 15 00:57:53.987241 kubelet[1916]: I0515 00:57:53.987022 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-run\") on node \"localhost\" DevicePath \"\""
May 15 00:57:53.987351 kubelet[1916]: I0515 00:57:53.986876 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987351 kubelet[1916]: I0515 00:57:53.986859 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987351 kubelet[1916]: I0515 00:57:53.986877 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987351 kubelet[1916]: I0515 00:57:53.986886 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987351 kubelet[1916]: I0515 00:57:53.986916 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987483 kubelet[1916]: I0515 00:57:53.987057 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987483 kubelet[1916]: I0515 00:57:53.987252 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987483 kubelet[1916]: I0515 00:57:53.987274 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hostproc" (OuterVolumeSpecName: "hostproc") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.987483 kubelet[1916]: I0515 00:57:53.987422 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cni-path" (OuterVolumeSpecName: "cni-path") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 00:57:53.989261 kubelet[1916]: I0515 00:57:53.989229 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 00:57:53.990805 systemd[1]: var-lib-kubelet-pods-906e07c9\x2dee7a\x2d4a74\x2d9631\x2dd8899dcedc9f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 15 00:57:53.992109 kubelet[1916]: I0515 00:57:53.992081 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:57:53.992323 kubelet[1916]: I0515 00:57:53.992282 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:57:53.992521 kubelet[1916]: I0515 00:57:53.992497 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-kube-api-access-wfvkc" (OuterVolumeSpecName: "kube-api-access-wfvkc") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "kube-api-access-wfvkc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 00:57:53.992852 kubelet[1916]: I0515 00:57:53.992826 1916 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "906e07c9-ee7a-4a74-9631-d8899dcedc9f" (UID: "906e07c9-ee7a-4a74-9631-d8899dcedc9f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 00:57:54.087191 kubelet[1916]: I0515 00:57:54.087155 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087191 kubelet[1916]: I0515 00:57:54.087180 1916 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087191 kubelet[1916]: I0515 00:57:54.087188 1916 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087196 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087204 1916 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087212 1916 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087219 1916 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087225 1916 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087232 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087239 1916 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wfvkc\" (UniqueName: \"kubernetes.io/projected/906e07c9-ee7a-4a74-9631-d8899dcedc9f-kube-api-access-wfvkc\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087301 kubelet[1916]: I0515 00:57:54.087246 1916 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-hostproc\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087487 kubelet[1916]: I0515 00:57:54.087252 1916 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cni-path\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087487 kubelet[1916]: I0515 00:57:54.087258 1916 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/906e07c9-ee7a-4a74-9631-d8899dcedc9f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.087487 kubelet[1916]: I0515 00:57:54.087265 1916 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906e07c9-ee7a-4a74-9631-d8899dcedc9f-lib-modules\") on node \"localhost\" DevicePath \"\""
May 15 00:57:54.488427 systemd[1]: var-lib-kubelet-pods-906e07c9\x2dee7a\x2d4a74\x2d9631\x2dd8899dcedc9f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfvkc.mount: Deactivated successfully.
May 15 00:57:54.488521 systemd[1]: var-lib-kubelet-pods-906e07c9\x2dee7a\x2d4a74\x2d9631\x2dd8899dcedc9f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 15 00:57:54.488572 systemd[1]: var-lib-kubelet-pods-906e07c9\x2dee7a\x2d4a74\x2d9631\x2dd8899dcedc9f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 15 00:57:54.725210 kubelet[1916]: E0515 00:57:54.725167 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:54.949583 systemd[1]: Removed slice kubepods-burstable-pod906e07c9_ee7a_4a74_9631_d8899dcedc9f.slice.
May 15 00:57:54.984627 systemd[1]: Created slice kubepods-burstable-podaeb8c74f_fa28_4b52_a174_a91791363775.slice.
May 15 00:57:54.993705 kubelet[1916]: I0515 00:57:54.993667 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-cilium-cgroup\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993705 kubelet[1916]: I0515 00:57:54.993701 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-xtables-lock\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993705 kubelet[1916]: I0515 00:57:54.993719 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-hostproc\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993733 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-cni-path\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993749 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aeb8c74f-fa28-4b52-a174-a91791363775-cilium-config-path\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993765 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aeb8c74f-fa28-4b52-a174-a91791363775-hubble-tls\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993781 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t5p\" (UniqueName: \"kubernetes.io/projected/aeb8c74f-fa28-4b52-a174-a91791363775-kube-api-access-85t5p\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993826 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-cilium-run\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.993938 kubelet[1916]: I0515 00:57:54.993862 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-lib-modules\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994092 kubelet[1916]: I0515 00:57:54.993880 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aeb8c74f-fa28-4b52-a174-a91791363775-clustermesh-secrets\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994092 kubelet[1916]: I0515 00:57:54.993907 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-etc-cni-netd\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994092 kubelet[1916]: I0515 00:57:54.993922 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-host-proc-sys-net\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994092 kubelet[1916]: I0515 00:57:54.993937 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-bpf-maps\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994092 kubelet[1916]: I0515 00:57:54.993950 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aeb8c74f-fa28-4b52-a174-a91791363775-host-proc-sys-kernel\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:54.994201 kubelet[1916]: I0515 00:57:54.994061 1916 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aeb8c74f-fa28-4b52-a174-a91791363775-cilium-ipsec-secrets\") pod \"cilium-n69gh\" (UID: \"aeb8c74f-fa28-4b52-a174-a91791363775\") " pod="kube-system/cilium-n69gh" May 15 00:57:55.286707 kubelet[1916]: E0515 00:57:55.286585 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:55.287389 env[1218]: 
time="2025-05-15T00:57:55.287334367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n69gh,Uid:aeb8c74f-fa28-4b52-a174-a91791363775,Namespace:kube-system,Attempt:0,}" May 15 00:57:55.300022 env[1218]: time="2025-05-15T00:57:55.299940143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:55.300022 env[1218]: time="2025-05-15T00:57:55.299978265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:55.300022 env[1218]: time="2025-05-15T00:57:55.299999296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:55.300196 env[1218]: time="2025-05-15T00:57:55.300143673Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6 pid=3781 runtime=io.containerd.runc.v2 May 15 00:57:55.309887 systemd[1]: Started cri-containerd-61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6.scope. 
May 15 00:57:55.328584 env[1218]: time="2025-05-15T00:57:55.328542639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n69gh,Uid:aeb8c74f-fa28-4b52-a174-a91791363775,Namespace:kube-system,Attempt:0,} returns sandbox id \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\""
May 15 00:57:55.329468 kubelet[1916]: E0515 00:57:55.329448 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:55.332674 env[1218]: time="2025-05-15T00:57:55.332636681Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:57:55.344398 env[1218]: time="2025-05-15T00:57:55.344340138Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90\""
May 15 00:57:55.344800 env[1218]: time="2025-05-15T00:57:55.344775403Z" level=info msg="StartContainer for \"86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90\""
May 15 00:57:55.357538 systemd[1]: Started cri-containerd-86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90.scope.
May 15 00:57:55.382239 env[1218]: time="2025-05-15T00:57:55.382164669Z" level=info msg="StartContainer for \"86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90\" returns successfully"
May 15 00:57:55.389077 systemd[1]: cri-containerd-86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90.scope: Deactivated successfully.
May 15 00:57:55.417847 env[1218]: time="2025-05-15T00:57:55.417800949Z" level=info msg="shim disconnected" id=86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90
May 15 00:57:55.417847 env[1218]: time="2025-05-15T00:57:55.417843481Z" level=warning msg="cleaning up after shim disconnected" id=86b36a8884424c0240c197346e4fd517f6e910e418d0aa1575ace11352b36c90 namespace=k8s.io
May 15 00:57:55.417847 env[1218]: time="2025-05-15T00:57:55.417851857Z" level=info msg="cleaning up dead shim"
May 15 00:57:55.424078 env[1218]: time="2025-05-15T00:57:55.424047684Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n"
May 15 00:57:55.727742 kubelet[1916]: I0515 00:57:55.727680 1916 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906e07c9-ee7a-4a74-9631-d8899dcedc9f" path="/var/lib/kubelet/pods/906e07c9-ee7a-4a74-9631-d8899dcedc9f/volumes"
May 15 00:57:55.778134 kubelet[1916]: E0515 00:57:55.778065 1916 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:57:55.949689 kubelet[1916]: E0515 00:57:55.949660 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:55.951753 env[1218]: time="2025-05-15T00:57:55.951692191Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:57:55.962831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856662237.mount: Deactivated successfully.
May 15 00:57:55.965693 env[1218]: time="2025-05-15T00:57:55.965648133Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04\""
May 15 00:57:55.966192 env[1218]: time="2025-05-15T00:57:55.966152920Z" level=info msg="StartContainer for \"fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04\""
May 15 00:57:55.981065 systemd[1]: Started cri-containerd-fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04.scope.
May 15 00:57:56.003538 env[1218]: time="2025-05-15T00:57:56.003490315Z" level=info msg="StartContainer for \"fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04\" returns successfully"
May 15 00:57:56.007929 systemd[1]: cri-containerd-fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04.scope: Deactivated successfully.
May 15 00:57:56.035872 env[1218]: time="2025-05-15T00:57:56.035743874Z" level=info msg="shim disconnected" id=fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04
May 15 00:57:56.036078 env[1218]: time="2025-05-15T00:57:56.035896266Z" level=warning msg="cleaning up after shim disconnected" id=fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04 namespace=k8s.io
May 15 00:57:56.036078 env[1218]: time="2025-05-15T00:57:56.035972773Z" level=info msg="cleaning up dead shim"
May 15 00:57:56.043674 env[1218]: time="2025-05-15T00:57:56.043631545Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3923 runtime=io.containerd.runc.v2\n"
May 15 00:57:56.488955 systemd[1]: run-containerd-runc-k8s.io-fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04-runc.CnSArN.mount: Deactivated successfully.
May 15 00:57:56.489057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdebf0f2e44a5f56be7b2e6929efad7bdb1f7dc30a2d4d39794123906ea4aa04-rootfs.mount: Deactivated successfully.
May 15 00:57:56.953450 kubelet[1916]: E0515 00:57:56.953414 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:56.954896 env[1218]: time="2025-05-15T00:57:56.954827953Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:57:56.971577 env[1218]: time="2025-05-15T00:57:56.971516222Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac\""
May 15 00:57:56.972138 env[1218]: time="2025-05-15T00:57:56.972097936Z" level=info msg="StartContainer for \"c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac\""
May 15 00:57:56.987644 systemd[1]: Started cri-containerd-c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac.scope.
May 15 00:57:57.009956 env[1218]: time="2025-05-15T00:57:57.009903125Z" level=info msg="StartContainer for \"c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac\" returns successfully"
May 15 00:57:57.011051 systemd[1]: cri-containerd-c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac.scope: Deactivated successfully.
May 15 00:57:57.038744 env[1218]: time="2025-05-15T00:57:57.038689740Z" level=info msg="shim disconnected" id=c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac
May 15 00:57:57.038744 env[1218]: time="2025-05-15T00:57:57.038742010Z" level=warning msg="cleaning up after shim disconnected" id=c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac namespace=k8s.io
May 15 00:57:57.038744 env[1218]: time="2025-05-15T00:57:57.038754604Z" level=info msg="cleaning up dead shim"
May 15 00:57:57.046485 env[1218]: time="2025-05-15T00:57:57.046408971Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3979 runtime=io.containerd.runc.v2\n"
May 15 00:57:57.488762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c437924626bd88740cd34a7296c04c9c5bded2690698996abbed69bfcfb373ac-rootfs.mount: Deactivated successfully.
May 15 00:57:57.724857 kubelet[1916]: E0515 00:57:57.724828 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:57.887957 kubelet[1916]: I0515 00:57:57.887868 1916 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:57:57Z","lastTransitionTime":"2025-05-15T00:57:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:57:57.957328 kubelet[1916]: E0515 00:57:57.957272 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:57.959058 env[1218]: time="2025-05-15T00:57:57.959026752Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:57:57.969541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180795721.mount: Deactivated successfully.
May 15 00:57:57.973114 env[1218]: time="2025-05-15T00:57:57.973073320Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f\""
May 15 00:57:57.973531 env[1218]: time="2025-05-15T00:57:57.973512110Z" level=info msg="StartContainer for \"10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f\""
May 15 00:57:57.986755 systemd[1]: Started cri-containerd-10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f.scope.
May 15 00:57:58.003730 systemd[1]: cri-containerd-10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f.scope: Deactivated successfully.
May 15 00:57:58.005133 env[1218]: time="2025-05-15T00:57:58.005093400Z" level=info msg="StartContainer for \"10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f\" returns successfully"
May 15 00:57:58.025441 env[1218]: time="2025-05-15T00:57:58.025373718Z" level=info msg="shim disconnected" id=10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f
May 15 00:57:58.025441 env[1218]: time="2025-05-15T00:57:58.025437852Z" level=warning msg="cleaning up after shim disconnected" id=10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f namespace=k8s.io
May 15 00:57:58.025756 env[1218]: time="2025-05-15T00:57:58.025449624Z" level=info msg="cleaning up dead shim"
May 15 00:57:58.038479 env[1218]: time="2025-05-15T00:57:58.038416294Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4033 runtime=io.containerd.runc.v2\n"
May 15 00:57:58.488742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10daecbfd06cb0de125b0d7769468120fa552aa53e9981c4e1f2965223354b4f-rootfs.mount: Deactivated successfully.
May 15 00:57:58.961596 kubelet[1916]: E0515 00:57:58.961575 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:58.963095 env[1218]: time="2025-05-15T00:57:58.963063281Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:57:58.977290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056758929.mount: Deactivated successfully.
May 15 00:57:58.980783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007235408.mount: Deactivated successfully.
May 15 00:57:58.984017 env[1218]: time="2025-05-15T00:57:58.983953717Z" level=info msg="CreateContainer within sandbox \"61c3b8569b57b43497aea5292254d5013ec3c02537309a64c3a0c8558eed87d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfacfd835c8a3f237a855b4b6ddcdc400a0a1bd10923c721795ec874216fddb9\""
May 15 00:57:58.984623 env[1218]: time="2025-05-15T00:57:58.984580465Z" level=info msg="StartContainer for \"bfacfd835c8a3f237a855b4b6ddcdc400a0a1bd10923c721795ec874216fddb9\""
May 15 00:57:58.997348 systemd[1]: Started cri-containerd-bfacfd835c8a3f237a855b4b6ddcdc400a0a1bd10923c721795ec874216fddb9.scope.
May 15 00:57:59.022189 env[1218]: time="2025-05-15T00:57:59.022126060Z" level=info msg="StartContainer for \"bfacfd835c8a3f237a855b4b6ddcdc400a0a1bd10923c721795ec874216fddb9\" returns successfully"
May 15 00:57:59.283021 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 15 00:57:59.966308 kubelet[1916]: E0515 00:57:59.966277 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:57:59.986267 kubelet[1916]: I0515 00:57:59.986199 1916 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n69gh" podStartSLOduration=5.986180228 podStartE2EDuration="5.986180228s" podCreationTimestamp="2025-05-15 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:59.985755335 +0000 UTC m=+94.363566754" watchObservedRunningTime="2025-05-15 00:57:59.986180228 +0000 UTC m=+94.363991647"
May 15 00:58:01.288097 kubelet[1916]: E0515 00:58:01.288063 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:01.747562 systemd-networkd[1035]: lxc_health: Link UP
May 15 00:58:01.766613 systemd-networkd[1035]: lxc_health: Gained carrier
May 15 00:58:01.767368 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 00:58:02.725063 kubelet[1916]: E0515 00:58:02.725033 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:03.288927 kubelet[1916]: E0515 00:58:03.288887 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:03.600181 systemd-networkd[1035]: lxc_health: Gained IPv6LL
May 15 00:58:03.975917 kubelet[1916]: E0515 00:58:03.975885 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:04.977217 kubelet[1916]: E0515 00:58:04.977177 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:06.724596 kubelet[1916]: E0515 00:58:06.724559 1916 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:58:07.967792 sshd[3753]: pam_unix(sshd:session): session closed for user core
May 15 00:58:07.970245 systemd[1]: sshd@26-10.0.0.131:22-10.0.0.1:38236.service: Deactivated successfully.
May 15 00:58:07.970894 systemd[1]: session-27.scope: Deactivated successfully.
May 15 00:58:07.971377 systemd-logind[1205]: Session 27 logged out. Waiting for processes to exit.
May 15 00:58:07.971971 systemd-logind[1205]: Removed session 27.