May 13 00:41:04.846287 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon May 12 23:08:12 -00 2025
May 13 00:41:04.846305 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:04.846314 kernel: BIOS-provided physical RAM map:
May 13 00:41:04.846321 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 13 00:41:04.846326 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 13 00:41:04.846331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 13 00:41:04.846339 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
May 13 00:41:04.846346 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
May 13 00:41:04.846352 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 13 00:41:04.846359 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 13 00:41:04.846365 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 13 00:41:04.846371 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 13 00:41:04.846376 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 13 00:41:04.846382 kernel: NX (Execute Disable) protection: active
May 13 00:41:04.846390 kernel: SMBIOS 2.8 present.
May 13 00:41:04.846396 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
May 13 00:41:04.846402 kernel: Hypervisor detected: KVM
May 13 00:41:04.846408 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 13 00:41:04.846413 kernel: kvm-clock: cpu 0, msr 52196001, primary cpu clock
May 13 00:41:04.846419 kernel: kvm-clock: using sched offset of 2461071572 cycles
May 13 00:41:04.846426 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 13 00:41:04.846432 kernel: tsc: Detected 2794.746 MHz processor
May 13 00:41:04.846438 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 13 00:41:04.846445 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 13 00:41:04.846451 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
May 13 00:41:04.846457 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 13 00:41:04.846463 kernel: Using GB pages for direct mapping
May 13 00:41:04.846469 kernel: ACPI: Early table checksum verification disabled
May 13 00:41:04.846475 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
May 13 00:41:04.846481 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846487 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846493 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846500 kernel: ACPI: FACS 0x000000009CFE0000 000040
May 13 00:41:04.846506 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846512 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846518 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846524 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:41:04.846530 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
May 13 00:41:04.846536 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
May 13 00:41:04.846542 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
May 13 00:41:04.846565 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
May 13 00:41:04.846571 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
May 13 00:41:04.846578 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
May 13 00:41:04.846584 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
May 13 00:41:04.846590 kernel: No NUMA configuration found
May 13 00:41:04.846597 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
May 13 00:41:04.846604 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
May 13 00:41:04.846611 kernel: Zone ranges:
May 13 00:41:04.846617 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 13 00:41:04.846623 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
May 13 00:41:04.846630 kernel: Normal empty
May 13 00:41:04.846636 kernel: Movable zone start for each node
May 13 00:41:04.846642 kernel: Early memory node ranges
May 13 00:41:04.846649 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 13 00:41:04.846655 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
May 13 00:41:04.846662 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
May 13 00:41:04.846669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 13 00:41:04.846675 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 13 00:41:04.846682 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 13 00:41:04.846688 kernel: ACPI: PM-Timer IO Port: 0x608
May 13 00:41:04.846694 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 13 00:41:04.846701 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 13 00:41:04.846707 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 13 00:41:04.846713 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 13 00:41:04.846720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 13 00:41:04.846727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 13 00:41:04.846734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 13 00:41:04.846740 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 13 00:41:04.846746 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 13 00:41:04.846753 kernel: TSC deadline timer available
May 13 00:41:04.846759 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
May 13 00:41:04.846765 kernel: kvm-guest: KVM setup pv remote TLB flush
May 13 00:41:04.846772 kernel: kvm-guest: setup PV sched yield
May 13 00:41:04.846778 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 13 00:41:04.846786 kernel: Booting paravirtualized kernel on KVM
May 13 00:41:04.846792 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 13 00:41:04.846799 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
May 13 00:41:04.846808 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
May 13 00:41:04.846817 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
May 13 00:41:04.846825 kernel: pcpu-alloc: [0] 0 1 2 3
May 13 00:41:04.846833 kernel: kvm-guest: setup async PF for cpu 0
May 13 00:41:04.846839 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
May 13 00:41:04.846846 kernel: kvm-guest: PV spinlocks enabled
May 13 00:41:04.846853 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 13 00:41:04.846860 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
May 13 00:41:04.846866 kernel: Policy zone: DMA32
May 13 00:41:04.846873 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:04.846880 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:41:04.846887 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:41:04.846893 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:41:04.846900 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:41:04.846908 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved)
May 13 00:41:04.846914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:41:04.846921 kernel: ftrace: allocating 34584 entries in 136 pages
May 13 00:41:04.846927 kernel: ftrace: allocated 136 pages with 2 groups
May 13 00:41:04.846933 kernel: rcu: Hierarchical RCU implementation.
May 13 00:41:04.846940 kernel: rcu: RCU event tracing is enabled.
May 13 00:41:04.846947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:41:04.846953 kernel: Rude variant of Tasks RCU enabled.
May 13 00:41:04.846960 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:41:04.846967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:41:04.846974 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:41:04.846980 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
May 13 00:41:04.846986 kernel: random: crng init done
May 13 00:41:04.846993 kernel: Console: colour VGA+ 80x25
May 13 00:41:04.846999 kernel: printk: console [ttyS0] enabled
May 13 00:41:04.847005 kernel: ACPI: Core revision 20210730
May 13 00:41:04.847012 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 13 00:41:04.847018 kernel: APIC: Switch to symmetric I/O mode setup
May 13 00:41:04.847026 kernel: x2apic enabled
May 13 00:41:04.847032 kernel: Switched APIC routing to physical x2apic.
May 13 00:41:04.847038 kernel: kvm-guest: setup PV IPIs
May 13 00:41:04.847045 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 13 00:41:04.847051 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 13 00:41:04.847057 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
May 13 00:41:04.847064 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 13 00:41:04.847070 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 13 00:41:04.847077 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 13 00:41:04.847088 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 13 00:41:04.847095 kernel: Spectre V2 : Mitigation: Retpolines
May 13 00:41:04.847102 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 13 00:41:04.847110 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 13 00:41:04.847116 kernel: RETBleed: Mitigation: untrained return thunk
May 13 00:41:04.847130 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 13 00:41:04.847136 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 13 00:41:04.847144 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 13 00:41:04.847151 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 13 00:41:04.847159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 13 00:41:04.847166 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 13 00:41:04.847173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 13 00:41:04.847180 kernel: Freeing SMP alternatives memory: 32K
May 13 00:41:04.847186 kernel: pid_max: default: 32768 minimum: 301
May 13 00:41:04.847193 kernel: LSM: Security Framework initializing
May 13 00:41:04.847200 kernel: SELinux: Initializing.
May 13 00:41:04.847207 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:41:04.847214 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:41:04.847222 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 13 00:41:04.847229 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 13 00:41:04.847237 kernel: ... version: 0
May 13 00:41:04.847244 kernel: ... bit width: 48
May 13 00:41:04.847251 kernel: ... generic registers: 6
May 13 00:41:04.847258 kernel: ... value mask: 0000ffffffffffff
May 13 00:41:04.847264 kernel: ... max period: 00007fffffffffff
May 13 00:41:04.847271 kernel: ... fixed-purpose events: 0
May 13 00:41:04.847279 kernel: ... event mask: 000000000000003f
May 13 00:41:04.847285 kernel: signal: max sigframe size: 1776
May 13 00:41:04.847292 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:41:04.847299 kernel: smp: Bringing up secondary CPUs ...
May 13 00:41:04.847305 kernel: x86: Booting SMP configuration:
May 13 00:41:04.847312 kernel: .... node #0, CPUs: #1
May 13 00:41:04.847319 kernel: kvm-clock: cpu 1, msr 52196041, secondary cpu clock
May 13 00:41:04.847325 kernel: kvm-guest: setup async PF for cpu 1
May 13 00:41:04.847332 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
May 13 00:41:04.847340 kernel: #2
May 13 00:41:04.847346 kernel: kvm-clock: cpu 2, msr 52196081, secondary cpu clock
May 13 00:41:04.847353 kernel: kvm-guest: setup async PF for cpu 2
May 13 00:41:04.847360 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
May 13 00:41:04.847366 kernel: #3
May 13 00:41:04.847373 kernel: kvm-clock: cpu 3, msr 521960c1, secondary cpu clock
May 13 00:41:04.847379 kernel: kvm-guest: setup async PF for cpu 3
May 13 00:41:04.847386 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
May 13 00:41:04.847393 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:41:04.847401 kernel: smpboot: Max logical packages: 1
May 13 00:41:04.847407 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
May 13 00:41:04.847414 kernel: devtmpfs: initialized
May 13 00:41:04.847421 kernel: x86/mm: Memory block size: 128MB
May 13 00:41:04.847428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:41:04.847434 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:41:04.847441 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:41:04.847448 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:41:04.847454 kernel: audit: initializing netlink subsys (disabled)
May 13 00:41:04.847462 kernel: audit: type=2000 audit(1747096863.921:1): state=initialized audit_enabled=0 res=1
May 13 00:41:04.847469 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:41:04.847476 kernel: thermal_sys: Registered thermal governor 'user_space'
May 13 00:41:04.847482 kernel: cpuidle: using governor menu
May 13 00:41:04.847489 kernel: ACPI: bus type PCI registered
May 13 00:41:04.847496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:41:04.847502 kernel: dca service started, version 1.12.1
May 13 00:41:04.847509 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 13 00:41:04.847516 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 13 00:41:04.847524 kernel: PCI: Using configuration type 1 for base access
May 13 00:41:04.847530 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 13 00:41:04.847537 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:41:04.847544 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:41:04.847561 kernel: ACPI: Added _OSI(Module Device)
May 13 00:41:04.847568 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:41:04.847575 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:41:04.847582 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:41:04.847588 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 13 00:41:04.847596 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 13 00:41:04.847603 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 13 00:41:04.847610 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:41:04.847616 kernel: ACPI: Interpreter enabled
May 13 00:41:04.847623 kernel: ACPI: PM: (supports S0 S3 S5)
May 13 00:41:04.847630 kernel: ACPI: Using IOAPIC for interrupt routing
May 13 00:41:04.847637 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 13 00:41:04.847643 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 13 00:41:04.847650 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:41:04.847756 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:41:04.847836 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 13 00:41:04.847906 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 13 00:41:04.847915 kernel: PCI host bridge to bus 0000:00
May 13 00:41:04.847985 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 13 00:41:04.848047 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 13 00:41:04.848106 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 13 00:41:04.848178 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
May 13 00:41:04.848236 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 13 00:41:04.848295 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 13 00:41:04.848359 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:41:04.848444 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 13 00:41:04.848519 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
May 13 00:41:04.848606 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
May 13 00:41:04.848675 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
May 13 00:41:04.848742 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
May 13 00:41:04.848810 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 13 00:41:04.848894 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:41:04.848992 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
May 13 00:41:04.849066 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
May 13 00:41:04.849158 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
May 13 00:41:04.849242 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
May 13 00:41:04.849312 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
May 13 00:41:04.849379 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
May 13 00:41:04.849448 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
May 13 00:41:04.849519 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 13 00:41:04.849604 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
May 13 00:41:04.849676 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
May 13 00:41:04.849789 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
May 13 00:41:04.849871 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
May 13 00:41:04.850005 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 13 00:41:04.850077 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 13 00:41:04.850159 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 13 00:41:04.850229 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
May 13 00:41:04.850299 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
May 13 00:41:04.850372 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 13 00:41:04.850439 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 13 00:41:04.850449 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 13 00:41:04.850456 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 13 00:41:04.850463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 13 00:41:04.850470 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 13 00:41:04.850478 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 13 00:41:04.850485 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 13 00:41:04.850492 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 13 00:41:04.850499 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 13 00:41:04.850505 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 13 00:41:04.850512 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 13 00:41:04.850519 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 13 00:41:04.850526 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 13 00:41:04.850532 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 13 00:41:04.850540 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 13 00:41:04.850560 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 13 00:41:04.850567 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 13 00:41:04.850574 kernel: iommu: Default domain type: Translated
May 13 00:41:04.850581 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 13 00:41:04.850649 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 13 00:41:04.850716 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 13 00:41:04.850782 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 13 00:41:04.850792 kernel: vgaarb: loaded
May 13 00:41:04.850801 kernel: pps_core: LinuxPPS API ver. 1 registered
May 13 00:41:04.850808 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 13 00:41:04.850815 kernel: PTP clock support registered
May 13 00:41:04.850822 kernel: PCI: Using ACPI for IRQ routing
May 13 00:41:04.850829 kernel: PCI: pci_cache_line_size set to 64 bytes
May 13 00:41:04.850835 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 13 00:41:04.850842 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
May 13 00:41:04.850849 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 13 00:41:04.850856 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 13 00:41:04.850864 kernel: clocksource: Switched to clocksource kvm-clock
May 13 00:41:04.850870 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:41:04.850877 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:41:04.850884 kernel: pnp: PnP ACPI init
May 13 00:41:04.850965 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
May 13 00:41:04.850975 kernel: pnp: PnP ACPI: found 6 devices
May 13 00:41:04.850982 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 13 00:41:04.850989 kernel: NET: Registered PF_INET protocol family
May 13 00:41:04.850997 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:41:04.851004 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:41:04.851011 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:41:04.851018 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:41:04.851025 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 13 00:41:04.851031 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:41:04.851038 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:04.851045 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:41:04.851053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:41:04.851060 kernel: NET: Registered PF_XDP protocol family
May 13 00:41:04.851130 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 13 00:41:04.851191 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 13 00:41:04.851251 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 13 00:41:04.851308 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
May 13 00:41:04.851373 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 13 00:41:04.851433 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 13 00:41:04.851442 kernel: PCI: CLS 0 bytes, default 64
May 13 00:41:04.851451 kernel: Initialise system trusted keyrings
May 13 00:41:04.851458 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:41:04.851465 kernel: Key type asymmetric registered
May 13 00:41:04.851472 kernel: Asymmetric key parser 'x509' registered
May 13 00:41:04.851478 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 00:41:04.851485 kernel: io scheduler mq-deadline registered
May 13 00:41:04.851492 kernel: io scheduler kyber registered
May 13 00:41:04.851499 kernel: io scheduler bfq registered
May 13 00:41:04.851506 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 13 00:41:04.851514 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 13 00:41:04.851521 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
May 13 00:41:04.851528 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
May 13 00:41:04.851535 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:41:04.851542 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 13 00:41:04.851560 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 13 00:41:04.851567 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 13 00:41:04.851574 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 13 00:41:04.851645 kernel: rtc_cmos 00:04: RTC can wake from S4
May 13 00:41:04.851657 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 13 00:41:04.851718 kernel: rtc_cmos 00:04: registered as rtc0
May 13 00:41:04.851781 kernel: rtc_cmos 00:04: setting system clock to 2025-05-13T00:41:04 UTC (1747096864)
May 13 00:41:04.851842 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
May 13 00:41:04.851852 kernel: NET: Registered PF_INET6 protocol family
May 13 00:41:04.851859 kernel: Segment Routing with IPv6
May 13 00:41:04.851865 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:41:04.851872 kernel: NET: Registered PF_PACKET protocol family
May 13 00:41:04.851881 kernel: Key type dns_resolver registered
May 13 00:41:04.851888 kernel: IPI shorthand broadcast: enabled
May 13 00:41:04.851895 kernel: sched_clock: Marking stable (433452321, 102601169)->(585939123, -49885633)
May 13 00:41:04.851901 kernel: registered taskstats version 1
May 13 00:41:04.851908 kernel: Loading compiled-in X.509 certificates
May 13 00:41:04.851915 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 52373c12592f53b0567bb941a0a0fec888191095'
May 13 00:41:04.851922 kernel: Key type .fscrypt registered
May 13 00:41:04.851928 kernel: Key type fscrypt-provisioning registered
May 13 00:41:04.851935 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:41:04.851943 kernel: ima: Allocated hash algorithm: sha1
May 13 00:41:04.851950 kernel: ima: No architecture policies found
May 13 00:41:04.851956 kernel: clk: Disabling unused clocks
May 13 00:41:04.851963 kernel: Freeing unused kernel image (initmem) memory: 47456K
May 13 00:41:04.851970 kernel: Write protecting the kernel read-only data: 28672k
May 13 00:41:04.851977 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 13 00:41:04.851983 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 13 00:41:04.851990 kernel: Run /init as init process
May 13 00:41:04.851997 kernel: with arguments:
May 13 00:41:04.852005 kernel: /init
May 13 00:41:04.852011 kernel: with environment:
May 13 00:41:04.852018 kernel: HOME=/
May 13 00:41:04.852025 kernel: TERM=linux
May 13 00:41:04.852031 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:41:04.852041 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:41:04.852050 systemd[1]: Detected virtualization kvm.
May 13 00:41:04.852058 systemd[1]: Detected architecture x86-64.
May 13 00:41:04.852066 systemd[1]: Running in initrd.
May 13 00:41:04.852073 systemd[1]: No hostname configured, using default hostname.
May 13 00:41:04.852080 systemd[1]: Hostname set to .
May 13 00:41:04.852087 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:41:04.852094 systemd[1]: Queued start job for default target initrd.target.
May 13 00:41:04.852101 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:41:04.852109 systemd[1]: Reached target cryptsetup.target.
May 13 00:41:04.852116 systemd[1]: Reached target paths.target.
May 13 00:41:04.852133 systemd[1]: Reached target slices.target.
May 13 00:41:04.852146 systemd[1]: Reached target swap.target.
May 13 00:41:04.852155 systemd[1]: Reached target timers.target.
May 13 00:41:04.852163 systemd[1]: Listening on iscsid.socket.
May 13 00:41:04.852171 systemd[1]: Listening on iscsiuio.socket.
May 13 00:41:04.852179 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:41:04.852187 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:41:04.852194 systemd[1]: Listening on systemd-journald.socket.
May 13 00:41:04.852202 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:41:04.852209 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:41:04.852217 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:41:04.852224 systemd[1]: Reached target sockets.target.
May 13 00:41:04.852232 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:41:04.852239 systemd[1]: Finished network-cleanup.service.
May 13 00:41:04.852250 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:41:04.852258 systemd[1]: Starting systemd-journald.service...
May 13 00:41:04.852265 systemd[1]: Starting systemd-modules-load.service...
May 13 00:41:04.852273 systemd[1]: Starting systemd-resolved.service...
May 13 00:41:04.852280 systemd[1]: Starting systemd-vconsole-setup.service...
May 13 00:41:04.852287 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:41:04.852297 systemd-journald[198]: Journal started
May 13 00:41:04.852335 systemd-journald[198]: Runtime Journal (/run/log/journal/76caea93e1694e76ab19dac50ce64713) is 6.0M, max 48.5M, 42.5M free.
May 13 00:41:04.850020 systemd-modules-load[199]: Inserted module 'overlay'
May 13 00:41:04.887141 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:41:04.887180 kernel: Bridge firewalling registered
May 13 00:41:04.887194 kernel: audit: type=1130 audit(1747096864.886:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.864577 systemd-resolved[200]: Positive Trust Anchors:
May 13 00:41:04.864600 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:41:04.864637 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 13 00:41:04.899774 systemd[1]: Started systemd-journald.service.
May 13 00:41:04.899808 kernel: audit: type=1130 audit(1747096864.899:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.867106 systemd-resolved[200]: Defaulting to hostname 'linux'.
May 13 00:41:04.886026 systemd-modules-load[199]: Inserted module 'br_netfilter'
May 13 00:41:04.902809 systemd[1]: Started systemd-resolved.service.
May 13 00:41:04.903598 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:41:04.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.905971 systemd[1]: Finished systemd-vconsole-setup.service.
May 13 00:41:04.920698 kernel: audit: type=1130 audit(1747096864.902:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.920727 kernel: audit: type=1130 audit(1747096864.905:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.920747 kernel: audit: type=1130 audit(1747096864.905:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.920757 kernel: SCSI subsystem initialized
May 13 00:41:04.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.906497 systemd[1]: Reached target nss-lookup.target.
May 13 00:41:04.920721 systemd[1]: Starting dracut-cmdline-ask.service...
May 13 00:41:04.921751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:41:04.927526 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:41:04.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.933582 kernel: audit: type=1130 audit(1747096864.927:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.933616 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:41:04.936718 kernel: device-mapper: uevent: version 1.0.3
May 13 00:41:04.936763 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 13 00:41:04.936943 systemd[1]: Finished dracut-cmdline-ask.service.
May 13 00:41:04.942284 kernel: audit: type=1130 audit(1747096864.937:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.938405 systemd[1]: Starting dracut-cmdline.service...
May 13 00:41:04.943095 systemd-modules-load[199]: Inserted module 'dm_multipath'
May 13 00:41:04.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.943669 systemd[1]: Finished systemd-modules-load.service.
May 13 00:41:04.950153 kernel: audit: type=1130 audit(1747096864.944:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.950186 dracut-cmdline[217]: dracut-dracut-053
May 13 00:41:04.950186 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b36b4a233fdb797f33aa4a04cfdf4a35ceaebd893b04da45dfb96d44a18c6166
May 13 00:41:04.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.945627 systemd[1]: Starting systemd-sysctl.service...
May 13 00:41:04.961253 kernel: audit: type=1130 audit(1747096864.956:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:04.954021 systemd[1]: Finished systemd-sysctl.service.
May 13 00:41:05.005567 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:41:05.025582 kernel: iscsi: registered transport (tcp)
May 13 00:41:05.046583 kernel: iscsi: registered transport (qla4xxx)
May 13 00:41:05.046647 kernel: QLogic iSCSI HBA Driver
May 13 00:41:05.067368 systemd[1]: Finished dracut-cmdline.service.
May 13 00:41:05.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:05.069239 systemd[1]: Starting dracut-pre-udev.service...
May 13 00:41:05.113591 kernel: raid6: avx2x4 gen() 25403 MB/s
May 13 00:41:05.130573 kernel: raid6: avx2x4 xor() 7062 MB/s
May 13 00:41:05.147573 kernel: raid6: avx2x2 gen() 30464 MB/s
May 13 00:41:05.164573 kernel: raid6: avx2x2 xor() 18624 MB/s
May 13 00:41:05.181573 kernel: raid6: avx2x1 gen() 26050 MB/s
May 13 00:41:05.198570 kernel: raid6: avx2x1 xor() 15161 MB/s
May 13 00:41:05.215573 kernel: raid6: sse2x4 gen() 14646 MB/s
May 13 00:41:05.232575 kernel: raid6: sse2x4 xor() 7258 MB/s
May 13 00:41:05.249570 kernel: raid6: sse2x2 gen() 16303 MB/s
May 13 00:41:05.266573 kernel: raid6: sse2x2 xor() 9729 MB/s
May 13 00:41:05.283573 kernel: raid6: sse2x1 gen() 11871 MB/s
May 13 00:41:05.300973 kernel: raid6: sse2x1 xor() 7714 MB/s
May 13 00:41:05.300991 kernel: raid6: using algorithm avx2x2 gen() 30464 MB/s
May 13 00:41:05.301000 kernel: raid6: .... xor() 18624 MB/s, rmw enabled
May 13 00:41:05.301697 kernel: raid6: using avx2x2 recovery algorithm
May 13 00:41:05.313572 kernel: xor: automatically using best checksumming function avx
May 13 00:41:05.403582 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 13 00:41:05.410875 systemd[1]: Finished dracut-pre-udev.service.
May 13 00:41:05.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:05.412000 audit: BPF prog-id=7 op=LOAD
May 13 00:41:05.412000 audit: BPF prog-id=8 op=LOAD
May 13 00:41:05.413049 systemd[1]: Starting systemd-udevd.service...
May 13 00:41:05.424915 systemd-udevd[400]: Using default interface naming scheme 'v252'.
May 13 00:41:05.428622 systemd[1]: Started systemd-udevd.service.
May 13 00:41:05.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:05.430356 systemd[1]: Starting dracut-pre-trigger.service...
May 13 00:41:05.438210 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
May 13 00:41:05.461218 systemd[1]: Finished dracut-pre-trigger.service.
May 13 00:41:05.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:05.463011 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:41:05.492746 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:41:05.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:05.523147 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:41:05.530081 kernel: cryptd: max_cpu_qlen set to 1000
May 13 00:41:05.530120 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:41:05.530145 kernel: GPT:9289727 != 19775487
May 13 00:41:05.530158 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:41:05.530170 kernel: GPT:9289727 != 19775487
May 13 00:41:05.530190 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:41:05.530202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:05.534577 kernel: libata version 3.00 loaded.
May 13 00:41:05.543194 kernel: ahci 0000:00:1f.2: version 3.0
May 13 00:41:05.567487 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 13 00:41:05.567504 kernel: AVX2 version of gcm_enc/dec engaged.
May 13 00:41:05.567512 kernel: AES CTR mode by8 optimization enabled
May 13 00:41:05.567521 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 13 00:41:05.567633 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 13 00:41:05.567718 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441)
May 13 00:41:05.567730 kernel: scsi host0: ahci
May 13 00:41:05.567819 kernel: scsi host1: ahci
May 13 00:41:05.567903 kernel: scsi host2: ahci
May 13 00:41:05.567989 kernel: scsi host3: ahci
May 13 00:41:05.568069 kernel: scsi host4: ahci
May 13 00:41:05.568161 kernel: scsi host5: ahci
May 13 00:41:05.568242 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 13 00:41:05.568252 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 13 00:41:05.568261 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 13 00:41:05.568270 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 13 00:41:05.568278 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 13 00:41:05.568287 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 13 00:41:05.558118 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 13 00:41:05.604484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 13 00:41:05.607683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 13 00:41:05.618329 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 13 00:41:05.623563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:41:05.625653 systemd[1]: Starting disk-uuid.service...
May 13 00:41:05.634848 disk-uuid[522]: Primary Header is updated.
May 13 00:41:05.634848 disk-uuid[522]: Secondary Entries is updated.
May 13 00:41:05.634848 disk-uuid[522]: Secondary Header is updated.
May 13 00:41:05.642576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:05.646575 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:05.879060 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 13 00:41:05.879140 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 13 00:41:05.879160 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 13 00:41:05.880590 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 13 00:41:05.881589 kernel: ata3.00: applying bridge limits
May 13 00:41:05.881601 kernel: ata3.00: configured for UDMA/100
May 13 00:41:05.882570 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 13 00:41:05.884582 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 13 00:41:05.885578 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 13 00:41:05.886591 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 13 00:41:05.936898 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 13 00:41:05.954343 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 13 00:41:05.954362 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 13 00:41:06.646580 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:41:06.647005 disk-uuid[523]: The operation has completed successfully.
May 13 00:41:06.650582 kernel: block device autoloading is deprecated. It will be removed in Linux 5.19
May 13 00:41:06.675193 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:41:06.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.675289 systemd[1]: Finished disk-uuid.service.
May 13 00:41:06.680126 systemd[1]: Starting verity-setup.service...
May 13 00:41:06.694595 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 13 00:41:06.711237 systemd[1]: Found device dev-mapper-usr.device.
May 13 00:41:06.714056 systemd[1]: Mounting sysusr-usr.mount...
May 13 00:41:06.715874 systemd[1]: Finished verity-setup.service.
May 13 00:41:06.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.770572 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 13 00:41:06.771009 systemd[1]: Mounted sysusr-usr.mount.
May 13 00:41:06.771869 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 13 00:41:06.772503 systemd[1]: Starting ignition-setup.service...
May 13 00:41:06.775131 systemd[1]: Starting parse-ip-for-networkd.service...
May 13 00:41:06.781132 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:41:06.781154 kernel: BTRFS info (device vda6): using free space tree
May 13 00:41:06.781163 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:41:06.788796 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:41:06.797536 systemd[1]: Finished ignition-setup.service.
May 13 00:41:06.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.798728 systemd[1]: Starting ignition-fetch-offline.service...
May 13 00:41:06.836400 ignition[641]: Ignition 2.14.0
May 13 00:41:06.836411 ignition[641]: Stage: fetch-offline
May 13 00:41:06.836799 systemd[1]: Finished parse-ip-for-networkd.service.
May 13 00:41:06.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.838000 audit: BPF prog-id=9 op=LOAD
May 13 00:41:06.836484 ignition[641]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:06.839992 systemd[1]: Starting systemd-networkd.service...
May 13 00:41:06.836493 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:06.836587 ignition[641]: parsed url from cmdline: ""
May 13 00:41:06.836591 ignition[641]: no config URL provided
May 13 00:41:06.836595 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:41:06.836601 ignition[641]: no config at "/usr/lib/ignition/user.ign"
May 13 00:41:06.836620 ignition[641]: op(1): [started] loading QEMU firmware config module
May 13 00:41:06.836624 ignition[641]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:41:06.839411 ignition[641]: op(1): [finished] loading QEMU firmware config module
May 13 00:41:06.839437 ignition[641]: QEMU firmware config was not found. Ignoring...
May 13 00:41:06.887638 ignition[641]: parsing config with SHA512: 7f4eb39e966b04408abb025a935f118f3a8420b7d64e0aac1e7f09e1e97f257bd48a434974fdf529a3dda793f9972a5844a59c17d6b5b46c7bedb04354f9798a
May 13 00:41:06.894282 unknown[641]: fetched base config from "system"
May 13 00:41:06.894294 unknown[641]: fetched user config from "qemu"
May 13 00:41:06.896486 ignition[641]: fetch-offline: fetch-offline passed
May 13 00:41:06.896535 ignition[641]: Ignition finished successfully
May 13 00:41:06.899193 systemd[1]: Finished ignition-fetch-offline.service.
May 13 00:41:06.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.904676 systemd-networkd[718]: lo: Link UP
May 13 00:41:06.904684 systemd-networkd[718]: lo: Gained carrier
May 13 00:41:06.905042 systemd-networkd[718]: Enumeration completed
May 13 00:41:06.905126 systemd[1]: Started systemd-networkd.service.
May 13 00:41:06.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.905228 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:41:06.906188 systemd-networkd[718]: eth0: Link UP
May 13 00:41:06.906190 systemd-networkd[718]: eth0: Gained carrier
May 13 00:41:06.906536 systemd[1]: Reached target network.target.
May 13 00:41:06.908776 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:41:06.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.909531 systemd[1]: Starting ignition-kargs.service...
May 13 00:41:06.910668 systemd[1]: Starting iscsiuio.service...
May 13 00:41:06.919210 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:06.919210 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 13 00:41:06.919210 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 13 00:41:06.919210 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored.
May 13 00:41:06.919210 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 13 00:41:06.919210 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 13 00:41:06.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.914441 systemd[1]: Started iscsiuio.service.
May 13 00:41:06.924883 ignition[720]: Ignition 2.14.0
May 13 00:41:06.915703 systemd[1]: Starting iscsid.service...
May 13 00:41:06.924889 ignition[720]: Stage: kargs
May 13 00:41:06.919408 systemd[1]: Started iscsid.service.
May 13 00:41:06.924974 ignition[720]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:06.927312 systemd[1]: Starting dracut-initqueue.service...
May 13 00:41:06.924983 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:06.928539 systemd[1]: Finished ignition-kargs.service.
May 13 00:41:06.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.926244 ignition[720]: kargs: kargs passed
May 13 00:41:06.930833 systemd[1]: Starting ignition-disks.service...
May 13 00:41:06.926280 ignition[720]: Ignition finished successfully
May 13 00:41:06.938667 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:41:06.937673 ignition[731]: Ignition 2.14.0
May 13 00:41:06.939759 systemd[1]: Finished ignition-disks.service.
May 13 00:41:06.937678 ignition[731]: Stage: disks
May 13 00:41:06.941633 systemd[1]: Reached target initrd-root-device.target.
May 13 00:41:06.937772 ignition[731]: no configs at "/usr/lib/ignition/base.d"
May 13 00:41:06.943068 systemd[1]: Reached target local-fs-pre.target.
May 13 00:41:06.937780 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:06.944518 systemd[1]: Reached target local-fs.target.
May 13 00:41:06.938957 ignition[731]: disks: disks passed
May 13 00:41:06.945045 systemd[1]: Reached target sysinit.target.
May 13 00:41:06.938989 ignition[731]: Ignition finished successfully
May 13 00:41:06.947111 systemd[1]: Reached target basic.target.
May 13 00:41:06.965144 systemd[1]: Finished dracut-initqueue.service.
May 13 00:41:06.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.965465 systemd[1]: Reached target remote-fs-pre.target.
May 13 00:41:06.967144 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:41:06.968503 systemd[1]: Reached target remote-fs.target.
May 13 00:41:06.971386 systemd[1]: Starting dracut-pre-mount.service...
May 13 00:41:06.979486 systemd[1]: Finished dracut-pre-mount.service.
May 13 00:41:06.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.980480 systemd[1]: Starting systemd-fsck-root.service...
May 13 00:41:06.990362 systemd-fsck[751]: ROOT: clean, 619/553520 files, 56023/553472 blocks
May 13 00:41:06.995791 systemd[1]: Finished systemd-fsck-root.service.
May 13 00:41:06.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:06.998168 systemd[1]: Mounting sysroot.mount...
May 13 00:41:07.006570 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 13 00:41:07.007021 systemd[1]: Mounted sysroot.mount.
May 13 00:41:07.007312 systemd[1]: Reached target initrd-root-fs.target.
May 13 00:41:07.009589 systemd[1]: Mounting sysroot-usr.mount...
May 13 00:41:07.011571 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 13 00:41:07.011598 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:41:07.011615 systemd[1]: Reached target ignition-diskful.target.
May 13 00:41:07.013835 systemd[1]: Mounted sysroot-usr.mount.
May 13 00:41:07.015829 systemd[1]: Starting initrd-setup-root.service...
May 13 00:41:07.020486 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:41:07.021873 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory
May 13 00:41:07.024150 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:41:07.026192 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:41:07.041312 systemd-resolved[200]: Detected conflict on linux IN A 10.0.0.51
May 13 00:41:07.041326 systemd-resolved[200]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
May 13 00:41:07.047758 systemd[1]: Finished initrd-setup-root.service.
May 13 00:41:07.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:07.050059 systemd[1]: Starting ignition-mount.service...
May 13 00:41:07.050973 systemd[1]: Starting sysroot-boot.service...
May 13 00:41:07.057214 bash[803]: umount: /sysroot/usr/share/oem: not mounted.
May 13 00:41:07.063648 ignition[804]: INFO : Ignition 2.14.0
May 13 00:41:07.063648 ignition[804]: INFO : Stage: mount
May 13 00:41:07.065308 ignition[804]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:07.065308 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:07.065308 ignition[804]: INFO : mount: mount passed
May 13 00:41:07.065308 ignition[804]: INFO : Ignition finished successfully
May 13 00:41:07.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:07.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:07.065301 systemd[1]: Finished ignition-mount.service.
May 13 00:41:07.069547 systemd[1]: Finished sysroot-boot.service.
May 13 00:41:07.721956 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 13 00:41:07.730441 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
May 13 00:41:07.730471 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 13 00:41:07.730482 kernel: BTRFS info (device vda6): using free space tree
May 13 00:41:07.732087 kernel: BTRFS info (device vda6): has skinny extents
May 13 00:41:07.735070 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 13 00:41:07.737453 systemd[1]: Starting ignition-files.service...
May 13 00:41:07.750724 ignition[832]: INFO : Ignition 2.14.0
May 13 00:41:07.750724 ignition[832]: INFO : Stage: files
May 13 00:41:07.753183 ignition[832]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:07.753183 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:07.753183 ignition[832]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:41:07.753183 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:41:07.753183 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:41:07.761910 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:41:07.761910 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:41:07.761910 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:41:07.761910 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:41:07.761910 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:41:07.761910 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:41:07.761910 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 13 00:41:07.754121 unknown[832]: wrote ssh authorized keys file for user: core
May 13 00:41:07.796900 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:41:07.921105 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 13 00:41:07.923312 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:41:07.923312 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 13 00:41:08.085172 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 13 00:41:08.190435 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:08.192495 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 13 00:41:08.496571 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 13 00:41:08.684695 systemd-networkd[718]: eth0: Gained IPv6LL
May 13 00:41:08.845722 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 13 00:41:08.845722 ignition[832]: INFO : files: op(d): [started] processing unit "containerd.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:41:08.849657 ignition[832]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:41:08.894280 ignition[832]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:41:08.896182 ignition[832]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:41:08.896182 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:41:08.896182 ignition[832]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:41:08.896182 ignition[832]: INFO : files: files passed
May 13 00:41:08.896182 ignition[832]: INFO : Ignition finished successfully
May 13 00:41:08.903637 systemd[1]: Finished ignition-files.service.
May 13 00:41:08.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.905347 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 13 00:41:08.905836 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 13 00:41:08.906368 systemd[1]: Starting ignition-quench.service...
May 13 00:41:08.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.909299 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:41:08.909373 systemd[1]: Finished ignition-quench.service.
May 13 00:41:08.918241 initrd-setup-root-after-ignition[858]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 13 00:41:08.921207 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:41:08.923171 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 13 00:41:08.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.923490 systemd[1]: Reached target ignition-complete.target.
May 13 00:41:08.927377 systemd[1]: Starting initrd-parse-etc.service...
May 13 00:41:08.941161 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:41:08.941236 systemd[1]: Finished initrd-parse-etc.service.
May 13 00:41:08.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.943357 systemd[1]: Reached target initrd-fs.target.
May 13 00:41:08.945008 systemd[1]: Reached target initrd.target.
May 13 00:41:08.946641 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 13 00:41:08.947442 systemd[1]: Starting dracut-pre-pivot.service...
May 13 00:41:08.961849 systemd[1]: Finished dracut-pre-pivot.service.
May 13 00:41:08.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.963753 systemd[1]: Starting initrd-cleanup.service...
May 13 00:41:08.973732 systemd[1]: Stopped target nss-lookup.target.
May 13 00:41:08.974056 systemd[1]: Stopped target remote-cryptsetup.target.
May 13 00:41:08.976063 systemd[1]: Stopped target timers.target.
May 13 00:41:08.977923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:41:08.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.978007 systemd[1]: Stopped dracut-pre-pivot.service.
May 13 00:41:08.979634 systemd[1]: Stopped target initrd.target.
May 13 00:41:08.981372 systemd[1]: Stopped target basic.target.
May 13 00:41:08.983756 systemd[1]: Stopped target ignition-complete.target.
May 13 00:41:08.984321 systemd[1]: Stopped target ignition-diskful.target.
May 13 00:41:08.986603 systemd[1]: Stopped target initrd-root-device.target.
May 13 00:41:08.988133 systemd[1]: Stopped target remote-fs.target.
May 13 00:41:08.989776 systemd[1]: Stopped target remote-fs-pre.target.
May 13 00:41:08.990161 systemd[1]: Stopped target sysinit.target.
May 13 00:41:08.993278 systemd[1]: Stopped target local-fs.target.
May 13 00:41:08.994987 systemd[1]: Stopped target local-fs-pre.target.
May 13 00:41:08.996805 systemd[1]: Stopped target swap.target.
May 13 00:41:08.998439 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:41:08.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:08.998517 systemd[1]: Stopped dracut-pre-mount.service.
May 13 00:41:08.999121 systemd[1]: Stopped target cryptsetup.target.
May 13 00:41:09.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.001582 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:41:09.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.001657 systemd[1]: Stopped dracut-initqueue.service.
May 13 00:41:09.003357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:41:09.003435 systemd[1]: Stopped ignition-fetch-offline.service.
May 13 00:41:09.005198 systemd[1]: Stopped target paths.target.
May 13 00:41:09.007200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:41:09.010692 systemd[1]: Stopped systemd-ask-password-console.path.
May 13 00:41:09.011123 systemd[1]: Stopped target slices.target.
May 13 00:41:09.013510 systemd[1]: Stopped target sockets.target.
May 13 00:41:09.015013 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:41:09.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.015081 systemd[1]: Closed iscsid.socket.
May 13 00:41:09.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.016436 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:41:09.016513 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 13 00:41:09.017902 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:41:09.017976 systemd[1]: Stopped ignition-files.service.
May 13 00:41:09.020081 systemd[1]: Stopping ignition-mount.service...
May 13 00:41:09.024019 systemd[1]: Stopping iscsiuio.service...
May 13 00:41:09.025497 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:41:09.026505 systemd[1]: Stopped kmod-static-nodes.service.
May 13 00:41:09.029766 ignition[873]: INFO : Ignition 2.14.0
May 13 00:41:09.029766 ignition[873]: INFO : Stage: umount
May 13 00:41:09.029766 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:41:09.029766 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:41:09.029766 ignition[873]: INFO : umount: umount passed
May 13 00:41:09.029766 ignition[873]: INFO : Ignition finished successfully
May 13 00:41:09.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.035510 systemd[1]: Stopping sysroot-boot.service...
May 13 00:41:09.036953 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:41:09.037057 systemd[1]: Stopped systemd-udev-trigger.service.
May 13 00:41:09.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.039882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:41:09.039969 systemd[1]: Stopped dracut-pre-trigger.service.
May 13 00:41:09.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.044347 systemd[1]: iscsiuio.service: Deactivated successfully.
May 13 00:41:09.044429 systemd[1]: Stopped iscsiuio.service.
May 13 00:41:09.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.047800 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:41:09.049200 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:41:09.050281 systemd[1]: Stopped ignition-mount.service.
May 13 00:41:09.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.052232 systemd[1]: Stopped target network.target.
May 13 00:41:09.053913 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:41:09.053942 systemd[1]: Closed iscsiuio.socket.
May 13 00:41:09.056219 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:41:09.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.056258 systemd[1]: Stopped ignition-disks.service.
May 13 00:41:09.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.058090 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:41:09.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.058120 systemd[1]: Stopped ignition-kargs.service.
May 13 00:41:09.059793 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:41:09.059823 systemd[1]: Stopped ignition-setup.service.
May 13 00:41:09.061698 systemd[1]: Stopping systemd-networkd.service...
May 13 00:41:09.065853 systemd[1]: Stopping systemd-resolved.service...
May 13 00:41:09.067786 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:41:09.068903 systemd[1]: Finished initrd-cleanup.service.
May 13 00:41:09.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.072590 systemd-networkd[718]: eth0: DHCPv6 lease lost
May 13 00:41:09.073882 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:41:09.075252 systemd[1]: Stopped systemd-networkd.service.
May 13 00:41:09.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.077788 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:41:09.077833 systemd[1]: Closed systemd-networkd.socket.
May 13 00:41:09.081547 systemd[1]: Stopping network-cleanup.service...
May 13 00:41:09.081000 audit: BPF prog-id=9 op=UNLOAD
May 13 00:41:09.083547 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:41:09.083618 systemd[1]: Stopped parse-ip-for-networkd.service.
May 13 00:41:09.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.085634 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:41:09.085667 systemd[1]: Stopped systemd-sysctl.service.
May 13 00:41:09.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.089429 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:41:09.090503 systemd[1]: Stopped systemd-modules-load.service.
May 13 00:41:09.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.092740 systemd[1]: Stopping systemd-udevd.service...
May 13 00:41:09.095750 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 00:41:09.097782 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:41:09.099017 systemd[1]: Stopped systemd-resolved.service.
May 13 00:41:09.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.102800 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:41:09.104166 systemd[1]: Stopped systemd-udevd.service.
May 13 00:41:09.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.106854 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:41:09.108065 systemd[1]: Stopped network-cleanup.service.
May 13 00:41:09.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.109000 audit: BPF prog-id=6 op=UNLOAD
May 13 00:41:09.110257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:41:09.110309 systemd[1]: Closed systemd-udevd-control.socket.
May 13 00:41:09.113597 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:41:09.113639 systemd[1]: Closed systemd-udevd-kernel.socket.
May 13 00:41:09.117008 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:41:09.118304 systemd[1]: Stopped dracut-pre-udev.service.
May 13 00:41:09.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.120215 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:41:09.121389 systemd[1]: Stopped dracut-cmdline.service.
May 13 00:41:09.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.123325 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:41:09.123376 systemd[1]: Stopped dracut-cmdline-ask.service.
May 13 00:41:09.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.127535 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 13 00:41:09.129621 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:41:09.130914 systemd[1]: Stopped systemd-vconsole-setup.service.
May 13 00:41:09.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.133249 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:41:09.134374 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 13 00:41:09.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.147834 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:41:09.147923 systemd[1]: Stopped sysroot-boot.service.
May 13 00:41:09.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.150474 systemd[1]: Reached target initrd-switch-root.target.
May 13 00:41:09.152256 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:41:09.153309 systemd[1]: Stopped initrd-setup-root.service.
May 13 00:41:09.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:09.155643 systemd[1]: Starting initrd-switch-root.service...
May 13 00:41:09.161225 systemd[1]: Switching root.
May 13 00:41:09.163000 audit: BPF prog-id=8 op=UNLOAD
May 13 00:41:09.163000 audit: BPF prog-id=7 op=UNLOAD
May 13 00:41:09.163000 audit: BPF prog-id=5 op=UNLOAD
May 13 00:41:09.163000 audit: BPF prog-id=4 op=UNLOAD
May 13 00:41:09.163000 audit: BPF prog-id=3 op=UNLOAD
May 13 00:41:09.182031 iscsid[724]: iscsid shutting down.
May 13 00:41:09.182774 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
May 13 00:41:09.182821 systemd-journald[198]: Journal stopped
May 13 00:41:12.076629 kernel: kauditd_printk_skb: 70 callbacks suppressed
May 13 00:41:12.076678 kernel: audit: type=1335 audit(1747096869.182:81): pid=198 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" nl-mcgrp=1 op=disconnect res=1
May 13 00:41:12.076692 kernel: SELinux: Class mctp_socket not defined in policy.
May 13 00:41:12.076703 kernel: SELinux: Class anon_inode not defined in policy.
May 13 00:41:12.076713 kernel: SELinux: the above unknown classes and permissions will be allowed
May 13 00:41:12.076722 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:41:12.076731 kernel: SELinux: policy capability open_perms=1
May 13 00:41:12.076746 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:41:12.076755 kernel: SELinux: policy capability always_check_network=0
May 13 00:41:12.076764 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:41:12.076773 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:41:12.076783 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:41:12.076795 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:41:12.076804 kernel: audit: type=1403 audit(1747096869.279:82): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:41:12.076818 systemd[1]: Successfully loaded SELinux policy in 44.091ms.
May 13 00:41:12.076835 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.164ms.
May 13 00:41:12.076847 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 13 00:41:12.076858 systemd[1]: Detected virtualization kvm.
May 13 00:41:12.076870 systemd[1]: Detected architecture x86-64.
May 13 00:41:12.076883 systemd[1]: Detected first boot.
May 13 00:41:12.076897 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:41:12.076908 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 13 00:41:12.076918 kernel: audit: type=1400 audit(1747096869.892:83): avc: denied { associate } for pid=924 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 13 00:41:12.076929 kernel: audit: type=1300 audit(1747096869.892:83): arch=c000003e syscall=188 success=yes exit=0 a0=c0001d7672 a1=c0000daae0 a2=c0000e2a00 a3=32 items=0 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:41:12.076939 kernel: audit: type=1327 audit(1747096869.892:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 13 00:41:12.076949 kernel: audit: type=1400 audit(1747096869.893:84): avc: denied { associate } for pid=924 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 13 00:41:12.076966 kernel: audit: type=1300 audit(1747096869.893:84): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001d7749 a2=1ed a3=0 items=2 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:41:12.076978 kernel: audit: type=1307 audit(1747096869.893:84): cwd="/"
May 13 00:41:12.076988 kernel: audit: type=1302 audit(1747096869.893:84): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.076998 kernel: audit: type=1302 audit(1747096869.893:84): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.077008 systemd[1]: Populated /etc with preset unit settings.
May 13 00:41:12.077018 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 13 00:41:12.077030 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 13 00:41:12.077041 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:41:12.077051 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:41:12.077061 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 13 00:41:12.077071 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 13 00:41:12.077081 systemd[1]: Created slice system-addon\x2drun.slice.
May 13 00:41:12.077091 systemd[1]: Created slice system-getty.slice.
May 13 00:41:12.077101 systemd[1]: Created slice system-modprobe.slice.
May 13 00:41:12.077113 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 13 00:41:12.077123 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 13 00:41:12.077133 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 13 00:41:12.077143 systemd[1]: Created slice user.slice.
May 13 00:41:12.077153 systemd[1]: Started systemd-ask-password-console.path.
May 13 00:41:12.077163 systemd[1]: Started systemd-ask-password-wall.path.
May 13 00:41:12.077174 systemd[1]: Set up automount boot.automount.
May 13 00:41:12.077184 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 13 00:41:12.077195 systemd[1]: Reached target integritysetup.target.
May 13 00:41:12.077205 systemd[1]: Reached target remote-cryptsetup.target.
May 13 00:41:12.077215 systemd[1]: Reached target remote-fs.target.
May 13 00:41:12.077225 systemd[1]: Reached target slices.target.
May 13 00:41:12.077234 systemd[1]: Reached target swap.target.
May 13 00:41:12.077244 systemd[1]: Reached target torcx.target.
May 13 00:41:12.077254 systemd[1]: Reached target veritysetup.target.
May 13 00:41:12.077264 systemd[1]: Listening on systemd-coredump.socket.
May 13 00:41:12.077274 systemd[1]: Listening on systemd-initctl.socket.
May 13 00:41:12.077284 systemd[1]: Listening on systemd-journald-audit.socket.
May 13 00:41:12.077296 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 13 00:41:12.077306 systemd[1]: Listening on systemd-journald.socket.
May 13 00:41:12.077316 systemd[1]: Listening on systemd-networkd.socket.
May 13 00:41:12.077326 systemd[1]: Listening on systemd-udevd-control.socket.
May 13 00:41:12.077336 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 13 00:41:12.077345 systemd[1]: Listening on systemd-userdbd.socket.
May 13 00:41:12.077355 systemd[1]: Mounting dev-hugepages.mount...
May 13 00:41:12.077365 systemd[1]: Mounting dev-mqueue.mount...
May 13 00:41:12.077375 systemd[1]: Mounting media.mount...
May 13 00:41:12.077386 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:41:12.077396 systemd[1]: Mounting sys-kernel-debug.mount...
May 13 00:41:12.077406 systemd[1]: Mounting sys-kernel-tracing.mount...
May 13 00:41:12.077415 systemd[1]: Mounting tmp.mount...
May 13 00:41:12.077425 systemd[1]: Starting flatcar-tmpfiles.service...
May 13 00:41:12.077435 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 13 00:41:12.077446 systemd[1]: Starting kmod-static-nodes.service...
May 13 00:41:12.077455 systemd[1]: Starting modprobe@configfs.service...
May 13 00:41:12.077465 systemd[1]: Starting modprobe@dm_mod.service...
May 13 00:41:12.077476 systemd[1]: Starting modprobe@drm.service...
May 13 00:41:12.077486 systemd[1]: Starting modprobe@efi_pstore.service...
May 13 00:41:12.077496 systemd[1]: Starting modprobe@fuse.service...
May 13 00:41:12.077506 systemd[1]: Starting modprobe@loop.service...
May 13 00:41:12.077516 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:41:12.077527 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 00:41:12.077537 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
May 13 00:41:12.077568 systemd[1]: Starting systemd-journald.service...
May 13 00:41:12.077580 kernel: fuse: init (API version 7.34)
May 13 00:41:12.077591 systemd[1]: Starting systemd-modules-load.service...
May 13 00:41:12.077601 kernel: loop: module loaded
May 13 00:41:12.077610 systemd[1]: Starting systemd-network-generator.service...
May 13 00:41:12.077620 systemd[1]: Starting systemd-remount-fs.service...
May 13 00:41:12.077630 systemd[1]: Starting systemd-udev-trigger.service...
May 13 00:41:12.077640 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 13 00:41:12.077650 systemd[1]: Mounted dev-hugepages.mount.
May 13 00:41:12.077660 systemd[1]: Mounted dev-mqueue.mount.
May 13 00:41:12.077670 systemd[1]: Mounted media.mount.
May 13 00:41:12.077682 systemd[1]: Mounted sys-kernel-debug.mount.
May 13 00:41:12.077691 systemd[1]: Mounted sys-kernel-tracing.mount.
May 13 00:41:12.077701 systemd[1]: Mounted tmp.mount.
May 13 00:41:12.077711 systemd[1]: Finished kmod-static-nodes.service.
May 13 00:41:12.077721 systemd[1]: Finished flatcar-tmpfiles.service.
May 13 00:41:12.077733 systemd-journald[1015]: Journal started
May 13 00:41:12.077769 systemd-journald[1015]: Runtime Journal (/run/log/journal/76caea93e1694e76ab19dac50ce64713) is 6.0M, max 48.5M, 42.5M free.
May 13 00:41:11.991000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 13 00:41:11.991000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
May 13 00:41:12.074000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 13 00:41:12.074000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff564d2780 a2=4000 a3=7fff564d281c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:41:12.074000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 13 00:41:12.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.080584 systemd[1]: Started systemd-journald.service.
May 13 00:41:12.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.081777 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:41:12.081925 systemd[1]: Finished modprobe@configfs.service.
May 13 00:41:12.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.082916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:41:12.083094 systemd[1]: Finished modprobe@dm_mod.service.
May 13 00:41:12.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.084093 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:41:12.084230 systemd[1]: Finished modprobe@drm.service.
May 13 00:41:12.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.085165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:41:12.085311 systemd[1]: Finished modprobe@efi_pstore.service.
May 13 00:41:12.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.086325 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:41:12.086461 systemd[1]: Finished modprobe@fuse.service.
May 13 00:41:12.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.087409 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:41:12.087570 systemd[1]: Finished modprobe@loop.service.
May 13 00:41:12.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.088669 systemd[1]: Finished systemd-modules-load.service.
May 13 00:41:12.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.089756 systemd[1]: Finished systemd-network-generator.service.
May 13 00:41:12.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.091317 systemd[1]: Finished systemd-remount-fs.service.
May 13 00:41:12.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.092604 systemd[1]: Reached target network-pre.target.
May 13 00:41:12.094438 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 13 00:41:12.096243 systemd[1]: Mounting sys-kernel-config.mount...
May 13 00:41:12.097023 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:41:12.098204 systemd[1]: Starting systemd-hwdb-update.service...
May 13 00:41:12.103257 systemd[1]: Starting systemd-journal-flush.service...
May 13 00:41:12.104458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:41:12.105244 systemd[1]: Starting systemd-random-seed.service...
May 13 00:41:12.106249 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 13 00:41:12.107089 systemd[1]: Starting systemd-sysctl.service...
May 13 00:41:12.108759 systemd[1]: Starting systemd-sysusers.service...
May 13 00:41:12.109902 systemd-journald[1015]: Time spent on flushing to /var/log/journal/76caea93e1694e76ab19dac50ce64713 is 15.330ms for 1042 entries.
May 13 00:41:12.109902 systemd-journald[1015]: System Journal (/var/log/journal/76caea93e1694e76ab19dac50ce64713) is 8.0M, max 195.6M, 187.6M free.
May 13 00:41:12.143906 systemd-journald[1015]: Received client request to flush runtime journal.
May 13 00:41:12.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.111845 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 13 00:41:12.113830 systemd[1]: Mounted sys-kernel-config.mount.
May 13 00:41:12.144901 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 00:41:12.118936 systemd[1]: Finished systemd-random-seed.service.
May 13 00:41:12.120245 systemd[1]: Reached target first-boot-complete.target.
May 13 00:41:12.122410 systemd[1]: Finished systemd-udev-trigger.service.
May 13 00:41:12.124351 systemd[1]: Starting systemd-udev-settle.service...
May 13 00:41:12.125594 systemd[1]: Finished systemd-sysusers.service.
May 13 00:41:12.126763 systemd[1]: Finished systemd-sysctl.service.
May 13 00:41:12.128470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 13 00:41:12.144778 systemd[1]: Finished systemd-journal-flush.service.
May 13 00:41:12.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.150220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 13 00:41:12.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.558723 systemd[1]: Finished systemd-hwdb-update.service.
May 13 00:41:12.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.560888 systemd[1]: Starting systemd-udevd.service...
May 13 00:41:12.576984 systemd-udevd[1069]: Using default interface naming scheme 'v252'.
May 13 00:41:12.588544 systemd[1]: Started systemd-udevd.service.
May 13 00:41:12.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.591905 systemd[1]: Starting systemd-networkd.service...
May 13 00:41:12.597982 systemd[1]: Starting systemd-userdbd.service...
May 13 00:41:12.632592 systemd[1]: Started systemd-userdbd.service.
May 13 00:41:12.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.643931 systemd[1]: Found device dev-ttyS0.device.
May 13 00:41:12.649448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 13 00:41:12.666582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 13 00:41:12.672909 kernel: ACPI: button: Power Button [PWRF]
May 13 00:41:12.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 13 00:41:12.682320 systemd-networkd[1081]: lo: Link UP
May 13 00:41:12.682326 systemd-networkd[1081]: lo: Gained carrier
May 13 00:41:12.682666 systemd-networkd[1081]: Enumeration completed
May 13 00:41:12.682769 systemd[1]: Started systemd-networkd.service.
May 13 00:41:12.684352 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:41:12.685295 systemd-networkd[1081]: eth0: Link UP
May 13 00:41:12.685369 systemd-networkd[1081]: eth0: Gained carrier
May 13 00:41:12.681000 audit[1075]: AVC avc: denied { confidentiality } for pid=1075 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 13 00:41:12.681000 audit[1075]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55bfc9e0c240 a1=338ac a2=7f67b6084bc5 a3=5 items=110 ppid=1069 pid=1075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 13 00:41:12.681000 audit: CWD cwd="/"
May 13 00:41:12.681000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=1 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=2 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=3 name=(null) inode=1980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=4 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=5 name=(null) inode=1981 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=6 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=7 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=8 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=9 name=(null) inode=1983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=10 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=11 name=(null) inode=1984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=12 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=13 name=(null) inode=1985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=14 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=15 name=(null) inode=1986 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=16 name=(null) inode=1982 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=17 name=(null) inode=1987 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=18 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=19 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=20 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=21 name=(null) inode=1989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=22 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=23 name=(null) inode=1990 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=24 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=25 name=(null) inode=1991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=26 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=27 name=(null) inode=1992 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=28 name=(null) inode=1988 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=29 name=(null) inode=1993 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=30 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=31 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=32 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=33 name=(null) inode=1995 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=34 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=35 name=(null) inode=1996 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=36 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=37 name=(null) inode=1997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=38 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=39 name=(null) inode=1998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=40 name=(null) inode=1994 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=41 name=(null) inode=1999 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=42 name=(null) inode=1979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=43 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=44 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=45 name=(null) inode=2001 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=46 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=47 name=(null) inode=2002 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=48 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=49 name=(null) inode=2003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=50 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=51 name=(null) inode=2004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=52 name=(null) inode=2000 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=53 name=(null) inode=2005 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=55 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=56 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=57 name=(null) inode=2007 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=58 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=59 name=(null) inode=2008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=60 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=61 name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 13 00:41:12.681000 audit: PATH item=62
name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=63 name=(null) inode=2010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=64 name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=65 name=(null) inode=2011 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=66 name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=67 name=(null) inode=2012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=68 name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=69 name=(null) inode=2013 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=70 name=(null) inode=2009 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=71 name=(null) inode=2014 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=72 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=73 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=74 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=75 name=(null) inode=2016 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=76 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=77 name=(null) inode=2017 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=78 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=79 name=(null) inode=2018 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=80 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=81 name=(null) inode=2019 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=82 name=(null) inode=2015 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=83 name=(null) inode=2020 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=84 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=85 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=86 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=87 name=(null) inode=2022 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=88 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=89 name=(null) inode=2023 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=90 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=91 name=(null) inode=2024 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=92 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=93 name=(null) inode=2025 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=94 name=(null) inode=2021 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=95 name=(null) inode=2026 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=96 name=(null) inode=2006 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=97 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=98 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=99 name=(null) inode=2028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=100 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=101 name=(null) inode=2029 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=102 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=103 name=(null) inode=2030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=104 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=105 name=(null) inode=2031 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=106 name=(null) inode=2027 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=107 name=(null) inode=2032 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PATH item=109 name=(null) inode=2033 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 13 00:41:12.681000 audit: PROCTITLE proctitle="(udev-worker)" May 13 00:41:12.699680 systemd-networkd[1081]: eth0: DHCPv4 address 10.0.0.51/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:41:12.709313 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 13 
00:41:12.709529 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 13 00:41:12.709668 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 13 00:41:12.723573 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 13 00:41:12.737566 kernel: mousedev: PS/2 mouse device common for all mice May 13 00:41:12.771982 kernel: kvm: Nested Virtualization enabled May 13 00:41:12.772051 kernel: SVM: kvm: Nested Paging enabled May 13 00:41:12.772066 kernel: SVM: Virtual VMLOAD VMSAVE supported May 13 00:41:12.772658 kernel: SVM: Virtual GIF supported May 13 00:41:12.787576 kernel: EDAC MC: Ver: 3.0.0 May 13 00:41:12.812893 systemd[1]: Finished systemd-udev-settle.service. May 13 00:41:12.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.814885 systemd[1]: Starting lvm2-activation-early.service... May 13 00:41:12.821923 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:12.845246 systemd[1]: Finished lvm2-activation-early.service. May 13 00:41:12.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.846242 systemd[1]: Reached target cryptsetup.target. May 13 00:41:12.847936 systemd[1]: Starting lvm2-activation.service... May 13 00:41:12.850686 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:41:12.875225 systemd[1]: Finished lvm2-activation.service. May 13 00:41:12.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.876226 systemd[1]: Reached target local-fs-pre.target. May 13 00:41:12.877141 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:41:12.877163 systemd[1]: Reached target local-fs.target. May 13 00:41:12.878008 systemd[1]: Reached target machines.target. May 13 00:41:12.879919 systemd[1]: Starting ldconfig.service... May 13 00:41:12.880966 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:12.881000 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:12.881740 systemd[1]: Starting systemd-boot-update.service... May 13 00:41:12.883640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 13 00:41:12.885904 systemd[1]: Starting systemd-machine-id-commit.service... May 13 00:41:12.888114 systemd[1]: Starting systemd-sysext.service... May 13 00:41:12.889429 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) May 13 00:41:12.890423 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 13 00:41:12.893076 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
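The "kvm: Nested Virtualization enabled" and "SVM: Nested Paging enabled" lines above come from the kvm_amd module on this AMD guest. A minimal Python sketch of the equivalent userspace check, assuming the usual sysfs module-parameter paths (kvm_amd here, kvm_intel on Intel hosts):

    from pathlib import Path

    def nested_virt_enabled() -> bool:
        # The 'nested' module parameter reads '1' or 'Y' when enabled.
        for mod in ("kvm_amd", "kvm_intel"):
            p = Path(f"/sys/module/{mod}/parameters/nested")
            if p.exists():
                return p.read_text().strip() in ("1", "Y")
        return False  # KVM module not loaded, or non-KVM platform

    print("nested virtualization:", nested_virt_enabled())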
May 13 00:41:12.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.899374 systemd[1]: Unmounting usr-share-oem.mount... May 13 00:41:12.902606 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 13 00:41:12.902807 systemd[1]: Unmounted usr-share-oem.mount. May 13 00:41:12.914590 kernel: loop0: detected capacity change from 0 to 210664 May 13 00:41:12.929719 systemd-fsck[1119]: fsck.fat 4.2 (2021-01-31) May 13 00:41:12.929719 systemd-fsck[1119]: /dev/vda1: 790 files, 120692/258078 clusters May 13 00:41:12.931925 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 13 00:41:12.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:12.934951 systemd[1]: Mounting boot.mount... May 13 00:41:12.946035 systemd[1]: Mounted boot.mount. May 13 00:41:13.164606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:41:13.165667 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:41:13.166457 systemd[1]: Finished systemd-machine-id-commit.service. May 13 00:41:13.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.168245 systemd[1]: Finished systemd-boot-update.service. May 13 00:41:13.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.183572 kernel: loop1: detected capacity change from 0 to 210664 May 13 00:41:13.187560 (sd-sysext)[1131]: Using extensions 'kubernetes'. May 13 00:41:13.187878 (sd-sysext)[1131]: Merged extensions into '/usr'. May 13 00:41:13.202881 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.204272 systemd[1]: Mounting usr-share-oem.mount... May 13 00:41:13.205197 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:13.206257 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:13.208298 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:13.210320 systemd[1]: Starting modprobe@loop.service... May 13 00:41:13.211142 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:13.211239 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.211329 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.213881 systemd[1]: Mounted usr-share-oem.mount. May 13 00:41:13.215503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:13.215786 systemd[1]: Finished modprobe@dm_mod.service. 
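The (sd-sysext) lines above show systemd-sysext merging the 'kubernetes' extension image into '/usr'. By the systemd-sysext convention, every merged image carries an identification file under /usr/lib/extension-release.d/; the sketch below lists them. The directory layout is the standard convention, assumed rather than confirmed by this log:

    from pathlib import Path

    release_dir = Path("/usr/lib/extension-release.d")
    for f in sorted(release_dir.glob("extension-release.*")):
        # os-release style KEY=VALUE pairs identifying the extension
        fields = dict(
            line.split("=", 1)
            for line in f.read_text().splitlines()
            if "=" in line and not line.startswith("#")
        )
        name = f.name.removeprefix("extension-release.")
        print(name,
              "ID:", fields.get("ID", "?").strip('"'),
              "SYSEXT_LEVEL:", fields.get("SYSEXT_LEVEL", "?").strip('"'))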
May 13 00:41:13.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.217284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:13.217412 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:13.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.218847 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:13.219000 systemd[1]: Finished modprobe@loop.service. May 13 00:41:13.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.220254 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:13.220352 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:13.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.221692 systemd[1]: Finished systemd-sysext.service. May 13 00:41:13.224113 systemd[1]: Starting ensure-sysext.service... May 13 00:41:13.225399 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:41:13.225770 systemd[1]: Starting systemd-tmpfiles-setup.service... May 13 00:41:13.230451 systemd[1]: Reloading. May 13 00:41:13.235068 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 13 00:41:13.236084 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:41:13.237382 systemd-tmpfiles[1145]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
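The systemd-tmpfiles "Duplicate line for path …, ignoring" warnings above mean more than one tmpfiles.d fragment claims the same path, and later claims are dropped. A rough sketch of the same scan, assuming the standard search directories and skipping specifier expansion and basename masking, so treat its output as approximate:

    from collections import defaultdict
    from pathlib import Path

    claims = defaultdict(list)
    for d in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
        for conf in sorted(Path(d).glob("*.conf")):
            for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                parts = line.split()
                # tmpfiles.d format: TYPE PATH [MODE USER GROUP AGE ARGUMENT]
                if len(parts) >= 2 and not parts[0].startswith("#"):
                    claims[parts[1]].append(f"{conf}:{lineno}")
    for path, sources in claims.items():
        if len(sources) > 1:
            print(f'Duplicate line for path "{path}":', ", ".join(sources))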
May 13 00:41:13.272306 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-05-13T00:41:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:13.272669 /usr/lib/systemd/system-generators/torcx-generator[1165]: time="2025-05-13T00:41:13Z" level=info msg="torcx already run" May 13 00:41:13.333439 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:13.333455 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:13.350164 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:13.405156 systemd[1]: Finished ldconfig.service. May 13 00:41:13.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.407022 systemd[1]: Finished systemd-tmpfiles-setup.service. May 13 00:41:13.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.409837 systemd[1]: Starting audit-rules.service... May 13 00:41:13.411700 systemd[1]: Starting clean-ca-certificates.service... May 13 00:41:13.413752 systemd[1]: Starting systemd-journal-catalog-update.service... May 13 00:41:13.416087 systemd[1]: Starting systemd-resolved.service... May 13 00:41:13.418106 systemd[1]: Starting systemd-timesyncd.service... May 13 00:41:13.419872 systemd[1]: Starting systemd-update-utmp.service... May 13 00:41:13.423153 systemd[1]: Finished clean-ca-certificates.service. May 13 00:41:13.423000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 13 00:41:13.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.430710 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.430953 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:13.434067 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:13.436438 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:13.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 13 00:41:13.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.438459 systemd[1]: Starting modprobe@loop.service... May 13 00:41:13.439366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:13.439533 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.439674 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:13.439756 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.440964 systemd[1]: Finished systemd-journal-catalog-update.service. May 13 00:41:13.442502 systemd[1]: Finished systemd-update-utmp.service. May 13 00:41:13.443798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:13.443947 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:13.445170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:13.445290 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:13.446719 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:13.447160 systemd[1]: Finished modprobe@loop.service. May 13 00:41:13.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 13 00:41:13.450245 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.450423 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
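Each modprobe@<module> start/stop pair above is a oneshot unit that loads its instance module and exits, which is why every SERVICE_START audit record is followed immediately by a SERVICE_STOP. A quick userspace check that the modules actually landed; note that a module built into the kernel will not appear in /proc/modules:

    def loaded_modules() -> set[str]:
        # First field of each /proc/modules line is the module name.
        with open("/proc/modules") as f:
            return {line.split()[0] for line in f}

    mods = loaded_modules()
    for wanted in ("dm_mod", "efi_pstore", "loop"):
        print(wanted, "loaded" if wanted in mods else "missing or built-in")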
May 13 00:41:13.451000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 13 00:41:13.451000 audit[1245]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdaf4fe730 a2=420 a3=0 items=0 ppid=1215 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 13 00:41:13.451000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 13 00:41:13.451677 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:13.452651 augenrules[1245]: No rules May 13 00:41:13.453914 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:13.455816 systemd[1]: Starting modprobe@loop.service... May 13 00:41:13.456763 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:13.456857 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.458285 systemd[1]: Starting systemd-update-done.service... May 13 00:41:13.459410 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:13.459527 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.460524 systemd[1]: Finished audit-rules.service. May 13 00:41:13.462050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:13.462173 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:13.463496 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:13.463652 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:13.464860 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:13.465109 systemd[1]: Finished modprobe@loop.service. May 13 00:41:13.466575 systemd[1]: Finished systemd-update-done.service. May 13 00:41:13.467809 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:13.467889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:13.470777 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.471021 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 13 00:41:13.472042 systemd[1]: Starting modprobe@dm_mod.service... May 13 00:41:13.474039 systemd[1]: Starting modprobe@drm.service... May 13 00:41:13.475920 systemd[1]: Starting modprobe@efi_pstore.service... May 13 00:41:13.482412 systemd[1]: Starting modprobe@loop.service... May 13 00:41:13.483280 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 13 00:41:13.483371 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.484425 systemd[1]: Starting systemd-networkd-wait-online.service... 
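The audit PROCTITLE value above is the invoking command line, hex-encoded with NUL bytes separating the arguments. Decoding the exact string from this log recovers the auditctl call that loaded the (empty, per "augenrules: No rules") rule file:

    def decode_proctitle(hexstr: str) -> list[str]:
        # argv entries are NUL-delimited in the raw proctitle buffer
        return bytes.fromhex(hexstr).decode().split("\x00")

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']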
May 13 00:41:13.485889 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:41:13.485986 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 00:41:13.486975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:41:13.487105 systemd[1]: Finished modprobe@dm_mod.service. May 13 00:41:13.488395 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:41:13.488514 systemd[1]: Finished modprobe@drm.service. May 13 00:41:13.489650 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:41:13.489766 systemd[1]: Finished modprobe@efi_pstore.service. May 13 00:41:13.490994 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:41:13.491123 systemd[1]: Finished modprobe@loop.service. May 13 00:41:13.492401 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:41:13.492483 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 13 00:41:13.493055 systemd-resolved[1221]: Positive Trust Anchors: May 13 00:41:13.493277 systemd-resolved[1221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:41:13.493363 systemd-resolved[1221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 13 00:41:13.493391 systemd[1]: Finished ensure-sysext.service. May 13 00:41:13.500579 systemd-resolved[1221]: Defaulting to hostname 'linux'. May 13 00:41:13.502078 systemd[1]: Started systemd-resolved.service. May 13 00:41:13.503034 systemd[1]: Reached target network.target. May 13 00:41:13.503834 systemd[1]: Reached target nss-lookup.target. May 13 00:41:13.514140 systemd[1]: Started systemd-timesyncd.service. May 13 00:41:13.515147 systemd[1]: Reached target sysinit.target. May 13 00:41:13.516013 systemd[1]: Started motdgen.path. May 13 00:41:13.516735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 13 00:41:13.517947 systemd[1]: Started systemd-tmpfiles-clean.timer. May 13 00:41:13.518801 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:41:13.518821 systemd[1]: Reached target paths.target. May 13 00:41:13.519572 systemd-timesyncd[1222]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:41:13.519573 systemd[1]: Reached target time-set.target. May 13 00:41:13.520367 systemd-timesyncd[1222]: Initial clock synchronization to Tue 2025-05-13 00:41:13.422111 UTC. May 13 00:41:13.520497 systemd[1]: Started logrotate.timer. May 13 00:41:13.521295 systemd[1]: Started mdadm.timer. May 13 00:41:13.521961 systemd[1]: Reached target timers.target. May 13 00:41:13.522972 systemd[1]: Listening on dbus.socket. May 13 00:41:13.524762 systemd[1]: Starting docker.socket... May 13 00:41:13.526255 systemd[1]: Listening on sshd.socket. 
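systemd-timesyncd reports contacting 10.0.0.1:123 and then stepping the clock. For illustration, a minimal SNTP client in the spirit of that exchange; the server address is taken from this log, and the packet is a bare-bones RFC 4330 mode-3 query rather than what timesyncd actually sends:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and the Unix epoch

    def sntp_time(server: str = "10.0.0.1", port: int = 123) -> float:
        packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(packet, (server, port))
            data, _ = s.recvfrom(48)
        # Transmit timestamp: integer seconds live in bytes 40..43.
        secs = struct.unpack("!I", data[40:44])[0]
        return secs - NTP_EPOCH_OFFSET

    print(time.ctime(sntp_time()))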
May 13 00:41:13.527083 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.527437 systemd[1]: Listening on docker.socket. May 13 00:41:13.528209 systemd[1]: Reached target sockets.target. May 13 00:41:13.529013 systemd[1]: Reached target basic.target. May 13 00:41:13.529847 systemd[1]: System is tainted: cgroupsv1 May 13 00:41:13.529884 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:13.529903 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 13 00:41:13.530786 systemd[1]: Starting containerd.service... May 13 00:41:13.532359 systemd[1]: Starting dbus.service... May 13 00:41:13.533810 systemd[1]: Starting enable-oem-cloudinit.service... May 13 00:41:13.535594 systemd[1]: Starting extend-filesystems.service... May 13 00:41:13.538646 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 13 00:41:13.539071 jq[1277]: false May 13 00:41:13.539689 systemd[1]: Starting motdgen.service... May 13 00:41:13.541810 systemd[1]: Starting prepare-helm.service... May 13 00:41:13.543460 systemd[1]: Starting ssh-key-proc-cmdline.service... May 13 00:41:13.545474 systemd[1]: Starting sshd-keygen.service... May 13 00:41:13.548149 systemd[1]: Starting systemd-logind.service... May 13 00:41:13.549105 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 13 00:41:13.549154 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:41:13.550197 systemd[1]: Starting update-engine.service... May 13 00:41:13.551955 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 13 00:41:13.555984 jq[1295]: true May 13 00:41:13.554716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:41:13.554935 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 13 00:41:13.555660 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:41:13.557788 systemd[1]: Finished ssh-key-proc-cmdline.service. May 13 00:41:13.564069 tar[1301]: linux-amd64/helm May 13 00:41:13.565296 jq[1304]: true May 13 00:41:13.566890 systemd[1]: Started dbus.service. May 13 00:41:13.566655 dbus-daemon[1276]: [system] SELinux support is enabled May 13 00:41:13.569594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:41:13.569619 systemd[1]: Reached target system-config.target. May 13 00:41:13.570720 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:41:13.570732 systemd[1]: Reached target user-config.target. May 13 00:41:13.576163 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:41:13.576379 systemd[1]: Finished motdgen.service. 
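"System is tainted: cgroupsv1" above flags that this boot still runs the legacy cgroup hierarchy (consistent with the containerd SystemdCgroup:false config later in the log). The usual userspace probe is to look for the unified hierarchy's cgroup.controllers file:

    import os

    def cgroup_version() -> int:
        # Present only when /sys/fs/cgroup is the cgroup2 unified mount.
        return 2 if os.path.exists("/sys/fs/cgroup/cgroup.controllers") else 1

    print("cgroup v" + str(cgroup_version()))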
May 13 00:41:13.579424 extend-filesystems[1278]: Found loop1 May 13 00:41:13.579424 extend-filesystems[1278]: Found sr0 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda May 13 00:41:13.579424 extend-filesystems[1278]: Found vda1 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda2 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda3 May 13 00:41:13.579424 extend-filesystems[1278]: Found usr May 13 00:41:13.579424 extend-filesystems[1278]: Found vda4 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda6 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda7 May 13 00:41:13.579424 extend-filesystems[1278]: Found vda9 May 13 00:41:13.579424 extend-filesystems[1278]: Checking size of /dev/vda9 May 13 00:41:13.599143 extend-filesystems[1278]: Resized partition /dev/vda9 May 13 00:41:13.602571 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) May 13 00:41:13.607581 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:41:13.607641 env[1305]: time="2025-05-13T00:41:13.607094029Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 13 00:41:13.611678 update_engine[1292]: I0513 00:41:13.611306 1292 main.cc:92] Flatcar Update Engine starting May 13 00:41:13.615365 update_engine[1292]: I0513 00:41:13.615283 1292 update_check_scheduler.cc:74] Next update check in 7m57s May 13 00:41:13.615583 systemd[1]: Started update-engine.service. May 13 00:41:13.618155 systemd[1]: Started locksmithd.service. May 13 00:41:13.629962 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button) May 13 00:41:13.629983 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 00:41:13.630447 systemd-logind[1290]: New seat seat0. May 13 00:41:13.632843 systemd[1]: Started systemd-logind.service. May 13 00:41:13.636578 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:41:13.637194 env[1305]: time="2025-05-13T00:41:13.637150137Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:41:13.669121 env[1305]: time="2025-05-13T00:41:13.668742317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.670450 env[1305]: time="2025-05-13T00:41:13.670412241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.670574 env[1305]: time="2025-05-13T00:41:13.670532917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.671009 env[1305]: time="2025-05-13T00:41:13.670984374Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.671107 env[1305]: time="2025-05-13T00:41:13.671084192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 13 00:41:13.671229 env[1305]: time="2025-05-13T00:41:13.671203966Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 13 00:41:13.671322 env[1305]: time="2025-05-13T00:41:13.671300918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.671485 env[1305]: time="2025-05-13T00:41:13.671463994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.671949 bash[1336]: Updated "/home/core/.ssh/authorized_keys" May 13 00:41:13.672040 env[1305]: time="2025-05-13T00:41:13.671830652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:41:13.672373 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 13 00:41:13.672577 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:41:13.672577 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:41:13.672577 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:41:13.682000 extend-filesystems[1278]: Resized filesystem in /dev/vda9 May 13 00:41:13.674249 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:41:13.674492 systemd[1]: Finished extend-filesystems.service. May 13 00:41:13.683981 env[1305]: time="2025-05-13T00:41:13.683634371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:41:13.683981 env[1305]: time="2025-05-13T00:41:13.683679095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:41:13.683981 env[1305]: time="2025-05-13T00:41:13.683779293Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 13 00:41:13.683981 env[1305]: time="2025-05-13T00:41:13.683797347Z" level=info msg="metadata content store policy set" policy=shared May 13 00:41:13.684649 locksmithd[1337]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689020959Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689058930Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689070872Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689099346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689111899Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689125184Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689136005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689148228Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689158968Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689170990Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689183163Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689193643Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689272431Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:41:13.689620 env[1305]: time="2025-05-13T00:41:13.689329598Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.689983565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690015635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690026916Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690077151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690088532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690098962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690111144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690121484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690132324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690142173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690151600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690162531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690257760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690270744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691188 env[1305]: time="2025-05-13T00:41:13.690280893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691517 env[1305]: time="2025-05-13T00:41:13.690291723Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:41:13.691517 env[1305]: time="2025-05-13T00:41:13.690303716Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 13 00:41:13.691517 env[1305]: time="2025-05-13T00:41:13.690312633Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:41:13.691517 env[1305]: time="2025-05-13T00:41:13.690333061Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 13 00:41:13.691517 env[1305]: time="2025-05-13T00:41:13.690364450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:41:13.691627 env[1305]: time="2025-05-13T00:41:13.690527776Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:41:13.691627 env[1305]: time="2025-05-13T00:41:13.690588580Z" level=info msg="Connect containerd service" May 13 00:41:13.691627 env[1305]: time="2025-05-13T00:41:13.690615290Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:41:13.691627 env[1305]: time="2025-05-13T00:41:13.691062349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:41:13.692405 env[1305]: time="2025-05-13T00:41:13.692378840Z" level=info msg="Start subscribing containerd event" May 13 00:41:13.693717 env[1305]: time="2025-05-13T00:41:13.693699588Z" level=info msg="Start recovering state" May 13 00:41:13.693854 env[1305]: time="2025-05-13T00:41:13.693835072Z" level=info msg="Start event monitor" May 13 00:41:13.693948 env[1305]: time="2025-05-13T00:41:13.693919360Z" level=info msg="Start snapshots syncer" May 13 00:41:13.694025 env[1305]: time="2025-05-13T00:41:13.694007476Z" level=info msg="Start cni network conf syncer for default" May 13 00:41:13.694099 env[1305]: time="2025-05-13T00:41:13.694082015Z" level=info msg="Start streaming server" May 13 00:41:13.694459 env[1305]: time="2025-05-13T00:41:13.694443584Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:41:13.694581 env[1305]: time="2025-05-13T00:41:13.694544904Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:41:13.694689 env[1305]: time="2025-05-13T00:41:13.694675169Z" level=info msg="containerd successfully booted in 0.088204s" May 13 00:41:13.694766 systemd[1]: Started containerd.service. May 13 00:41:13.974891 tar[1301]: linux-amd64/LICENSE May 13 00:41:13.974891 tar[1301]: linux-amd64/README.md May 13 00:41:13.979419 systemd[1]: Finished prepare-helm.service. May 13 00:41:14.435881 sshd_keygen[1303]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:41:14.453267 systemd[1]: Finished sshd-keygen.service. May 13 00:41:14.455604 systemd[1]: Starting issuegen.service... May 13 00:41:14.460110 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:41:14.460313 systemd[1]: Finished issuegen.service. May 13 00:41:14.462323 systemd[1]: Starting systemd-user-sessions.service... May 13 00:41:14.467167 systemd[1]: Finished systemd-user-sessions.service. May 13 00:41:14.469280 systemd[1]: Started getty@tty1.service. May 13 00:41:14.471212 systemd[1]: Started serial-getty@ttyS0.service. May 13 00:41:14.472238 systemd[1]: Reached target getty.target. May 13 00:41:14.636659 systemd-networkd[1081]: eth0: Gained IPv6LL May 13 00:41:14.638148 systemd[1]: Finished systemd-networkd-wait-online.service. May 13 00:41:14.639459 systemd[1]: Reached target network-online.target. May 13 00:41:14.641959 systemd[1]: Starting kubelet.service... May 13 00:41:15.206122 systemd[1]: Started kubelet.service. May 13 00:41:15.207386 systemd[1]: Reached target multi-user.target. May 13 00:41:15.209572 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 13 00:41:15.215457 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 13 00:41:15.215760 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 13 00:41:15.218045 systemd[1]: Startup finished in 5.161s (kernel) + 5.982s (userspace) = 11.144s. 
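[Annotation] The CRI plugin error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is normal on first boot: containerd's conf syncer ("Start cni network conf syncer for default") watches the NetworkPluginConfDir from the config dump above and stays degraded until a CNI add-on drops a config there. A minimal sketch of what would satisfy the loader, assuming a plain bridge network; the network name, bridge name, and 10.85.0.0/16 subnet are illustrative placeholders, not values from this host:

    #!/usr/bin/env python3
    # Sketch: write a minimal CNI bridge conflist into the directory the CRI
    # plugin above watches (/etc/cni/net.d). All values are illustrative; a
    # real cluster add-on (flannel, calico, ...) writes its own config here.
    import json, pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "example-bridge-net",        # hypothetical network name
        "plugins": [
            {
                "type": "bridge",            # plugin binary expected in /opt/cni/bin
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.85.0.0/16"}]],  # illustrative subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            }
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print(f"wrote {path}")

The plugin binaries themselves live in the NetworkPluginBinDir logged above (/opt/cni/bin); the conf syncer only needs the JSON to exist.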
May 13 00:41:15.631196 kubelet[1376]: E0513 00:41:15.631057 1376 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:15.633189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:15.633323 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:17.576188 systemd[1]: Created slice system-sshd.slice. May 13 00:41:17.577180 systemd[1]: Started sshd@0-10.0.0.51:22-10.0.0.1:47328.service. May 13 00:41:17.610664 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 47328 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.611996 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.619984 systemd-logind[1290]: New session 1 of user core. May 13 00:41:17.620673 systemd[1]: Created slice user-500.slice. May 13 00:41:17.621502 systemd[1]: Starting user-runtime-dir@500.service... May 13 00:41:17.630359 systemd[1]: Finished user-runtime-dir@500.service. May 13 00:41:17.631729 systemd[1]: Starting user@500.service... May 13 00:41:17.634105 (systemd)[1392]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.700249 systemd[1392]: Queued start job for default target default.target. May 13 00:41:17.700444 systemd[1392]: Reached target paths.target. May 13 00:41:17.700458 systemd[1392]: Reached target sockets.target. May 13 00:41:17.700469 systemd[1392]: Reached target timers.target. May 13 00:41:17.700479 systemd[1392]: Reached target basic.target. May 13 00:41:17.700512 systemd[1392]: Reached target default.target. May 13 00:41:17.700531 systemd[1392]: Startup finished in 60ms. May 13 00:41:17.700643 systemd[1]: Started user@500.service. May 13 00:41:17.701497 systemd[1]: Started session-1.scope. May 13 00:41:17.751086 systemd[1]: Started sshd@1-10.0.0.51:22-10.0.0.1:47344.service. May 13 00:41:17.782651 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 47344 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.783707 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.787056 systemd-logind[1290]: New session 2 of user core. May 13 00:41:17.787791 systemd[1]: Started session-2.scope. May 13 00:41:17.840421 sshd[1401]: pam_unix(sshd:session): session closed for user core May 13 00:41:17.842696 systemd[1]: Started sshd@2-10.0.0.51:22-10.0.0.1:47354.service. May 13 00:41:17.843096 systemd[1]: sshd@1-10.0.0.51:22-10.0.0.1:47344.service: Deactivated successfully. May 13 00:41:17.843875 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. May 13 00:41:17.843932 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:41:17.844759 systemd-logind[1290]: Removed session 2. May 13 00:41:17.873290 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 47354 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.874354 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.877635 systemd-logind[1290]: New session 3 of user core. May 13 00:41:17.878357 systemd[1]: Started session-3.scope. 
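[Annotation] The kubelet exit above is the expected first-boot crash loop: it is started with --config pointing at /var/lib/kubelet/config.yaml, which nothing has written yet (kubeadm-style bootstrap normally generates it), so the unit fails and systemd keeps rescheduling it until the file exists. A minimal sketch of such a file, assuming illustrative values rather than this node's eventual configuration:

    #!/usr/bin/env python3
    # Sketch: create the KubeletConfiguration file whose absence causes
    # "open /var/lib/kubelet/config.yaml: no such file or directory" above.
    # kubeadm normally writes this file; the values below are illustrative.
    import pathlib

    config = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    staticPodPath: /etc/kubernetes/manifests
    cgroupDriver: cgroupfs        # matches SystemdCgroup:false in the CRI config above
    authentication:
      anonymous:
        enabled: false
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)
    print(f"wrote {path}")

staticPodPath and the cgroupfs driver are the values this log itself implies (the kubelet later logs "Adding static pod path" for /etc/kubernetes/manifests, and containerd runs runc with SystemdCgroup:false).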
May 13 00:41:17.926490 sshd[1406]: pam_unix(sshd:session): session closed for user core May 13 00:41:17.928661 systemd[1]: Started sshd@3-10.0.0.51:22-10.0.0.1:47368.service. May 13 00:41:17.929051 systemd[1]: sshd@2-10.0.0.51:22-10.0.0.1:47354.service: Deactivated successfully. May 13 00:41:17.929890 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:41:17.929984 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. May 13 00:41:17.930699 systemd-logind[1290]: Removed session 3. May 13 00:41:17.961441 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 47368 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:17.962591 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:17.965780 systemd-logind[1290]: New session 4 of user core. May 13 00:41:17.966366 systemd[1]: Started session-4.scope. May 13 00:41:18.019828 sshd[1413]: pam_unix(sshd:session): session closed for user core May 13 00:41:18.022367 systemd[1]: Started sshd@4-10.0.0.51:22-10.0.0.1:47384.service. May 13 00:41:18.022870 systemd[1]: sshd@3-10.0.0.51:22-10.0.0.1:47368.service: Deactivated successfully. May 13 00:41:18.023693 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. May 13 00:41:18.023730 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:41:18.024776 systemd-logind[1290]: Removed session 4. May 13 00:41:18.053732 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 47384 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:41:18.054684 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:41:18.057750 systemd-logind[1290]: New session 5 of user core. May 13 00:41:18.058401 systemd[1]: Started session-5.scope. May 13 00:41:18.112713 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:41:18.112896 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 13 00:41:18.132311 systemd[1]: Starting docker.service... 
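[Annotation] Each accepted login above logs the client key as "SHA256:rB6W9b...": that string is the unpadded base64 of the SHA-256 digest of the raw public-key blob from the authorized_keys line. A small sketch to recompute it; the key line below is a dummy blob, not the key from this log:

    #!/usr/bin/env python3
    # Sketch: recompute an OpenSSH "SHA256:..." fingerprint like the ones
    # sshd logs above. The example key blob is a placeholder, not a real key.
    import base64, hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        # Format: "<type> <base64-blob> [comment]"; the digest covers the blob.
        blob_b64 = authorized_keys_line.split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    example = "ssh-ed25519 AAAAQQ== core"   # dummy blob; real lines carry the full key
    print(ssh_fingerprint(example))

The trailing "=" padding is stripped, which is why the logged fingerprints never end in padding characters.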
May 13 00:41:18.164366 env[1439]: time="2025-05-13T00:41:18.164313546Z" level=info msg="Starting up" May 13 00:41:18.166002 env[1439]: time="2025-05-13T00:41:18.165962810Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:18.166002 env[1439]: time="2025-05-13T00:41:18.165989681Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:18.166079 env[1439]: time="2025-05-13T00:41:18.166010941Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:18.166079 env[1439]: time="2025-05-13T00:41:18.166020919Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:18.169753 env[1439]: time="2025-05-13T00:41:18.169728437Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 13 00:41:18.169753 env[1439]: time="2025-05-13T00:41:18.169749160Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 13 00:41:18.169836 env[1439]: time="2025-05-13T00:41:18.169763993Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 13 00:41:18.169836 env[1439]: time="2025-05-13T00:41:18.169775861Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 13 00:41:18.640374 env[1439]: time="2025-05-13T00:41:18.640329105Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 13 00:41:18.640374 env[1439]: time="2025-05-13T00:41:18.640353399Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 13 00:41:18.640614 env[1439]: time="2025-05-13T00:41:18.640468443Z" level=info msg="Loading containers: start." May 13 00:41:18.744581 kernel: Initializing XFRM netlink socket May 13 00:41:18.772068 env[1439]: time="2025-05-13T00:41:18.772027408Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 13 00:41:18.816369 systemd-networkd[1081]: docker0: Link UP May 13 00:41:18.830825 env[1439]: time="2025-05-13T00:41:18.830780641Z" level=info msg="Loading containers: done." May 13 00:41:18.840874 env[1439]: time="2025-05-13T00:41:18.840829598Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:41:18.841002 env[1439]: time="2025-05-13T00:41:18.840977362Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 13 00:41:18.841070 env[1439]: time="2025-05-13T00:41:18.841050722Z" level=info msg="Daemon has completed initialization" May 13 00:41:18.856033 systemd[1]: Started docker.service. May 13 00:41:18.862483 env[1439]: time="2025-05-13T00:41:18.862426016Z" level=info msg="API listen on /run/docker.sock" May 13 00:41:19.812502 env[1305]: time="2025-05-13T00:41:19.812445584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 00:41:20.867230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597610853.mount: Deactivated successfully. 
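[Annotation] dockerd above finishes with "API listen on /run/docker.sock"; the Engine API is plain HTTP over that unix socket, so a version probe needs nothing beyond the standard library. A sketch, assuming the daemon is up and the caller has permission on the socket:

    #!/usr/bin/env python3
    # Sketch: query the Docker Engine API over the unix socket the daemon
    # announced above ("API listen on /run/docker.sock"). Stdlib only;
    # HTTP/1.0 makes the server close the connection so the recv loop ends.
    import json, socket

    def docker_get(path: str, sock_path: str = "/run/docker.sock") -> dict:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: docker\r\n\r\n".encode())
        data = b""
        while chunk := s.recv(4096):
            data += chunk
        s.close()
        headers, _, body = data.partition(b"\r\n\r\n")
        return json.loads(body)

    info = docker_get("/version")
    # Expect the values logged above: version 20.10.23, graphdriver overlay2.
    print(info.get("Version"), info.get("ApiVersion"))

This is the same transport the daemon's own startup used in reverse: its grpc client above dials containerd at unix:///var/run/docker/libcontainerd/docker-containerd.sock.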
May 13 00:41:22.454604 env[1305]: time="2025-05-13T00:41:22.454539953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.456754 env[1305]: time="2025-05-13T00:41:22.456720699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.458490 env[1305]: time="2025-05-13T00:41:22.458437547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.460105 env[1305]: time="2025-05-13T00:41:22.460074486Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:22.460793 env[1305]: time="2025-05-13T00:41:22.460762299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 00:41:22.469410 env[1305]: time="2025-05-13T00:41:22.469377014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 00:41:24.597389 env[1305]: time="2025-05-13T00:41:24.597322270Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.599147 env[1305]: time="2025-05-13T00:41:24.599114957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.600921 env[1305]: time="2025-05-13T00:41:24.600878372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.602443 env[1305]: time="2025-05-13T00:41:24.602418901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:24.603061 env[1305]: time="2025-05-13T00:41:24.603024148Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 00:41:24.612135 env[1305]: time="2025-05-13T00:41:24.612099220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 00:41:25.884219 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:41:25.884445 systemd[1]: Stopped kubelet.service. May 13 00:41:25.885705 systemd[1]: Starting kubelet.service... May 13 00:41:25.958873 systemd[1]: Started kubelet.service. 
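[Annotation] Each completed pull above is logged as PullImage "<name:tag>" returns image reference "<sha256:...>", i.e. the tag is resolved to a content-addressed image ID. A sketch that recovers the tag-to-digest mapping straight from this journal text (the sample lines and digests are copied verbatim from the entries above):

    #!/usr/bin/env python3
    # Sketch: recover the tag -> resolved image ID mapping from the
    # 'PullImage ... returns image reference ...' entries in this log.
    import re

    LOG = r'''
    msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
    msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
    '''

    # The journal escapes quotes inside msg="..." as \" so the pattern must
    # match a literal backslash before each quote.
    pattern = re.compile(
        r'PullImage \\"([^"\\]+)\\" returns image reference \\"(sha256:[0-9a-f]{64})\\"')
    for name, digest in pattern.findall(LOG):
        print(f"{name} -> {digest[:19]}...")

In production you would feed it the journal stream instead of an inline sample; the pattern is the only part specific to this log format.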
May 13 00:41:26.718671 env[1305]: time="2025-05-13T00:41:26.718616276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.723913 env[1305]: time="2025-05-13T00:41:26.723844385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.725646 env[1305]: time="2025-05-13T00:41:26.725610145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.727439 env[1305]: time="2025-05-13T00:41:26.727387959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:26.728067 env[1305]: time="2025-05-13T00:41:26.728042328Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 00:41:26.731750 kubelet[1596]: E0513 00:41:26.731721 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:26.734542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:26.734681 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:26.737247 env[1305]: time="2025-05-13T00:41:26.737204621Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:41:27.805524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047449605.mount: Deactivated successfully. 
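[Annotation] The kubelet "command failed" entries recur at 00:41:15.631, 00:41:26.731, and (a little further down) 00:41:37.096, and systemd labels each one "Scheduled restart job". The roughly 10-11s spacing is consistent with a Restart= policy with RestartSec around 10s plus process startup time; that is an inference from the spacing, not read from kubelet.service itself. The arithmetic:

    #!/usr/bin/env python3
    # Sketch: intervals between the kubelet "command failed" klog entries in
    # this log. The ~10-11s spacing suggests a ~10s RestartSec (inference).
    from datetime import datetime

    failures = ["00:41:15.631057", "00:41:26.731721", "00:41:37.096083"]
    times = [datetime.strptime(t, "%H:%M:%S.%f") for t in failures]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.3f}s between consecutive failures")
    # -> 11.101s and 10.364s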
May 13 00:41:29.237193 env[1305]: time="2025-05-13T00:41:29.237129120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.239178 env[1305]: time="2025-05-13T00:41:29.239130681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.240622 env[1305]: time="2025-05-13T00:41:29.240591482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.242205 env[1305]: time="2025-05-13T00:41:29.242168592Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:29.244670 env[1305]: time="2025-05-13T00:41:29.244623472Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 00:41:29.252592 env[1305]: time="2025-05-13T00:41:29.252562149Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 00:41:29.774367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674478853.mount: Deactivated successfully. May 13 00:41:30.623524 env[1305]: time="2025-05-13T00:41:30.623453795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:30.625406 env[1305]: time="2025-05-13T00:41:30.625349792Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:30.627167 env[1305]: time="2025-05-13T00:41:30.627120742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:30.628941 env[1305]: time="2025-05-13T00:41:30.628883198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:30.629494 env[1305]: time="2025-05-13T00:41:30.629459604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 00:41:30.637918 env[1305]: time="2025-05-13T00:41:30.637880211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 00:41:31.507353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374982161.mount: Deactivated successfully. 
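[Annotation] Pairing each PullImage start entry with its matching "returns image reference" entry gives per-image pull latency; for example coredns:v1.11.1 above starts at 00:41:29.252 and returns at 00:41:30.629, about 1.4s. A sketch of that pairing, with timestamps copied from the entries above:

    #!/usr/bin/env python3
    # Sketch: per-image pull latency from the PullImage start/return
    # timestamps copied out of the entries above.
    from datetime import datetime

    pulls = {
        "registry.k8s.io/kube-proxy:v1.30.12":     ("00:41:26.737204", "00:41:29.244623"),
        "registry.k8s.io/coredns/coredns:v1.11.1": ("00:41:29.252562", "00:41:30.629459"),
    }

    def ts(t: str) -> datetime:
        return datetime.strptime(t, "%H:%M:%S.%f")

    for image, (start, end) in pulls.items():
        print(f"{image}: {(ts(end) - ts(start)).total_seconds():.2f}s")
    # -> ~2.51s for kube-proxy, ~1.38s for coredns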
May 13 00:41:31.512542 env[1305]: time="2025-05-13T00:41:31.512490918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.514439 env[1305]: time="2025-05-13T00:41:31.514397267Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.515768 env[1305]: time="2025-05-13T00:41:31.515737644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.516929 env[1305]: time="2025-05-13T00:41:31.516890703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:31.517399 env[1305]: time="2025-05-13T00:41:31.517364725Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 00:41:31.527467 env[1305]: time="2025-05-13T00:41:31.527116684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 00:41:32.027287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302679563.mount: Deactivated successfully. May 13 00:41:35.319117 env[1305]: time="2025-05-13T00:41:35.319049245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.321286 env[1305]: time="2025-05-13T00:41:35.321241891Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.323547 env[1305]: time="2025-05-13T00:41:35.323493755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.325479 env[1305]: time="2025-05-13T00:41:35.325431808Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:35.326343 env[1305]: time="2025-05-13T00:41:35.326301278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 00:41:36.985456 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 00:41:36.985656 systemd[1]: Stopped kubelet.service. May 13 00:41:36.986861 systemd[1]: Starting kubelet.service... May 13 00:41:37.059332 systemd[1]: Started kubelet.service. 
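[Annotation] The transient units above like var-lib-containerd-tmpmounts-containerd\x2dmount2302679563.mount use systemd unit-name escaping: "/" in the mount path becomes "-", and a literal "-" in a path component is encoded as \x2d. A decoding sketch (the split must happen before the \xNN escapes are expanded, since a decoded "-" is part of a component, not a separator):

    #!/usr/bin/env python3
    # Sketch: decode systemd-escaped mount unit names like the
    # var-lib-containerd-tmpmounts-... units above back into paths.
    import re

    def unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".mount")
        segments = name.split("-")           # '-' separates path components
        decode = lambda s: re.sub(r"\\x([0-9a-f]{2})",
                                  lambda m: chr(int(m.group(1), 16)), s)
        return "/" + "/".join(decode(s) for s in segments)

    print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2302679563.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount2302679563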
May 13 00:41:37.096142 kubelet[1713]: E0513 00:41:37.096083 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:41:37.098085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:41:37.098224 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:41:37.394318 systemd[1]: Stopped kubelet.service. May 13 00:41:37.396609 systemd[1]: Starting kubelet.service... May 13 00:41:37.410943 systemd[1]: Reloading. May 13 00:41:37.475732 /usr/lib/systemd/system-generators/torcx-generator[1750]: time="2025-05-13T00:41:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:37.475764 /usr/lib/systemd/system-generators/torcx-generator[1750]: time="2025-05-13T00:41:37Z" level=info msg="torcx already run" May 13 00:41:37.897016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:37.897031 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:37.914789 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:37.990269 systemd[1]: Started kubelet.service. May 13 00:41:37.992091 systemd[1]: Stopping kubelet.service... May 13 00:41:37.992454 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:41:37.992759 systemd[1]: Stopped kubelet.service. May 13 00:41:37.994195 systemd[1]: Starting kubelet.service... May 13 00:41:38.068541 systemd[1]: Started kubelet.service. May 13 00:41:38.112260 kubelet[1813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:38.112260 kubelet[1813]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:41:38.112260 kubelet[1813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
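[Annotation] The restarted kubelet above warns that --container-runtime-endpoint and --volume-plugin-dir are deprecated and "should be set via the config file specified by the Kubelet's --config flag". With kubelet v1.30 the corresponding KubeletConfiguration fields are containerRuntimeEndpoint and volumePluginDir (field names from upstream kubelet documentation, not from this log). A sketch that appends them to the config file from the earlier annotation; the endpoint is the containerd socket this log shows serving, and the flexvolume path is the one the kubelet probes a little further on:

    #!/usr/bin/env python3
    # Sketch: move the deprecated flags the kubelet warns about above into
    # the file given by --config. Assumes /var/lib/kubelet/config.yaml
    # already exists (kubeadm, or the earlier sketch in this document).
    import pathlib

    extra = """\
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.write_text(path.read_text() + extra)
    print(f"appended runtime settings to {path}")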
May 13 00:41:38.112748 kubelet[1813]: I0513 00:41:38.112291 1813 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:41:38.495698 kubelet[1813]: I0513 00:41:38.495653 1813 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:41:38.495698 kubelet[1813]: I0513 00:41:38.495684 1813 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:41:38.495897 kubelet[1813]: I0513 00:41:38.495883 1813 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:41:38.514277 kubelet[1813]: I0513 00:41:38.514230 1813 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:41:38.515730 kubelet[1813]: E0513 00:41:38.515711 1813 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.51:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.522990 kubelet[1813]: I0513 00:41:38.522948 1813 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:41:38.524247 kubelet[1813]: I0513 00:41:38.524201 1813 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:41:38.524405 kubelet[1813]: I0513 00:41:38.524236 1813 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:41:38.524789 kubelet[1813]: I0513 00:41:38.524765 1813 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:41:38.524789 kubelet[1813]: I0513 00:41:38.524781 1813 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:41:38.524900 kubelet[1813]: I0513 00:41:38.524879 1813 state_mem.go:36] "Initialized new in-memory state store" May 13 
00:41:38.525456 kubelet[1813]: I0513 00:41:38.525433 1813 kubelet.go:400] "Attempting to sync node with API server" May 13 00:41:38.525456 kubelet[1813]: I0513 00:41:38.525451 1813 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:41:38.525540 kubelet[1813]: I0513 00:41:38.525470 1813 kubelet.go:312] "Adding apiserver pod source" May 13 00:41:38.525540 kubelet[1813]: I0513 00:41:38.525484 1813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:41:38.525893 kubelet[1813]: W0513 00:41:38.525848 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.525929 kubelet[1813]: E0513 00:41:38.525896 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.526097 kubelet[1813]: W0513 00:41:38.526067 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.526097 kubelet[1813]: E0513 00:41:38.526097 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.530811 kubelet[1813]: I0513 00:41:38.530766 1813 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:41:38.533939 kubelet[1813]: I0513 00:41:38.533898 1813 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:41:38.534026 kubelet[1813]: W0513 00:41:38.534003 1813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:41:38.535009 kubelet[1813]: I0513 00:41:38.534992 1813 server.go:1264] "Started kubelet" May 13 00:41:38.536092 kubelet[1813]: I0513 00:41:38.535991 1813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:41:38.536476 kubelet[1813]: I0513 00:41:38.536451 1813 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:41:38.536649 kubelet[1813]: I0513 00:41:38.536508 1813 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:41:38.538632 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
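[Annotation] Every reflector above fails with "dial tcp 10.0.0.51:6443: connect: connection refused" because of the static-pod bootstrap ordering: this kubelet is itself about to create the kube-apiserver pod it is trying to watch, so nothing listens on 6443 until that container starts. The clients simply retry; a sketch of the same readiness gate, with host and port taken from this log:

    #!/usr/bin/env python3
    # Sketch: poll the API server endpoint the reflectors above cannot reach
    # (10.0.0.51:6443, from this log) until something accepts connections.
    import socket, time

    def wait_for_apiserver(host="10.0.0.51", port=6443, timeout=60.0) -> bool:
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True      # TCP accept: the static pod is up
            except OSError:
                time.sleep(1)        # mirrors the client-go retry loop
        return False

    print("apiserver reachable:", wait_for_apiserver())

Once the apiserver container started below, the same node registration that fails here ("Unable to register node with API server") eventually succeeds.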
May 13 00:41:38.538965 kubelet[1813]: I0513 00:41:38.538950 1813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:41:38.539586 kubelet[1813]: E0513 00:41:38.539458 1813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.51:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.51:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eef5f695d1975 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:41:38.534971765 +0000 UTC m=+0.463089074,LastTimestamp:2025-05-13 00:41:38.534971765 +0000 UTC m=+0.463089074,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:41:38.540159 kubelet[1813]: E0513 00:41:38.540131 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:38.540215 kubelet[1813]: I0513 00:41:38.540200 1813 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:41:38.540272 kubelet[1813]: I0513 00:41:38.538992 1813 server.go:455] "Adding debug handlers to kubelet server" May 13 00:41:38.540737 kubelet[1813]: W0513 00:41:38.540688 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.540805 kubelet[1813]: E0513 00:41:38.540744 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.540836 kubelet[1813]: I0513 00:41:38.540823 1813 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:41:38.540893 kubelet[1813]: I0513 00:41:38.540876 1813 reconciler.go:26] "Reconciler: start to sync state" May 13 00:41:38.541425 kubelet[1813]: E0513 00:41:38.541290 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="200ms" May 13 00:41:38.542100 kubelet[1813]: E0513 00:41:38.542071 1813 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:41:38.543288 kubelet[1813]: I0513 00:41:38.543269 1813 factory.go:221] Registration of the containerd container factory successfully May 13 00:41:38.543288 kubelet[1813]: I0513 00:41:38.543286 1813 factory.go:221] Registration of the systemd container factory successfully May 13 00:41:38.543383 kubelet[1813]: I0513 00:41:38.543354 1813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:41:38.552368 kubelet[1813]: I0513 00:41:38.552322 1813 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 13 00:41:38.553390 kubelet[1813]: I0513 00:41:38.553367 1813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:41:38.553444 kubelet[1813]: I0513 00:41:38.553411 1813 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:41:38.553444 kubelet[1813]: I0513 00:41:38.553436 1813 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:41:38.553573 kubelet[1813]: E0513 00:41:38.553480 1813 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:41:38.554038 kubelet[1813]: W0513 00:41:38.553988 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.554088 kubelet[1813]: E0513 00:41:38.554044 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:38.560480 kubelet[1813]: I0513 00:41:38.560463 1813 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:41:38.560614 kubelet[1813]: I0513 00:41:38.560600 1813 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:41:38.560686 kubelet[1813]: I0513 00:41:38.560674 1813 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:38.641453 kubelet[1813]: I0513 00:41:38.641406 1813 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:38.641784 kubelet[1813]: E0513 00:41:38.641752 1813 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 13 00:41:38.653918 kubelet[1813]: E0513 00:41:38.653880 1813 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 13 00:41:38.742600 kubelet[1813]: E0513 00:41:38.742567 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="400ms" May 13 00:41:38.751851 kubelet[1813]: I0513 00:41:38.751757 1813 policy_none.go:49] "None policy: Start" May 13 00:41:38.752651 kubelet[1813]: I0513 00:41:38.752616 1813 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:41:38.752708 kubelet[1813]: I0513 00:41:38.752657 1813 state_mem.go:35] "Initializing new in-memory state store" May 13 00:41:38.759957 kubelet[1813]: I0513 00:41:38.759931 1813 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:41:38.760115 kubelet[1813]: I0513 00:41:38.760067 1813 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:41:38.760241 kubelet[1813]: I0513 00:41:38.760166 1813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:41:38.761631 kubelet[1813]: E0513 00:41:38.761614 1813 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node 
\"localhost\" not found" May 13 00:41:38.842716 kubelet[1813]: I0513 00:41:38.842679 1813 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:38.843039 kubelet[1813]: E0513 00:41:38.843017 1813 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 13 00:41:38.854211 kubelet[1813]: I0513 00:41:38.854162 1813 topology_manager.go:215] "Topology Admit Handler" podUID="a39445177ac64c0a1afd30ce3ed5ffd8" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:41:38.854835 kubelet[1813]: I0513 00:41:38.854794 1813 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:41:38.855537 kubelet[1813]: I0513 00:41:38.855516 1813 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:41:38.942119 kubelet[1813]: I0513 00:41:38.942054 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:38.942119 kubelet[1813]: I0513 00:41:38.942100 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:38.942119 kubelet[1813]: I0513 00:41:38.942125 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:38.942332 kubelet[1813]: I0513 00:41:38.942145 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:38.942332 kubelet[1813]: I0513 00:41:38.942169 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:38.942332 kubelet[1813]: I0513 00:41:38.942198 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:38.942332 kubelet[1813]: I0513 00:41:38.942217 1813 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:38.942332 kubelet[1813]: I0513 00:41:38.942237 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:38.942445 kubelet[1813]: I0513 00:41:38.942257 1813 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:41:39.143504 kubelet[1813]: E0513 00:41:39.143392 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="800ms" May 13 00:41:39.158631 kubelet[1813]: E0513 00:41:39.158597 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.159160 env[1305]: time="2025-05-13T00:41:39.159126489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a39445177ac64c0a1afd30ce3ed5ffd8,Namespace:kube-system,Attempt:0,}" May 13 00:41:39.159443 kubelet[1813]: E0513 00:41:39.159249 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.159720 kubelet[1813]: E0513 00:41:39.159698 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.159765 env[1305]: time="2025-05-13T00:41:39.159716726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:41:39.160020 env[1305]: time="2025-05-13T00:41:39.159996863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:41:39.244478 kubelet[1813]: I0513 00:41:39.244445 1813 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:39.244850 kubelet[1813]: E0513 00:41:39.244810 1813 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 13 00:41:39.336351 kubelet[1813]: W0513 00:41:39.336259 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection 
refused May 13 00:41:39.336351 kubelet[1813]: E0513 00:41:39.336330 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.51:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:39.833818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472910804.mount: Deactivated successfully. May 13 00:41:39.838066 env[1305]: time="2025-05-13T00:41:39.838012463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.840897 env[1305]: time="2025-05-13T00:41:39.840863963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.842707 env[1305]: time="2025-05-13T00:41:39.842677806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.843647 env[1305]: time="2025-05-13T00:41:39.843603340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.845703 env[1305]: time="2025-05-13T00:41:39.845647538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.850508 env[1305]: time="2025-05-13T00:41:39.850476698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.851753 env[1305]: time="2025-05-13T00:41:39.851729106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.853069 env[1305]: time="2025-05-13T00:41:39.853036595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.854224 env[1305]: time="2025-05-13T00:41:39.854201997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.856348 env[1305]: time="2025-05-13T00:41:39.856316457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.857387 env[1305]: time="2025-05-13T00:41:39.857363127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.858949 env[1305]: time="2025-05-13T00:41:39.858914178Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:41:39.878798 env[1305]: time="2025-05-13T00:41:39.878736547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:39.878911 env[1305]: time="2025-05-13T00:41:39.878804976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:39.878911 env[1305]: time="2025-05-13T00:41:39.878838084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:39.879126 env[1305]: time="2025-05-13T00:41:39.879084271Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dcf19c565b01e48b1645e4ff51b3c4ca40857496d0013f013d37dfadcbf9e81 pid=1859 runtime=io.containerd.runc.v2 May 13 00:41:39.882664 env[1305]: time="2025-05-13T00:41:39.880011708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:39.882664 env[1305]: time="2025-05-13T00:41:39.880080738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:39.882664 env[1305]: time="2025-05-13T00:41:39.880109680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:39.882757 kubelet[1813]: W0513 00:41:39.881907 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:39.882757 kubelet[1813]: E0513 00:41:39.881972 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.51:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:39.882938 env[1305]: time="2025-05-13T00:41:39.882875245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/810041672e1dfa12340c4f414aa3cc0069bb555063e9700b709327a980210dc5 pid=1866 runtime=io.containerd.runc.v2 May 13 00:41:39.893669 env[1305]: time="2025-05-13T00:41:39.891864446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:41:39.893669 env[1305]: time="2025-05-13T00:41:39.891913107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:41:39.893669 env[1305]: time="2025-05-13T00:41:39.891930682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:41:39.893669 env[1305]: time="2025-05-13T00:41:39.892058448Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d0961ce5640c260a4a90272c4e9215352027a8d9588ed900dcbb26e9ca53e7b pid=1907 runtime=io.containerd.runc.v2 May 13 00:41:39.923930 env[1305]: time="2025-05-13T00:41:39.923861760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"8dcf19c565b01e48b1645e4ff51b3c4ca40857496d0013f013d37dfadcbf9e81\"" May 13 00:41:39.925114 kubelet[1813]: E0513 00:41:39.925075 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.928505 env[1305]: time="2025-05-13T00:41:39.928376925Z" level=info msg="CreateContainer within sandbox \"8dcf19c565b01e48b1645e4ff51b3c4ca40857496d0013f013d37dfadcbf9e81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:41:39.935103 env[1305]: time="2025-05-13T00:41:39.935056883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"810041672e1dfa12340c4f414aa3cc0069bb555063e9700b709327a980210dc5\"" May 13 00:41:39.935687 kubelet[1813]: E0513 00:41:39.935666 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.938509 env[1305]: time="2025-05-13T00:41:39.938468937Z" level=info msg="CreateContainer within sandbox \"810041672e1dfa12340c4f414aa3cc0069bb555063e9700b709327a980210dc5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:41:39.942748 env[1305]: time="2025-05-13T00:41:39.942695683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a39445177ac64c0a1afd30ce3ed5ffd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d0961ce5640c260a4a90272c4e9215352027a8d9588ed900dcbb26e9ca53e7b\"" May 13 00:41:39.943493 kubelet[1813]: E0513 00:41:39.943458 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:39.943929 kubelet[1813]: E0513 00:41:39.943891 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.51:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.51:6443: connect: connection refused" interval="1.6s" May 13 00:41:39.944476 env[1305]: time="2025-05-13T00:41:39.944436681Z" level=info msg="CreateContainer within sandbox \"8dcf19c565b01e48b1645e4ff51b3c4ca40857496d0013f013d37dfadcbf9e81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9eec2b5803185871211cc0fadc65fb975e3574c8a8f13afa33ab47d6402b75f1\"" May 13 00:41:39.944958 env[1305]: time="2025-05-13T00:41:39.944918761Z" level=info msg="StartContainer for \"9eec2b5803185871211cc0fadc65fb975e3574c8a8f13afa33ab47d6402b75f1\"" May 13 00:41:39.945870 env[1305]: time="2025-05-13T00:41:39.945833609Z" level=info msg="CreateContainer within sandbox 
\"4d0961ce5640c260a4a90272c4e9215352027a8d9588ed900dcbb26e9ca53e7b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:41:39.957672 kubelet[1813]: W0513 00:41:39.957616 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:39.957672 kubelet[1813]: E0513 00:41:39.957672 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.51:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:39.963454 env[1305]: time="2025-05-13T00:41:39.963383869Z" level=info msg="CreateContainer within sandbox \"810041672e1dfa12340c4f414aa3cc0069bb555063e9700b709327a980210dc5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38ab8b45926653c55a036affcd0720efd78017c7f827e32b82c4f727fd8e7d6a\"" May 13 00:41:39.963868 env[1305]: time="2025-05-13T00:41:39.963831308Z" level=info msg="StartContainer for \"38ab8b45926653c55a036affcd0720efd78017c7f827e32b82c4f727fd8e7d6a\"" May 13 00:41:39.978217 env[1305]: time="2025-05-13T00:41:39.978170115Z" level=info msg="CreateContainer within sandbox \"4d0961ce5640c260a4a90272c4e9215352027a8d9588ed900dcbb26e9ca53e7b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3770376c324f90ca4455879c8bd1e7c4535576233e3ba93610e8e6e4a8d9982\"" May 13 00:41:39.978954 env[1305]: time="2025-05-13T00:41:39.978932803Z" level=info msg="StartContainer for \"a3770376c324f90ca4455879c8bd1e7c4535576233e3ba93610e8e6e4a8d9982\"" May 13 00:41:39.998498 env[1305]: time="2025-05-13T00:41:39.998432652Z" level=info msg="StartContainer for \"9eec2b5803185871211cc0fadc65fb975e3574c8a8f13afa33ab47d6402b75f1\" returns successfully" May 13 00:41:40.027744 env[1305]: time="2025-05-13T00:41:40.027685132Z" level=info msg="StartContainer for \"38ab8b45926653c55a036affcd0720efd78017c7f827e32b82c4f727fd8e7d6a\" returns successfully" May 13 00:41:40.037502 env[1305]: time="2025-05-13T00:41:40.037450237Z" level=info msg="StartContainer for \"a3770376c324f90ca4455879c8bd1e7c4535576233e3ba93610e8e6e4a8d9982\" returns successfully" May 13 00:41:40.047518 kubelet[1813]: I0513 00:41:40.047492 1813 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:40.047867 kubelet[1813]: E0513 00:41:40.047845 1813 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.51:6443/api/v1/nodes\": dial tcp 10.0.0.51:6443: connect: connection refused" node="localhost" May 13 00:41:40.068569 kubelet[1813]: W0513 00:41:40.068490 1813 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:40.068642 kubelet[1813]: E0513 00:41:40.068583 1813 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.51:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.51:6443: connect: connection refused May 13 00:41:40.558628 kubelet[1813]: E0513 00:41:40.558576 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:40.560957 kubelet[1813]: E0513 00:41:40.560927 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:40.562306 kubelet[1813]: E0513 00:41:40.562283 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:41.563637 kubelet[1813]: E0513 00:41:41.563589 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:41.615070 kubelet[1813]: E0513 00:41:41.615035 1813 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:41:41.649669 kubelet[1813]: I0513 00:41:41.649630 1813 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:41.757205 kubelet[1813]: I0513 00:41:41.757147 1813 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:41:41.763800 kubelet[1813]: E0513 00:41:41.763765 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:41.864428 kubelet[1813]: E0513 00:41:41.864286 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:41.964661 kubelet[1813]: E0513 00:41:41.964612 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.065591 kubelet[1813]: E0513 00:41:42.065549 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.166213 kubelet[1813]: E0513 00:41:42.166098 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.266635 kubelet[1813]: E0513 00:41:42.266595 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.367091 kubelet[1813]: E0513 00:41:42.367042 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.467631 kubelet[1813]: E0513 00:41:42.467500 1813 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:41:42.528563 kubelet[1813]: I0513 00:41:42.528517 1813 apiserver.go:52] "Watching apiserver" May 13 00:41:42.541116 kubelet[1813]: I0513 00:41:42.541092 1813 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:41:42.570379 kubelet[1813]: E0513 00:41:42.570342 1813 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:41:42.570745 kubelet[1813]: E0513 00:41:42.570726 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:43.177884 kubelet[1813]: E0513 00:41:43.177824 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:43.508468 systemd[1]: Reloading. May 13 00:41:43.565838 kubelet[1813]: E0513 00:41:43.565814 1813 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:43.565857 /usr/lib/systemd/system-generators/torcx-generator[2117]: time="2025-05-13T00:41:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 13 00:41:43.565874 /usr/lib/systemd/system-generators/torcx-generator[2117]: time="2025-05-13T00:41:43Z" level=info msg="torcx already run" May 13 00:41:43.637465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 13 00:41:43.637485 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 13 00:41:43.654023 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:41:43.730145 systemd[1]: Stopping kubelet.service... May 13 00:41:43.748985 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:41:43.749259 systemd[1]: Stopped kubelet.service. May 13 00:41:43.750886 systemd[1]: Starting kubelet.service... May 13 00:41:43.835309 systemd[1]: Started kubelet.service. May 13 00:41:43.875976 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:43.876381 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:41:43.876381 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:41:43.876580 kubelet[2173]: I0513 00:41:43.876458 2173 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:41:43.881119 kubelet[2173]: I0513 00:41:43.881082 2173 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:41:43.881119 kubelet[2173]: I0513 00:41:43.881112 2173 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:41:43.881356 kubelet[2173]: I0513 00:41:43.881334 2173 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:41:43.882639 kubelet[2173]: I0513 00:41:43.882611 2173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
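[Annotation] The kubelet restart above is worth unpacking: the new kubelet (pid 2173) comes up with its runtime flags flagged as deprecated in favor of the `--config` file, and with client certificate rotation enabled, loading the bootstrapped pair from /var/lib/kubelet/pki/kubelet-client-current.pem. A minimal sketch (not kubelet code; it assumes only the PEM path shown in the log) that inspects that rotated pair and reports when it expires:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Path taken verbatim from the log line above.
	const certPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	data, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatalf("read %s: %v", certPath, err)
	}
	// The file concatenates the client certificate and private key as PEM
	// blocks; decode each block and report only the certificates.
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatalf("parse certificate: %v", err)
		}
		fmt.Printf("subject=%v notAfter=%v (expires in %v)\n",
			cert.Subject, cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
	}
}
```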
May 13 00:41:43.883770 kubelet[2173]: I0513 00:41:43.883730 2173 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:41:43.890835 kubelet[2173]: I0513 00:41:43.890812 2173 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:41:43.891219 kubelet[2173]: I0513 00:41:43.891174 2173 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:41:43.891391 kubelet[2173]: I0513 00:41:43.891207 2173 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:41:43.891391 kubelet[2173]: I0513 00:41:43.891381 2173 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:41:43.891391 kubelet[2173]: I0513 00:41:43.891391 2173 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:41:43.891624 kubelet[2173]: I0513 00:41:43.891615 2173 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:43.891722 kubelet[2173]: I0513 00:41:43.891688 2173 kubelet.go:400] "Attempting to sync node with API server" May 13 00:41:43.891722 kubelet[2173]: I0513 00:41:43.891701 2173 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:41:43.891722 kubelet[2173]: I0513 00:41:43.891718 2173 kubelet.go:312] "Adding apiserver pod source" May 13 00:41:43.891982 kubelet[2173]: I0513 00:41:43.891759 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:41:43.895386 kubelet[2173]: I0513 00:41:43.893008 2173 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 13 00:41:43.895386 kubelet[2173]: I0513 00:41:43.893210 2173 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:41:43.895386 kubelet[2173]: I0513 00:41:43.893656 2173 server.go:1264] "Started kubelet" May 13 00:41:43.895386 kubelet[2173]: I0513 
00:41:43.893963 2173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:41:43.895690 kubelet[2173]: I0513 00:41:43.895607 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:41:43.896060 kubelet[2173]: I0513 00:41:43.896042 2173 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:41:43.897262 kubelet[2173]: I0513 00:41:43.897230 2173 server.go:455] "Adding debug handlers to kubelet server" May 13 00:41:43.899063 kubelet[2173]: I0513 00:41:43.899046 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:41:43.902825 kubelet[2173]: I0513 00:41:43.901008 2173 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:41:43.902825 kubelet[2173]: I0513 00:41:43.901411 2173 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:41:43.902825 kubelet[2173]: I0513 00:41:43.901651 2173 reconciler.go:26] "Reconciler: start to sync state" May 13 00:41:43.913217 kubelet[2173]: I0513 00:41:43.913170 2173 factory.go:221] Registration of the systemd container factory successfully May 13 00:41:43.913380 kubelet[2173]: I0513 00:41:43.913341 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:41:43.913748 kubelet[2173]: E0513 00:41:43.913683 2173 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:41:43.915293 kubelet[2173]: I0513 00:41:43.915251 2173 factory.go:221] Registration of the containerd container factory successfully May 13 00:41:43.928185 kubelet[2173]: I0513 00:41:43.928148 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:41:43.928920 kubelet[2173]: I0513 00:41:43.928900 2173 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:41:43.929037 kubelet[2173]: I0513 00:41:43.929025 2173 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:41:43.929163 kubelet[2173]: I0513 00:41:43.929125 2173 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:41:43.929330 kubelet[2173]: E0513 00:41:43.929280 2173 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:41:43.959351 kubelet[2173]: I0513 00:41:43.959326 2173 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:41:43.959564 kubelet[2173]: I0513 00:41:43.959513 2173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:41:43.959676 kubelet[2173]: I0513 00:41:43.959651 2173 state_mem.go:36] "Initialized new in-memory state store" May 13 00:41:43.959834 kubelet[2173]: I0513 00:41:43.959795 2173 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:41:43.959834 kubelet[2173]: I0513 00:41:43.959804 2173 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:41:43.959834 kubelet[2173]: I0513 00:41:43.959823 2173 policy_none.go:49] "None policy: Start" May 13 00:41:43.960380 kubelet[2173]: I0513 00:41:43.960360 2173 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:41:43.960445 kubelet[2173]: I0513 00:41:43.960385 2173 state_mem.go:35] "Initializing new in-memory state store" May 13 00:41:43.960595 kubelet[2173]: I0513 00:41:43.960580 2173 state_mem.go:75] "Updated machine memory state" May 13 00:41:43.961651 kubelet[2173]: I0513 00:41:43.961618 2173 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:41:43.961825 kubelet[2173]: I0513 00:41:43.961784 2173 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:41:43.961913 kubelet[2173]: I0513 00:41:43.961892 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:41:44.008314 kubelet[2173]: I0513 00:41:44.008290 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:41:44.013933 kubelet[2173]: I0513 00:41:44.013904 2173 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 13 00:41:44.014030 kubelet[2173]: I0513 00:41:44.013997 2173 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:41:44.029812 kubelet[2173]: I0513 00:41:44.029761 2173 topology_manager.go:215] "Topology Admit Handler" podUID="a39445177ac64c0a1afd30ce3ed5ffd8" podNamespace="kube-system" podName="kube-apiserver-localhost" May 13 00:41:44.029932 kubelet[2173]: I0513 00:41:44.029852 2173 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 13 00:41:44.029932 kubelet[2173]: I0513 00:41:44.029910 2173 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:41:44.035971 kubelet[2173]: E0513 00:41:44.035881 2173 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 00:41:44.202712 kubelet[2173]: I0513 00:41:44.202607 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:41:44.202712 kubelet[2173]: I0513 00:41:44.202674 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:44.202712 kubelet[2173]: I0513 00:41:44.202700 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.202712 kubelet[2173]: I0513 00:41:44.202714 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.202884 kubelet[2173]: I0513 00:41:44.202728 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.202884 kubelet[2173]: I0513 00:41:44.202742 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.202884 kubelet[2173]: I0513 00:41:44.202754 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:44.202884 kubelet[2173]: I0513 00:41:44.202768 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a39445177ac64c0a1afd30ce3ed5ffd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a39445177ac64c0a1afd30ce3ed5ffd8\") " pod="kube-system/kube-apiserver-localhost" May 13 00:41:44.202884 kubelet[2173]: I0513 00:41:44.202782 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.337604 kubelet[2173]: E0513 00:41:44.337577 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.337604 kubelet[2173]: E0513 00:41:44.337610 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.337764 kubelet[2173]: E0513 00:41:44.337668 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.507015 sudo[2208]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:41:44.507194 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 13 00:41:44.893060 kubelet[2173]: I0513 00:41:44.892947 2173 apiserver.go:52] "Watching apiserver" May 13 00:41:44.902226 kubelet[2173]: I0513 00:41:44.902191 2173 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:41:44.937633 kubelet[2173]: E0513 00:41:44.937616 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.960494 kubelet[2173]: E0513 00:41:44.960459 2173 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 13 00:41:44.961100 kubelet[2173]: E0513 00:41:44.961073 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.962527 kubelet[2173]: E0513 00:41:44.962461 2173 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:41:44.962812 kubelet[2173]: E0513 00:41:44.962788 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:44.997906 kubelet[2173]: I0513 00:41:44.997838 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.997818325 podStartE2EDuration="997.818325ms" podCreationTimestamp="2025-05-13 00:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:44.98824168 +0000 UTC m=+1.148201322" watchObservedRunningTime="2025-05-13 00:41:44.997818325 +0000 UTC m=+1.157777967" May 13 00:41:45.009146 kubelet[2173]: I0513 00:41:45.009096 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.00907602 podStartE2EDuration="1.00907602s" podCreationTimestamp="2025-05-13 00:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:44.998307736 +0000 UTC m=+1.158267379" watchObservedRunningTime="2025-05-13 00:41:45.00907602 +0000 UTC m=+1.169035662" May 13 00:41:45.017625 kubelet[2173]: I0513 00:41:45.017589 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.017576301 
podStartE2EDuration="2.017576301s" podCreationTimestamp="2025-05-13 00:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:41:45.010057615 +0000 UTC m=+1.170017247" watchObservedRunningTime="2025-05-13 00:41:45.017576301 +0000 UTC m=+1.177535943" May 13 00:41:45.031165 sudo[2208]: pam_unix(sudo:session): session closed for user root May 13 00:41:45.940227 kubelet[2173]: E0513 00:41:45.940177 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:45.940763 kubelet[2173]: E0513 00:41:45.940350 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:46.424385 sudo[1426]: pam_unix(sudo:session): session closed for user root May 13 00:41:46.426180 sshd[1421]: pam_unix(sshd:session): session closed for user core May 13 00:41:46.429408 systemd[1]: sshd@4-10.0.0.51:22-10.0.0.1:47384.service: Deactivated successfully. May 13 00:41:46.431222 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:41:46.431724 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit. May 13 00:41:46.432628 systemd-logind[1290]: Removed session 5. May 13 00:41:46.942528 kubelet[2173]: E0513 00:41:46.941600 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:47.945734 kubelet[2173]: E0513 00:41:47.944111 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:48.046591 kubelet[2173]: E0513 00:41:48.045329 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:54.388372 kubelet[2173]: E0513 00:41:54.388333 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:56.989725 kubelet[2173]: E0513 00:41:56.987591 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:57.965452 kubelet[2173]: E0513 00:41:57.965412 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:58.051148 kubelet[2173]: E0513 00:41:58.051107 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:41:59.074666 update_engine[1292]: I0513 00:41:59.074616 1292 update_attempter.cc:509] Updating boot flags... 
May 13 00:41:59.160538 kubelet[2173]: I0513 00:41:59.160352 2173 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:41:59.162814 env[1305]: time="2025-05-13T00:41:59.161174300Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:41:59.163068 kubelet[2173]: I0513 00:41:59.162306 2173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:41:59.898356 kubelet[2173]: I0513 00:41:59.898311 2173 topology_manager.go:215] "Topology Admit Handler" podUID="1e9d8d84-334d-4911-95a2-5804f41c69b5" podNamespace="kube-system" podName="kube-proxy-sdkhk" May 13 00:41:59.898580 kubelet[2173]: I0513 00:41:59.898478 2173 topology_manager.go:215] "Topology Admit Handler" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" podNamespace="kube-system" podName="cilium-spctd" May 13 00:42:00.030030 kubelet[2173]: I0513 00:42:00.029955 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-bpf-maps\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030030 kubelet[2173]: I0513 00:42:00.030019 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-lib-modules\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030056 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjbc9\" (UniqueName: \"kubernetes.io/projected/1e9d8d84-334d-4911-95a2-5804f41c69b5-kube-api-access-kjbc9\") pod \"kube-proxy-sdkhk\" (UID: \"1e9d8d84-334d-4911-95a2-5804f41c69b5\") " pod="kube-system/kube-proxy-sdkhk" May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030088 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e9d8d84-334d-4911-95a2-5804f41c69b5-kube-proxy\") pod \"kube-proxy-sdkhk\" (UID: \"1e9d8d84-334d-4911-95a2-5804f41c69b5\") " pod="kube-system/kube-proxy-sdkhk" May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030116 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-xtables-lock\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030147 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e9d8d84-334d-4911-95a2-5804f41c69b5-xtables-lock\") pod \"kube-proxy-sdkhk\" (UID: \"1e9d8d84-334d-4911-95a2-5804f41c69b5\") " pod="kube-system/kube-proxy-sdkhk" May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030173 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-etc-cni-netd\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" 
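[Annotation] Every volume the reconciler attaches for cilium-spctd in the entries above and below is a hostPath, configmap, secret, or projected mount, which is typical for a CNI agent pod. A reconstruction of a few of them as client-go types; the volume names come from the log, but the host paths are assumptions (conventional Cilium locations), since the log records names and UIDs only:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a hostPath volume like the ones the reconciler logs.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	// Names from the log; paths are assumed, not logged.
	volumes := []corev1.Volume{
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),
		hostPathVolume("cilium-run", "/var/run/cilium"),
		hostPathVolume("lib-modules", "/lib/modules"),
		hostPathVolume("xtables-lock", "/run/xtables.lock"),
	}
	for _, v := range volumes {
		fmt.Printf("%-13s -> %s\n", v.Name, v.HostPath.Path)
	}
}
```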
May 13 00:42:00.030262 kubelet[2173]: I0513 00:42:00.030200 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-hubble-tls\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030229 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-cgroup\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030274 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-config-path\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030300 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-run\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030326 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-kernel\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030348 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cni-path\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030444 kubelet[2173]: I0513 00:42:00.030369 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-hostproc\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030651 kubelet[2173]: I0513 00:42:00.030391 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhhp\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-kube-api-access-9zhhp\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030651 kubelet[2173]: I0513 00:42:00.030413 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-net\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.030651 kubelet[2173]: I0513 00:42:00.030438 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/1e9d8d84-334d-4911-95a2-5804f41c69b5-lib-modules\") pod \"kube-proxy-sdkhk\" (UID: \"1e9d8d84-334d-4911-95a2-5804f41c69b5\") " pod="kube-system/kube-proxy-sdkhk" May 13 00:42:00.030651 kubelet[2173]: I0513 00:42:00.030474 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5651aa4-975e-4270-bea2-7b9e9efccea2-clustermesh-secrets\") pod \"cilium-spctd\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " pod="kube-system/cilium-spctd" May 13 00:42:00.154308 kubelet[2173]: I0513 00:42:00.153207 2173 topology_manager.go:215] "Topology Admit Handler" podUID="aa0db833-e8cd-47ef-bdf0-450daea3948c" podNamespace="kube-system" podName="cilium-operator-599987898-gd579" May 13 00:42:00.203036 kubelet[2173]: E0513 00:42:00.202963 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.203628 env[1305]: time="2025-05-13T00:42:00.203579349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdkhk,Uid:1e9d8d84-334d-4911-95a2-5804f41c69b5,Namespace:kube-system,Attempt:0,}" May 13 00:42:00.207932 kubelet[2173]: E0513 00:42:00.207894 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.208396 env[1305]: time="2025-05-13T00:42:00.208353932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spctd,Uid:a5651aa4-975e-4270-bea2-7b9e9efccea2,Namespace:kube-system,Attempt:0,}" May 13 00:42:00.332025 kubelet[2173]: I0513 00:42:00.331977 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa0db833-e8cd-47ef-bdf0-450daea3948c-cilium-config-path\") pod \"cilium-operator-599987898-gd579\" (UID: \"aa0db833-e8cd-47ef-bdf0-450daea3948c\") " pod="kube-system/cilium-operator-599987898-gd579" May 13 00:42:00.332025 kubelet[2173]: I0513 00:42:00.332025 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsjxt\" (UniqueName: \"kubernetes.io/projected/aa0db833-e8cd-47ef-bdf0-450daea3948c-kube-api-access-gsjxt\") pod \"cilium-operator-599987898-gd579\" (UID: \"aa0db833-e8cd-47ef-bdf0-450daea3948c\") " pod="kube-system/cilium-operator-599987898-gd579" May 13 00:42:00.461777 kubelet[2173]: E0513 00:42:00.461659 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.462595 env[1305]: time="2025-05-13T00:42:00.462539032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gd579,Uid:aa0db833-e8cd-47ef-bdf0-450daea3948c,Namespace:kube-system,Attempt:0,}" May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.761942647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.761990316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.761999322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.762147007Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d pid=2318 runtime=io.containerd.runc.v2 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.755753390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.755813963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.755827137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.755993407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a pid=2291 runtime=io.containerd.runc.v2 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.757024337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.757063731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.757073078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:42:00.764607 env[1305]: time="2025-05-13T00:42:00.757172713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72afe41bb70a257aa0b875d733168cd38144cdc594b93dfd4a11c224a2fc29a0 pid=2290 runtime=io.containerd.runc.v2 May 13 00:42:00.800260 env[1305]: time="2025-05-13T00:42:00.800182494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdkhk,Uid:1e9d8d84-334d-4911-95a2-5804f41c69b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"72afe41bb70a257aa0b875d733168cd38144cdc594b93dfd4a11c224a2fc29a0\"" May 13 00:42:00.801107 kubelet[2173]: E0513 00:42:00.801077 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.804898 env[1305]: time="2025-05-13T00:42:00.804436066Z" level=info msg="CreateContainer within sandbox \"72afe41bb70a257aa0b875d733168cd38144cdc594b93dfd4a11c224a2fc29a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:42:00.808598 env[1305]: time="2025-05-13T00:42:00.808540491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spctd,Uid:a5651aa4-975e-4270-bea2-7b9e9efccea2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\"" May 13 00:42:00.809403 kubelet[2173]: E0513 00:42:00.809379 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.811595 env[1305]: time="2025-05-13T00:42:00.811123551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:42:00.823618 env[1305]: time="2025-05-13T00:42:00.823539755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-gd579,Uid:aa0db833-e8cd-47ef-bdf0-450daea3948c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\"" May 13 00:42:00.824196 kubelet[2173]: E0513 00:42:00.824175 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:00.835835 env[1305]: time="2025-05-13T00:42:00.835776644Z" level=info msg="CreateContainer within sandbox \"72afe41bb70a257aa0b875d733168cd38144cdc594b93dfd4a11c224a2fc29a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7356ba41470f19d8045bb533268886f60f6a0e9690dc21175f98f22fd0637056\"" May 13 00:42:00.836404 env[1305]: time="2025-05-13T00:42:00.836378786Z" level=info msg="StartContainer for \"7356ba41470f19d8045bb533268886f60f6a0e9690dc21175f98f22fd0637056\"" May 13 00:42:00.886106 env[1305]: time="2025-05-13T00:42:00.881976937Z" level=info msg="StartContainer for \"7356ba41470f19d8045bb533268886f60f6a0e9690dc21175f98f22fd0637056\" returns successfully" May 13 00:42:00.971890 kubelet[2173]: E0513 00:42:00.971852 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:01.081053 kubelet[2173]: I0513 00:42:01.080985 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-sdkhk" podStartSLOduration=2.080965976 podStartE2EDuration="2.080965976s" podCreationTimestamp="2025-05-13 00:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:01.080670866 +0000 UTC m=+17.240630498" watchObservedRunningTime="2025-05-13 00:42:01.080965976 +0000 UTC m=+17.240925618" May 13 00:42:06.102480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881488580.mount: Deactivated successfully. May 13 00:42:09.461007 systemd[1]: Started sshd@5-10.0.0.51:22-10.0.0.1:53316.service. May 13 00:42:09.494088 sshd[2559]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:09.495351 sshd[2559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:09.499264 systemd-logind[1290]: New session 6 of user core. May 13 00:42:09.500016 systemd[1]: Started session-6.scope. May 13 00:42:09.621331 sshd[2559]: pam_unix(sshd:session): session closed for user core May 13 00:42:09.624175 systemd[1]: sshd@5-10.0.0.51:22-10.0.0.1:53316.service: Deactivated successfully. May 13 00:42:09.625382 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:42:09.625896 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. May 13 00:42:09.627370 systemd-logind[1290]: Removed session 6. May 13 00:42:12.580733 env[1305]: time="2025-05-13T00:42:12.580660224Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:12.587076 env[1305]: time="2025-05-13T00:42:12.586997693Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:12.589730 env[1305]: time="2025-05-13T00:42:12.589689693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 13 00:42:12.590619 env[1305]: time="2025-05-13T00:42:12.590571290Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 13 00:42:12.592018 env[1305]: time="2025-05-13T00:42:12.591987225Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:42:12.594075 env[1305]: time="2025-05-13T00:42:12.594026324Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:42:12.612239 env[1305]: time="2025-05-13T00:42:12.612185288Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\"" May 13 00:42:12.612820 env[1305]: time="2025-05-13T00:42:12.612782845Z" level=info msg="StartContainer for 
\"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\"" May 13 00:42:13.507864 env[1305]: time="2025-05-13T00:42:13.507811728Z" level=info msg="StartContainer for \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\" returns successfully" May 13 00:42:13.561819 env[1305]: time="2025-05-13T00:42:13.561766722Z" level=info msg="shim disconnected" id=565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04 May 13 00:42:13.561819 env[1305]: time="2025-05-13T00:42:13.561810143Z" level=warning msg="cleaning up after shim disconnected" id=565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04 namespace=k8s.io May 13 00:42:13.561819 env[1305]: time="2025-05-13T00:42:13.561819420Z" level=info msg="cleaning up dead shim" May 13 00:42:13.568678 env[1305]: time="2025-05-13T00:42:13.568631458Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2623 runtime=io.containerd.runc.v2\n" May 13 00:42:13.609215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04-rootfs.mount: Deactivated successfully. May 13 00:42:14.514303 kubelet[2173]: E0513 00:42:14.514255 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:14.516045 env[1305]: time="2025-05-13T00:42:14.516008322Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:42:14.544968 env[1305]: time="2025-05-13T00:42:14.544909511Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\"" May 13 00:42:14.545491 env[1305]: time="2025-05-13T00:42:14.545462565Z" level=info msg="StartContainer for \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\"" May 13 00:42:14.607453 env[1305]: time="2025-05-13T00:42:14.605761770Z" level=info msg="StartContainer for \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\" returns successfully" May 13 00:42:14.612777 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:42:14.613137 systemd[1]: Stopped systemd-sysctl.service. May 13 00:42:14.613367 systemd[1]: Stopping systemd-sysctl.service... May 13 00:42:14.616269 systemd[1]: Starting systemd-sysctl.service... May 13 00:42:14.619250 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 00:42:14.627543 systemd[1]: Started sshd@6-10.0.0.51:22-10.0.0.1:53320.service. May 13 00:42:14.637477 systemd[1]: Finished systemd-sysctl.service. May 13 00:42:14.646923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83-rootfs.mount: Deactivated successfully. May 13 00:42:14.780922 sshd[2681]: Accepted publickey for core from 10.0.0.1 port 53320 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA May 13 00:42:14.782480 sshd[2681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 13 00:42:14.826685 systemd-logind[1290]: New session 7 of user core. May 13 00:42:14.827782 systemd[1]: Started session-7.scope. 
May 13 00:42:14.915688 env[1305]: time="2025-05-13T00:42:14.915641904Z" level=info msg="shim disconnected" id=04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83 May 13 00:42:14.915889 env[1305]: time="2025-05-13T00:42:14.915869160Z" level=warning msg="cleaning up after shim disconnected" id=04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83 namespace=k8s.io May 13 00:42:14.916002 env[1305]: time="2025-05-13T00:42:14.915945222Z" level=info msg="cleaning up dead shim" May 13 00:42:14.923977 env[1305]: time="2025-05-13T00:42:14.923930985Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2702 runtime=io.containerd.runc.v2\n" May 13 00:42:14.936251 sshd[2681]: pam_unix(sshd:session): session closed for user core May 13 00:42:14.938653 systemd[1]: sshd@6-10.0.0.51:22-10.0.0.1:53320.service: Deactivated successfully. May 13 00:42:14.939823 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. May 13 00:42:14.939888 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:42:14.940733 systemd-logind[1290]: Removed session 7. May 13 00:42:15.517968 kubelet[2173]: E0513 00:42:15.517708 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:42:15.522829 env[1305]: time="2025-05-13T00:42:15.522610200Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:42:15.776897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645596559.mount: Deactivated successfully. 
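[Annotation] The dns.go:153 "Nameserver limits exceeded" error that recurs throughout this log is persistent but benign: kubelet caps a pod's resolv.conf at three nameservers (the glibc MAXNS limit), so a host resolv.conf listing more than three is truncated to the applied line "1.1.1.1 1.0.0.1 8.8.8.8", and the event is re-logged on every pod sync. A minimal reproduction of the truncation (the dropped fourth server is invented for illustration; the log does not show what was cut):

```go
package main

import (
	"fmt"
	"strings"
)

// kubelet allows at most three nameservers per pod, mirroring glibc's MAXNS.
const maxNameservers = 3

func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// First three values are the surviving line from the log; the fourth is a
	// hypothetical extra entry standing in for whatever the host actually had.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, truncated := applyNameserverLimit(host)
	fmt.Printf("applied nameserver line: %s (truncated=%v)\n",
		strings.Join(applied, " "), truncated)
}
```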
May 13 00:42:15.782279 env[1305]: time="2025-05-13T00:42:15.782212548Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\""
May 13 00:42:15.783031 env[1305]: time="2025-05-13T00:42:15.782987857Z" level=info msg="StartContainer for \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\""
May 13 00:42:15.811451 env[1305]: time="2025-05-13T00:42:15.811389471Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:15.814652 env[1305]: time="2025-05-13T00:42:15.814619869Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:15.816083 env[1305]: time="2025-05-13T00:42:15.816053818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 13 00:42:15.816254 env[1305]: time="2025-05-13T00:42:15.816224768Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 13 00:42:15.818902 env[1305]: time="2025-05-13T00:42:15.818859423Z" level=info msg="CreateContainer within sandbox \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 00:42:16.004173 env[1305]: time="2025-05-13T00:42:16.004106417Z" level=info msg="StartContainer for \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\" returns successfully"
May 13 00:42:16.166860 env[1305]: time="2025-05-13T00:42:16.166784050Z" level=info msg="CreateContainer within sandbox \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\""
May 13 00:42:16.167704 env[1305]: time="2025-05-13T00:42:16.167589035Z" level=info msg="shim disconnected" id=ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000
May 13 00:42:16.167829 env[1305]: time="2025-05-13T00:42:16.167803947Z" level=warning msg="cleaning up after shim disconnected" id=ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000 namespace=k8s.io
May 13 00:42:16.167829 env[1305]: time="2025-05-13T00:42:16.167824796Z" level=info msg="cleaning up dead shim"
May 13 00:42:16.168850 env[1305]: time="2025-05-13T00:42:16.168790992Z" level=info msg="StartContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\""
May 13 00:42:16.174979 env[1305]: time="2025-05-13T00:42:16.174911603Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2760 runtime=io.containerd.runc.v2\n"
May 13 00:42:16.213341 env[1305]: time="2025-05-13T00:42:16.213287419Z" level=info msg="StartContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" returns successfully"
May 13 00:42:16.520897 kubelet[2173]: E0513 00:42:16.520782 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:16.523256 kubelet[2173]: E0513 00:42:16.523215 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:16.524874 env[1305]: time="2025-05-13T00:42:16.524843559Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:42:16.773838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000-rootfs.mount: Deactivated successfully.
May 13 00:42:16.827882 env[1305]: time="2025-05-13T00:42:16.827818299Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\""
May 13 00:42:16.828616 env[1305]: time="2025-05-13T00:42:16.828590172Z" level=info msg="StartContainer for \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\""
May 13 00:42:16.888002 kubelet[2173]: I0513 00:42:16.887947 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-gd579" podStartSLOduration=1.8952048000000001 podStartE2EDuration="16.887905089s" podCreationTimestamp="2025-05-13 00:42:00 +0000 UTC" firstStartedPulling="2025-05-13 00:42:00.824700757 +0000 UTC m=+16.984660399" lastFinishedPulling="2025-05-13 00:42:15.817401046 +0000 UTC m=+31.977360688" observedRunningTime="2025-05-13 00:42:16.808327044 +0000 UTC m=+32.968286686" watchObservedRunningTime="2025-05-13 00:42:16.887905089 +0000 UTC m=+33.047864731"
May 13 00:42:17.060412 env[1305]: time="2025-05-13T00:42:17.060278570Z" level=info msg="StartContainer for \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\" returns successfully"
May 13 00:42:17.172715 env[1305]: time="2025-05-13T00:42:17.172643840Z" level=info msg="shim disconnected" id=bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5
May 13 00:42:17.172715 env[1305]: time="2025-05-13T00:42:17.172695277Z" level=warning msg="cleaning up after shim disconnected" id=bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5 namespace=k8s.io
May 13 00:42:17.172715 env[1305]: time="2025-05-13T00:42:17.172704605Z" level=info msg="cleaning up dead shim"
May 13 00:42:17.213726 env[1305]: time="2025-05-13T00:42:17.213656365Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:42:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2853 runtime=io.containerd.runc.v2\n"
May 13 00:42:17.526320 kubelet[2173]: E0513 00:42:17.526295 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:17.526320 kubelet[2173]: E0513 00:42:17.526312 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:17.528146 env[1305]: time="2025-05-13T00:42:17.528100689Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:42:17.545534 env[1305]: time="2025-05-13T00:42:17.545477706Z" level=info msg="CreateContainer within sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\""
May 13 00:42:17.546008 env[1305]: time="2025-05-13T00:42:17.545972552Z" level=info msg="StartContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\""
May 13 00:42:17.584032 env[1305]: time="2025-05-13T00:42:17.583987852Z" level=info msg="StartContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" returns successfully"
May 13 00:42:17.724759 kubelet[2173]: I0513 00:42:17.724722 2173 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 00:42:17.775273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5-rootfs.mount: Deactivated successfully.
May 13 00:42:17.802605 kubelet[2173]: I0513 00:42:17.799873 2173 topology_manager.go:215] "Topology Admit Handler" podUID="4ea74986-d7b3-4e6c-9c99-66dd3033b9a1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rh8vx"
May 13 00:42:17.805960 kubelet[2173]: I0513 00:42:17.805915 2173 topology_manager.go:215] "Topology Admit Handler" podUID="19c29888-797c-44e5-92a6-93bf267f34d1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ghf77"
May 13 00:42:17.898280 kubelet[2173]: I0513 00:42:17.898235 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ea74986-d7b3-4e6c-9c99-66dd3033b9a1-config-volume\") pod \"coredns-7db6d8ff4d-rh8vx\" (UID: \"4ea74986-d7b3-4e6c-9c99-66dd3033b9a1\") " pod="kube-system/coredns-7db6d8ff4d-rh8vx"
May 13 00:42:17.898280 kubelet[2173]: I0513 00:42:17.898277 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twt6d\" (UniqueName: \"kubernetes.io/projected/4ea74986-d7b3-4e6c-9c99-66dd3033b9a1-kube-api-access-twt6d\") pod \"coredns-7db6d8ff4d-rh8vx\" (UID: \"4ea74986-d7b3-4e6c-9c99-66dd3033b9a1\") " pod="kube-system/coredns-7db6d8ff4d-rh8vx"
May 13 00:42:17.999435 kubelet[2173]: I0513 00:42:17.999399 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19c29888-797c-44e5-92a6-93bf267f34d1-config-volume\") pod \"coredns-7db6d8ff4d-ghf77\" (UID: \"19c29888-797c-44e5-92a6-93bf267f34d1\") " pod="kube-system/coredns-7db6d8ff4d-ghf77"
May 13 00:42:17.999660 kubelet[2173]: I0513 00:42:17.999639 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjq2\" (UniqueName: \"kubernetes.io/projected/19c29888-797c-44e5-92a6-93bf267f34d1-kube-api-access-hrjq2\") pod \"coredns-7db6d8ff4d-ghf77\" (UID: \"19c29888-797c-44e5-92a6-93bf267f34d1\") " pod="kube-system/coredns-7db6d8ff4d-ghf77"
May 13 00:42:18.105160 kubelet[2173]: E0513 00:42:18.105135 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:18.105757 env[1305]: time="2025-05-13T00:42:18.105728418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rh8vx,Uid:4ea74986-d7b3-4e6c-9c99-66dd3033b9a1,Namespace:kube-system,Attempt:0,}"
May 13 00:42:18.109994 kubelet[2173]: E0513 00:42:18.109960 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:18.110289 env[1305]: time="2025-05-13T00:42:18.110253880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ghf77,Uid:19c29888-797c-44e5-92a6-93bf267f34d1,Namespace:kube-system,Attempt:0,}"
May 13 00:42:18.530399 kubelet[2173]: E0513 00:42:18.529728 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:18.575357 kubelet[2173]: I0513 00:42:18.575281 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-spctd" podStartSLOduration=7.793611884 podStartE2EDuration="19.575261672s" podCreationTimestamp="2025-05-13 00:41:59 +0000 UTC" firstStartedPulling="2025-05-13 00:42:00.810133296 +0000 UTC m=+16.970092938" lastFinishedPulling="2025-05-13 00:42:12.591783084 +0000 UTC m=+28.751742726" observedRunningTime="2025-05-13 00:42:18.574911828 +0000 UTC m=+34.734871480" watchObservedRunningTime="2025-05-13 00:42:18.575261672 +0000 UTC m=+34.735221314"
May 13 00:42:19.532644 kubelet[2173]: E0513 00:42:19.532603 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:19.939923 systemd[1]: Started sshd@7-10.0.0.51:22-10.0.0.1:34756.service.
May 13 00:42:19.977126 sshd[3030]: Accepted publickey for core from 10.0.0.1 port 34756 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:19.978385 sshd[3030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:19.982659 systemd-logind[1290]: New session 8 of user core.
May 13 00:42:19.983366 systemd[1]: Started session-8.scope.
May 13 00:42:20.099592 sshd[3030]: pam_unix(sshd:session): session closed for user core
May 13 00:42:20.102086 systemd[1]: sshd@7-10.0.0.51:22-10.0.0.1:34756.service: Deactivated successfully.
May 13 00:42:20.103146 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit.
May 13 00:42:20.103209 systemd[1]: session-8.scope: Deactivated successfully.
May 13 00:42:20.104002 systemd-logind[1290]: Removed session 8.
May 13 00:42:20.382234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
May 13 00:42:20.382357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
May 13 00:42:20.382372 systemd-networkd[1081]: cilium_host: Link UP
May 13 00:42:20.382647 systemd-networkd[1081]: cilium_net: Link UP
May 13 00:42:20.382845 systemd-networkd[1081]: cilium_net: Gained carrier
May 13 00:42:20.383026 systemd-networkd[1081]: cilium_host: Gained carrier
May 13 00:42:20.466704 systemd-networkd[1081]: cilium_vxlan: Link UP
May 13 00:42:20.466713 systemd-networkd[1081]: cilium_vxlan: Gained carrier
May 13 00:42:20.524696 systemd-networkd[1081]: cilium_host: Gained IPv6LL
May 13 00:42:20.534645 kubelet[2173]: E0513 00:42:20.534613 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:20.650591 kernel: NET: Registered PF_ALG protocol family
May 13 00:42:21.197689 systemd-networkd[1081]: cilium_net: Gained IPv6LL
May 13 00:42:21.236396 systemd-networkd[1081]: lxc_health: Link UP
May 13 00:42:21.251596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:42:21.252627 systemd-networkd[1081]: lxc_health: Gained carrier
May 13 00:42:21.682914 systemd-networkd[1081]: lxc251275292f25: Link UP
May 13 00:42:21.691587 kernel: eth0: renamed from tmpd7800
May 13 00:42:21.698592 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 13 00:42:21.698707 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc251275292f25: link becomes ready
May 13 00:42:21.698789 systemd-networkd[1081]: lxc251275292f25: Gained carrier
May 13 00:42:21.709309 systemd-networkd[1081]: lxcd8f322d0d54c: Link UP
May 13 00:42:21.715700 kernel: eth0: renamed from tmp8b159
May 13 00:42:21.721707 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd8f322d0d54c: link becomes ready
May 13 00:42:21.722272 systemd-networkd[1081]: lxcd8f322d0d54c: Gained carrier
May 13 00:42:22.213044 kubelet[2173]: E0513 00:42:22.213015 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:22.294762 systemd-networkd[1081]: cilium_vxlan: Gained IPv6LL
May 13 00:42:22.412778 systemd-networkd[1081]: lxc_health: Gained IPv6LL
May 13 00:42:22.732726 systemd-networkd[1081]: lxc251275292f25: Gained IPv6LL
May 13 00:42:23.756697 systemd-networkd[1081]: lxcd8f322d0d54c: Gained IPv6LL
May 13 00:42:24.963527 env[1305]: time="2025-05-13T00:42:24.963407779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:42:24.963925 env[1305]: time="2025-05-13T00:42:24.963501234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:42:24.963925 env[1305]: time="2025-05-13T00:42:24.963514780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:42:24.963925 env[1305]: time="2025-05-13T00:42:24.963709284Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b1591846aa97b43524682d581bb4a90af30b641c0e0dd25c7967220992929dc pid=3441 runtime=io.containerd.runc.v2
May 13 00:42:24.965094 env[1305]: time="2025-05-13T00:42:24.965006580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:42:24.965094 env[1305]: time="2025-05-13T00:42:24.965054840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:42:24.965094 env[1305]: time="2025-05-13T00:42:24.965065951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:42:24.965298 env[1305]: time="2025-05-13T00:42:24.965248703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d78009521f4962ca7d63f127558a33fa966cd354b29d62b1eed19d63b77b8c43 pid=3449 runtime=io.containerd.runc.v2
May 13 00:42:24.993464 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:42:24.995060 systemd-resolved[1221]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 00:42:25.021925 env[1305]: time="2025-05-13T00:42:25.021858805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rh8vx,Uid:4ea74986-d7b3-4e6c-9c99-66dd3033b9a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78009521f4962ca7d63f127558a33fa966cd354b29d62b1eed19d63b77b8c43\""
May 13 00:42:25.028571 env[1305]: time="2025-05-13T00:42:25.027181113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ghf77,Uid:19c29888-797c-44e5-92a6-93bf267f34d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b1591846aa97b43524682d581bb4a90af30b641c0e0dd25c7967220992929dc\""
May 13 00:42:25.028702 kubelet[2173]: E0513 00:42:25.027401 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:25.028702 kubelet[2173]: E0513 00:42:25.027708 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:25.029201 env[1305]: time="2025-05-13T00:42:25.029108498Z" level=info msg="CreateContainer within sandbox \"d78009521f4962ca7d63f127558a33fa966cd354b29d62b1eed19d63b77b8c43\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:42:25.030072 env[1305]: time="2025-05-13T00:42:25.030038168Z" level=info msg="CreateContainer within sandbox \"8b1591846aa97b43524682d581bb4a90af30b641c0e0dd25c7967220992929dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 13 00:42:25.079924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36217061.mount: Deactivated successfully.
May 13 00:42:25.088299 env[1305]: time="2025-05-13T00:42:25.088247416Z" level=info msg="CreateContainer within sandbox \"d78009521f4962ca7d63f127558a33fa966cd354b29d62b1eed19d63b77b8c43\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"20e51d875baeb289d49361136c5cd7b6885fabf1099b3d29cec0ccaa6ea8acd2\""
May 13 00:42:25.089016 env[1305]: time="2025-05-13T00:42:25.088985026Z" level=info msg="StartContainer for \"20e51d875baeb289d49361136c5cd7b6885fabf1099b3d29cec0ccaa6ea8acd2\""
May 13 00:42:25.090453 env[1305]: time="2025-05-13T00:42:25.090403210Z" level=info msg="CreateContainer within sandbox \"8b1591846aa97b43524682d581bb4a90af30b641c0e0dd25c7967220992929dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea9da740db3ed7e7749b5945844f335d3e935cdb4a5ec5f2d451e16a78cfbd95\""
May 13 00:42:25.091021 env[1305]: time="2025-05-13T00:42:25.090997942Z" level=info msg="StartContainer for \"ea9da740db3ed7e7749b5945844f335d3e935cdb4a5ec5f2d451e16a78cfbd95\""
May 13 00:42:25.104137 systemd[1]: Started sshd@8-10.0.0.51:22-10.0.0.1:34758.service.
May 13 00:42:25.175473 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 34758 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:25.176956 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:25.178691 env[1305]: time="2025-05-13T00:42:25.178640119Z" level=info msg="StartContainer for \"20e51d875baeb289d49361136c5cd7b6885fabf1099b3d29cec0ccaa6ea8acd2\" returns successfully"
May 13 00:42:25.180568 env[1305]: time="2025-05-13T00:42:25.180480392Z" level=info msg="StartContainer for \"ea9da740db3ed7e7749b5945844f335d3e935cdb4a5ec5f2d451e16a78cfbd95\" returns successfully"
May 13 00:42:25.182663 systemd[1]: Started session-9.scope.
May 13 00:42:25.183746 systemd-logind[1290]: New session 9 of user core.
May 13 00:42:25.318172 sshd[3528]: pam_unix(sshd:session): session closed for user core
May 13 00:42:25.320861 systemd[1]: sshd@8-10.0.0.51:22-10.0.0.1:34758.service: Deactivated successfully.
May 13 00:42:25.321797 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit.
May 13 00:42:25.321809 systemd[1]: session-9.scope: Deactivated successfully.
May 13 00:42:25.322520 systemd-logind[1290]: Removed session 9.
May 13 00:42:25.543876 kubelet[2173]: E0513 00:42:25.543844 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:25.547059 kubelet[2173]: E0513 00:42:25.546993 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:25.554305 kubelet[2173]: I0513 00:42:25.554249 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ghf77" podStartSLOduration=25.554230762 podStartE2EDuration="25.554230762s" podCreationTimestamp="2025-05-13 00:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:25.553507789 +0000 UTC m=+41.713467431" watchObservedRunningTime="2025-05-13 00:42:25.554230762 +0000 UTC m=+41.714190404"
May 13 00:42:25.579031 kubelet[2173]: I0513 00:42:25.578753 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rh8vx" podStartSLOduration=25.578732364 podStartE2EDuration="25.578732364s" podCreationTimestamp="2025-05-13 00:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:42:25.566057108 +0000 UTC m=+41.726016740" watchObservedRunningTime="2025-05-13 00:42:25.578732364 +0000 UTC m=+41.738692006"
May 13 00:42:26.548610 kubelet[2173]: E0513 00:42:26.548546 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:26.549022 kubelet[2173]: E0513 00:42:26.548796 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:27.550823 kubelet[2173]: E0513 00:42:27.550788 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:27.550823 kubelet[2173]: E0513 00:42:27.550788 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:30.187570 kubelet[2173]: I0513 00:42:30.187505 2173 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 13 00:42:30.188237 kubelet[2173]: E0513 00:42:30.188214 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:30.321404 systemd[1]: Started sshd@9-10.0.0.51:22-10.0.0.1:50224.service.
May 13 00:42:30.356612 sshd[3606]: Accepted publickey for core from 10.0.0.1 port 50224 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:30.358020 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:30.361870 systemd-logind[1290]: New session 10 of user core.
May 13 00:42:30.362803 systemd[1]: Started session-10.scope.
May 13 00:42:30.464060 sshd[3606]: pam_unix(sshd:session): session closed for user core
May 13 00:42:30.466701 systemd[1]: Started sshd@10-10.0.0.51:22-10.0.0.1:50232.service.
May 13 00:42:30.468674 systemd[1]: sshd@9-10.0.0.51:22-10.0.0.1:50224.service: Deactivated successfully.
May 13 00:42:30.469692 systemd[1]: session-10.scope: Deactivated successfully.
May 13 00:42:30.470160 systemd-logind[1290]: Session 10 logged out. Waiting for processes to exit.
May 13 00:42:30.470975 systemd-logind[1290]: Removed session 10.
May 13 00:42:30.497680 sshd[3619]: Accepted publickey for core from 10.0.0.1 port 50232 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:30.498607 sshd[3619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:30.501805 systemd-logind[1290]: New session 11 of user core.
May 13 00:42:30.502629 systemd[1]: Started session-11.scope.
May 13 00:42:30.555783 kubelet[2173]: E0513 00:42:30.555732 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:42:30.656023 sshd[3619]: pam_unix(sshd:session): session closed for user core
May 13 00:42:30.658900 systemd[1]: Started sshd@11-10.0.0.51:22-10.0.0.1:50244.service.
May 13 00:42:30.659760 systemd[1]: sshd@10-10.0.0.51:22-10.0.0.1:50232.service: Deactivated successfully.
May 13 00:42:30.660852 systemd[1]: session-11.scope: Deactivated successfully.
May 13 00:42:30.661059 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit.
May 13 00:42:30.662315 systemd-logind[1290]: Removed session 11.
May 13 00:42:30.697715 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 50244 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:30.698808 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:30.702229 systemd-logind[1290]: New session 12 of user core.
May 13 00:42:30.703193 systemd[1]: Started session-12.scope.
May 13 00:42:30.821871 sshd[3631]: pam_unix(sshd:session): session closed for user core
May 13 00:42:30.824176 systemd[1]: sshd@11-10.0.0.51:22-10.0.0.1:50244.service: Deactivated successfully.
May 13 00:42:30.825002 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit.
May 13 00:42:30.825044 systemd[1]: session-12.scope: Deactivated successfully.
May 13 00:42:30.825784 systemd-logind[1290]: Removed session 12.
May 13 00:42:35.825327 systemd[1]: Started sshd@12-10.0.0.51:22-10.0.0.1:50260.service.
May 13 00:42:35.856125 sshd[3649]: Accepted publickey for core from 10.0.0.1 port 50260 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:35.857308 sshd[3649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:35.861351 systemd-logind[1290]: New session 13 of user core.
May 13 00:42:35.862060 systemd[1]: Started session-13.scope.
May 13 00:42:35.974695 sshd[3649]: pam_unix(sshd:session): session closed for user core
May 13 00:42:35.977758 systemd[1]: sshd@12-10.0.0.51:22-10.0.0.1:50260.service: Deactivated successfully.
May 13 00:42:35.978874 systemd[1]: session-13.scope: Deactivated successfully.
May 13 00:42:35.978886 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit.
May 13 00:42:35.979689 systemd-logind[1290]: Removed session 13.
May 13 00:42:40.978052 systemd[1]: Started sshd@13-10.0.0.51:22-10.0.0.1:42534.service.
May 13 00:42:41.031438 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 42534 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:41.032665 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:41.037171 systemd[1]: Started session-14.scope.
May 13 00:42:41.037394 systemd-logind[1290]: New session 14 of user core.
May 13 00:42:41.142713 sshd[3663]: pam_unix(sshd:session): session closed for user core
May 13 00:42:41.145711 systemd[1]: Started sshd@14-10.0.0.51:22-10.0.0.1:42538.service.
May 13 00:42:41.146251 systemd[1]: sshd@13-10.0.0.51:22-10.0.0.1:42534.service: Deactivated successfully.
May 13 00:42:41.147780 systemd[1]: session-14.scope: Deactivated successfully.
May 13 00:42:41.148132 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit.
May 13 00:42:41.148935 systemd-logind[1290]: Removed session 14.
May 13 00:42:41.175509 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 42538 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:41.176656 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:41.180000 systemd-logind[1290]: New session 15 of user core.
May 13 00:42:41.180886 systemd[1]: Started session-15.scope.
May 13 00:42:41.445444 sshd[3676]: pam_unix(sshd:session): session closed for user core
May 13 00:42:41.448224 systemd[1]: Started sshd@15-10.0.0.51:22-10.0.0.1:42544.service.
May 13 00:42:41.448854 systemd[1]: sshd@14-10.0.0.51:22-10.0.0.1:42538.service: Deactivated successfully.
May 13 00:42:41.449831 systemd[1]: session-15.scope: Deactivated successfully.
May 13 00:42:41.450186 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit.
May 13 00:42:41.451011 systemd-logind[1290]: Removed session 15.
May 13 00:42:41.482329 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 42544 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:41.483311 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:41.486652 systemd-logind[1290]: New session 16 of user core.
May 13 00:42:41.487295 systemd[1]: Started session-16.scope.
May 13 00:42:42.870423 sshd[3688]: pam_unix(sshd:session): session closed for user core
May 13 00:42:42.873046 systemd[1]: Started sshd@16-10.0.0.51:22-10.0.0.1:42548.service.
May 13 00:42:42.878285 systemd[1]: sshd@15-10.0.0.51:22-10.0.0.1:42544.service: Deactivated successfully.
May 13 00:42:42.879481 systemd[1]: session-16.scope: Deactivated successfully.
May 13 00:42:42.879898 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit.
May 13 00:42:42.882167 systemd-logind[1290]: Removed session 16.
May 13 00:42:42.909462 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 42548 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:42.911018 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:42.915635 systemd-logind[1290]: New session 17 of user core.
May 13 00:42:42.916853 systemd[1]: Started session-17.scope.
May 13 00:42:43.189478 sshd[3704]: pam_unix(sshd:session): session closed for user core
May 13 00:42:43.191866 systemd[1]: Started sshd@17-10.0.0.51:22-10.0.0.1:42558.service.
May 13 00:42:43.193636 systemd[1]: sshd@16-10.0.0.51:22-10.0.0.1:42548.service: Deactivated successfully.
May 13 00:42:43.195171 systemd[1]: session-17.scope: Deactivated successfully.
May 13 00:42:43.196039 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit.
May 13 00:42:43.197268 systemd-logind[1290]: Removed session 17.
May 13 00:42:43.231347 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 42558 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:43.232942 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:43.238208 systemd-logind[1290]: New session 18 of user core.
May 13 00:42:43.239279 systemd[1]: Started session-18.scope.
May 13 00:42:43.368373 sshd[3718]: pam_unix(sshd:session): session closed for user core
May 13 00:42:43.371331 systemd[1]: sshd@17-10.0.0.51:22-10.0.0.1:42558.service: Deactivated successfully.
May 13 00:42:43.372285 systemd[1]: session-18.scope: Deactivated successfully.
May 13 00:42:43.373341 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit.
May 13 00:42:43.374657 systemd-logind[1290]: Removed session 18.
May 13 00:42:48.371859 systemd[1]: Started sshd@18-10.0.0.51:22-10.0.0.1:54336.service.
May 13 00:42:48.402099 sshd[3736]: Accepted publickey for core from 10.0.0.1 port 54336 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:48.403155 sshd[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:48.406588 systemd-logind[1290]: New session 19 of user core.
May 13 00:42:48.407318 systemd[1]: Started session-19.scope.
May 13 00:42:48.502494 sshd[3736]: pam_unix(sshd:session): session closed for user core
May 13 00:42:48.504720 systemd[1]: sshd@18-10.0.0.51:22-10.0.0.1:54336.service: Deactivated successfully.
May 13 00:42:48.505759 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit.
May 13 00:42:48.505898 systemd[1]: session-19.scope: Deactivated successfully.
May 13 00:42:48.506584 systemd-logind[1290]: Removed session 19.
May 13 00:42:53.506769 systemd[1]: Started sshd@19-10.0.0.51:22-10.0.0.1:54352.service.
May 13 00:42:53.537015 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 54352 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:53.538049 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:53.541276 systemd-logind[1290]: New session 20 of user core.
May 13 00:42:53.541946 systemd[1]: Started session-20.scope.
May 13 00:42:53.638117 sshd[3754]: pam_unix(sshd:session): session closed for user core
May 13 00:42:53.640446 systemd[1]: sshd@19-10.0.0.51:22-10.0.0.1:54352.service: Deactivated successfully.
May 13 00:42:53.641492 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit.
May 13 00:42:53.641540 systemd[1]: session-20.scope: Deactivated successfully.
May 13 00:42:53.642593 systemd-logind[1290]: Removed session 20.
May 13 00:42:58.641717 systemd[1]: Started sshd@20-10.0.0.51:22-10.0.0.1:37666.service.
May 13 00:42:58.675299 sshd[3768]: Accepted publickey for core from 10.0.0.1 port 37666 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:42:58.676414 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:42:58.679843 systemd-logind[1290]: New session 21 of user core.
May 13 00:42:58.680883 systemd[1]: Started session-21.scope.
May 13 00:42:58.782853 sshd[3768]: pam_unix(sshd:session): session closed for user core
May 13 00:42:58.785505 systemd[1]: sshd@20-10.0.0.51:22-10.0.0.1:37666.service: Deactivated successfully.
May 13 00:42:58.786850 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit.
May 13 00:42:58.786892 systemd[1]: session-21.scope: Deactivated successfully.
May 13 00:42:58.787902 systemd-logind[1290]: Removed session 21.
May 13 00:43:03.786077 systemd[1]: Started sshd@21-10.0.0.51:22-10.0.0.1:37676.service.
May 13 00:43:03.815996 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 37676 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:43:03.816937 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:43:03.819887 systemd-logind[1290]: New session 22 of user core.
May 13 00:43:03.820775 systemd[1]: Started session-22.scope.
May 13 00:43:03.914500 sshd[3784]: pam_unix(sshd:session): session closed for user core
May 13 00:43:03.917177 systemd[1]: Started sshd@22-10.0.0.51:22-10.0.0.1:37678.service.
May 13 00:43:03.917952 systemd[1]: sshd@21-10.0.0.51:22-10.0.0.1:37676.service: Deactivated successfully.
May 13 00:43:03.918809 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit.
May 13 00:43:03.918872 systemd[1]: session-22.scope: Deactivated successfully.
May 13 00:43:03.919723 systemd-logind[1290]: Removed session 22.
May 13 00:43:03.951648 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 37678 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:43:03.952695 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:43:03.955933 systemd-logind[1290]: New session 23 of user core.
May 13 00:43:03.956872 systemd[1]: Started session-23.scope.
May 13 00:43:05.341962 env[1305]: time="2025-05-13T00:43:05.341911728Z" level=info msg="StopContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" with timeout 30 (s)"
May 13 00:43:05.342453 env[1305]: time="2025-05-13T00:43:05.342248251Z" level=info msg="Stop container \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" with signal terminated"
May 13 00:43:05.365161 env[1305]: time="2025-05-13T00:43:05.365096497Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:43:05.370070 env[1305]: time="2025-05-13T00:43:05.370035825Z" level=info msg="StopContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" with timeout 2 (s)"
May 13 00:43:05.370277 env[1305]: time="2025-05-13T00:43:05.370252930Z" level=info msg="Stop container \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" with signal terminated"
May 13 00:43:05.372503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c-rootfs.mount: Deactivated successfully.
May 13 00:43:05.376247 systemd-networkd[1081]: lxc_health: Link DOWN
May 13 00:43:05.376255 systemd-networkd[1081]: lxc_health: Lost carrier
May 13 00:43:05.380023 env[1305]: time="2025-05-13T00:43:05.379981349Z" level=info msg="shim disconnected" id=d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c
May 13 00:43:05.380115 env[1305]: time="2025-05-13T00:43:05.380028529Z" level=warning msg="cleaning up after shim disconnected" id=d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c namespace=k8s.io
May 13 00:43:05.380115 env[1305]: time="2025-05-13T00:43:05.380041013Z" level=info msg="cleaning up dead shim"
May 13 00:43:05.385941 env[1305]: time="2025-05-13T00:43:05.385903828Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3857 runtime=io.containerd.runc.v2\n"
May 13 00:43:05.388836 env[1305]: time="2025-05-13T00:43:05.388798746Z" level=info msg="StopContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" returns successfully"
May 13 00:43:05.389431 env[1305]: time="2025-05-13T00:43:05.389410225Z" level=info msg="StopPodSandbox for \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\""
May 13 00:43:05.389501 env[1305]: time="2025-05-13T00:43:05.389468688Z" level=info msg="Container to stop \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 00:43:05.396910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d-shm.mount: Deactivated successfully.
May 13 00:43:05.422887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d-rootfs.mount: Deactivated successfully.
May 13 00:43:05.424963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca-rootfs.mount: Deactivated successfully.
May 13 00:43:05.427500 env[1305]: time="2025-05-13T00:43:05.427443528Z" level=info msg="shim disconnected" id=1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d May 13 00:43:05.427500 env[1305]: time="2025-05-13T00:43:05.427498193Z" level=warning msg="cleaning up after shim disconnected" id=1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d namespace=k8s.io May 13 00:43:05.427621 env[1305]: time="2025-05-13T00:43:05.427510065Z" level=info msg="cleaning up dead shim" May 13 00:43:05.427621 env[1305]: time="2025-05-13T00:43:05.427491329Z" level=info msg="shim disconnected" id=f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca May 13 00:43:05.427621 env[1305]: time="2025-05-13T00:43:05.427533149Z" level=warning msg="cleaning up after shim disconnected" id=f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca namespace=k8s.io May 13 00:43:05.427621 env[1305]: time="2025-05-13T00:43:05.427543048Z" level=info msg="cleaning up dead shim" May 13 00:43:05.433520 env[1305]: time="2025-05-13T00:43:05.433477400Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3906 runtime=io.containerd.runc.v2\n" May 13 00:43:05.433985 env[1305]: time="2025-05-13T00:43:05.433960715Z" level=info msg="TearDown network for sandbox \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\" successfully" May 13 00:43:05.433985 env[1305]: time="2025-05-13T00:43:05.433984089Z" level=info msg="StopPodSandbox for \"1d8db19d640b066a9750764a0d91b8b468cf1e167eeaa6bc33d259f05df7725d\" returns successfully" May 13 00:43:05.434431 env[1305]: time="2025-05-13T00:43:05.434411226Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3907 runtime=io.containerd.runc.v2\n" May 13 00:43:05.436994 env[1305]: time="2025-05-13T00:43:05.436953609Z" level=info msg="StopContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" returns successfully" May 13 00:43:05.437424 env[1305]: time="2025-05-13T00:43:05.437405413Z" level=info msg="StopPodSandbox for \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\"" May 13 00:43:05.437541 env[1305]: time="2025-05-13T00:43:05.437510985Z" level=info msg="Container to stop \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.437541 env[1305]: time="2025-05-13T00:43:05.437535592Z" level=info msg="Container to stop \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.437652 env[1305]: time="2025-05-13T00:43:05.437545922Z" level=info msg="Container to stop \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.437652 env[1305]: time="2025-05-13T00:43:05.437607941Z" level=info msg="Container to stop \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.437652 env[1305]: time="2025-05-13T00:43:05.437617148Z" level=info msg="Container to stop \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:43:05.460049 env[1305]: 
time="2025-05-13T00:43:05.459986978Z" level=info msg="shim disconnected" id=a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a May 13 00:43:05.460049 env[1305]: time="2025-05-13T00:43:05.460044830Z" level=warning msg="cleaning up after shim disconnected" id=a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a namespace=k8s.io May 13 00:43:05.460049 env[1305]: time="2025-05-13T00:43:05.460054718Z" level=info msg="cleaning up dead shim" May 13 00:43:05.466658 env[1305]: time="2025-05-13T00:43:05.466601121Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n" May 13 00:43:05.467027 env[1305]: time="2025-05-13T00:43:05.466995665Z" level=info msg="TearDown network for sandbox \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" successfully" May 13 00:43:05.467076 env[1305]: time="2025-05-13T00:43:05.467027977Z" level=info msg="StopPodSandbox for \"a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a\" returns successfully" May 13 00:43:05.558155 kubelet[2173]: I0513 00:43:05.558112 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsjxt\" (UniqueName: \"kubernetes.io/projected/aa0db833-e8cd-47ef-bdf0-450daea3948c-kube-api-access-gsjxt\") pod \"aa0db833-e8cd-47ef-bdf0-450daea3948c\" (UID: \"aa0db833-e8cd-47ef-bdf0-450daea3948c\") " May 13 00:43:05.558155 kubelet[2173]: I0513 00:43:05.558164 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa0db833-e8cd-47ef-bdf0-450daea3948c-cilium-config-path\") pod \"aa0db833-e8cd-47ef-bdf0-450daea3948c\" (UID: \"aa0db833-e8cd-47ef-bdf0-450daea3948c\") " May 13 00:43:05.558612 kubelet[2173]: I0513 00:43:05.558339 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-config-path\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559756 kubelet[2173]: I0513 00:43:05.559728 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-hostproc\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559812 kubelet[2173]: I0513 00:43:05.559760 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-net\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559812 kubelet[2173]: I0513 00:43:05.559783 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-lib-modules\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559812 kubelet[2173]: I0513 00:43:05.559805 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-hubble-tls\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: 
\"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559887 kubelet[2173]: I0513 00:43:05.559822 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-kernel\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559887 kubelet[2173]: I0513 00:43:05.559843 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cni-path\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559887 kubelet[2173]: I0513 00:43:05.559865 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5651aa4-975e-4270-bea2-7b9e9efccea2-clustermesh-secrets\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559887 kubelet[2173]: I0513 00:43:05.559884 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-cgroup\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.559980 kubelet[2173]: I0513 00:43:05.559904 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zhhp\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-kube-api-access-9zhhp\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") " May 13 00:43:05.561058 kubelet[2173]: I0513 00:43:05.561027 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:05.561366 kubelet[2173]: I0513 00:43:05.561331 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa0db833-e8cd-47ef-bdf0-450daea3948c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa0db833-e8cd-47ef-bdf0-450daea3948c" (UID: "aa0db833-e8cd-47ef-bdf0-450daea3948c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:43:05.561648 kubelet[2173]: I0513 00:43:05.561500 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.561762 kubelet[2173]: I0513 00:43:05.561518 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-hostproc" (OuterVolumeSpecName: "hostproc") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.561854 kubelet[2173]: I0513 00:43:05.561535 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cni-path" (OuterVolumeSpecName: "cni-path") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.561939 kubelet[2173]: I0513 00:43:05.561565 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.562026 kubelet[2173]: I0513 00:43:05.561598 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.562123 kubelet[2173]: I0513 00:43:05.561601 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.562219 kubelet[2173]: I0513 00:43:05.562023 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0db833-e8cd-47ef-bdf0-450daea3948c-kube-api-access-gsjxt" (OuterVolumeSpecName: "kube-api-access-gsjxt") pod "aa0db833-e8cd-47ef-bdf0-450daea3948c" (UID: "aa0db833-e8cd-47ef-bdf0-450daea3948c"). InnerVolumeSpecName "kube-api-access-gsjxt". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:05.562991 kubelet[2173]: I0513 00:43:05.562968 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-kube-api-access-9zhhp" (OuterVolumeSpecName: "kube-api-access-9zhhp") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "kube-api-access-9zhhp". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:05.563856 kubelet[2173]: I0513 00:43:05.563830 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:43:05.564318 kubelet[2173]: I0513 00:43:05.564286 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5651aa4-975e-4270-bea2-7b9e9efccea2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:43:05.616089 kubelet[2173]: I0513 00:43:05.616006 2173 scope.go:117] "RemoveContainer" containerID="d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c" May 13 00:43:05.617329 env[1305]: time="2025-05-13T00:43:05.617296640Z" level=info msg="RemoveContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\"" May 13 00:43:05.622876 env[1305]: time="2025-05-13T00:43:05.622854071Z" level=info msg="RemoveContainer for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" returns successfully" May 13 00:43:05.624155 kubelet[2173]: I0513 00:43:05.624099 2173 scope.go:117] "RemoveContainer" containerID="d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c" May 13 00:43:05.624839 env[1305]: time="2025-05-13T00:43:05.624739748Z" level=error msg="ContainerStatus for \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\": not found" May 13 00:43:05.625359 kubelet[2173]: E0513 00:43:05.625319 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\": not found" containerID="d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c" May 13 00:43:05.625524 kubelet[2173]: I0513 00:43:05.625446 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c"} err="failed to get container status \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\": rpc error: code = NotFound desc = an error occurred when try to find container \"d096c0149136039b1b3938c0a4656a1ade65f4a66b9a1e5acc498ce4bb79675c\": not found" May 13 00:43:05.626082 kubelet[2173]: I0513 00:43:05.626050 2173 scope.go:117] "RemoveContainer" containerID="f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca" May 13 00:43:05.627310 env[1305]: time="2025-05-13T00:43:05.627280658Z" level=info msg="RemoveContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\"" May 13 00:43:05.630873 env[1305]: time="2025-05-13T00:43:05.630830819Z" level=info msg="RemoveContainer for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" returns successfully" May 13 00:43:05.631077 kubelet[2173]: I0513 00:43:05.631049 2173 scope.go:117] "RemoveContainer" containerID="bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5" May 13 00:43:05.632279 env[1305]: time="2025-05-13T00:43:05.632238592Z" level=info msg="RemoveContainer for \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\"" May 13 00:43:05.635029 env[1305]: time="2025-05-13T00:43:05.635005164Z" level=info msg="RemoveContainer for \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\" returns successfully" May 13 00:43:05.635206 kubelet[2173]: I0513 00:43:05.635181 2173 scope.go:117] "RemoveContainer" containerID="ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000" May 13 00:43:05.636123 env[1305]: time="2025-05-13T00:43:05.636103726Z" level=info msg="RemoveContainer for \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\"" May 13 00:43:05.638823 env[1305]: time="2025-05-13T00:43:05.638794993Z" level=info 
msg="RemoveContainer for \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\" returns successfully" May 13 00:43:05.638976 kubelet[2173]: I0513 00:43:05.638945 2173 scope.go:117] "RemoveContainer" containerID="04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83" May 13 00:43:05.639895 env[1305]: time="2025-05-13T00:43:05.639858227Z" level=info msg="RemoveContainer for \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\"" May 13 00:43:05.642774 env[1305]: time="2025-05-13T00:43:05.642740140Z" level=info msg="RemoveContainer for \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\" returns successfully" May 13 00:43:05.642933 kubelet[2173]: I0513 00:43:05.642909 2173 scope.go:117] "RemoveContainer" containerID="565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04" May 13 00:43:05.643945 env[1305]: time="2025-05-13T00:43:05.643911060Z" level=info msg="RemoveContainer for \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\"" May 13 00:43:05.647903 env[1305]: time="2025-05-13T00:43:05.647858601Z" level=info msg="RemoveContainer for \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\" returns successfully" May 13 00:43:05.648074 kubelet[2173]: I0513 00:43:05.648046 2173 scope.go:117] "RemoveContainer" containerID="f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca" May 13 00:43:05.648275 env[1305]: time="2025-05-13T00:43:05.648217497Z" level=error msg="ContainerStatus for \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\": not found" May 13 00:43:05.648486 kubelet[2173]: E0513 00:43:05.648462 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\": not found" containerID="f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca" May 13 00:43:05.648565 kubelet[2173]: I0513 00:43:05.648500 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca"} err="failed to get container status \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\": rpc error: code = NotFound desc = an error occurred when try to find container \"f7ed7a681c061cd38e79f49c6c131e36d5c6318cecbc6325e62b78d2c7d6abca\": not found" May 13 00:43:05.648565 kubelet[2173]: I0513 00:43:05.648539 2173 scope.go:117] "RemoveContainer" containerID="bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5" May 13 00:43:05.648809 env[1305]: time="2025-05-13T00:43:05.648754966Z" level=error msg="ContainerStatus for \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\": not found" May 13 00:43:05.648937 kubelet[2173]: E0513 00:43:05.648907 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\": not found" containerID="bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5" May 13 
May 13 00:43:05.648973 kubelet[2173]: I0513 00:43:05.648936 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5"} err="failed to get container status \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc412ebb1e305f6af2c0c0eb5909a3ca54d5d52146ada4faa2d00ed54e68ffe5\": not found"
May 13 00:43:05.648973 kubelet[2173]: I0513 00:43:05.648960 2173 scope.go:117] "RemoveContainer" containerID="ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000"
May 13 00:43:05.649138 env[1305]: time="2025-05-13T00:43:05.649095557Z" level=error msg="ContainerStatus for \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\": not found"
May 13 00:43:05.649230 kubelet[2173]: E0513 00:43:05.649211 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\": not found" containerID="ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000"
May 13 00:43:05.649260 kubelet[2173]: I0513 00:43:05.649229 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000"} err="failed to get container status \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\": rpc error: code = NotFound desc = an error occurred when try to find container \"ae9c1f933e6f7c5d60ad83b1664c424708158de7a546d470863649178db90000\": not found"
May 13 00:43:05.649260 kubelet[2173]: I0513 00:43:05.649241 2173 scope.go:117] "RemoveContainer" containerID="04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83"
May 13 00:43:05.649501 env[1305]: time="2025-05-13T00:43:05.649432702Z" level=error msg="ContainerStatus for \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\": not found"
May 13 00:43:05.649645 kubelet[2173]: E0513 00:43:05.649621 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\": not found" containerID="04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83"
May 13 00:43:05.649702 kubelet[2173]: I0513 00:43:05.649647 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83"} err="failed to get container status \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\": rpc error: code = NotFound desc = an error occurred when try to find container \"04b62e8123099d24deeabddf77bd87f4703558e8ae27e42b3ed6edcada5f0a83\": not found"
May 13 00:43:05.649702 kubelet[2173]: I0513 00:43:05.649667 2173 scope.go:117] "RemoveContainer" containerID="565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04"
May 13 00:43:05.649865 env[1305]: time="2025-05-13T00:43:05.649820885Z" level=error msg="ContainerStatus for \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\": not found"
May 13 00:43:05.649947 kubelet[2173]: E0513 00:43:05.649934 2173 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\": not found" containerID="565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04"
May 13 00:43:05.649985 kubelet[2173]: I0513 00:43:05.649954 2173 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04"} err="failed to get container status \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\": rpc error: code = NotFound desc = an error occurred when try to find container \"565e0d01de6c13a4fe39690c9456c6bb674d022292ab750aaa25cf2ded867b04\": not found"
May 13 00:43:05.660301 kubelet[2173]: I0513 00:43:05.660234 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-xtables-lock\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") "
May 13 00:43:05.660301 kubelet[2173]: I0513 00:43:05.660286 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-run\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") "
May 13 00:43:05.660427 kubelet[2173]: I0513 00:43:05.660306 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-bpf-maps\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") "
May 13 00:43:05.660427 kubelet[2173]: I0513 00:43:05.660331 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:05.660427 kubelet[2173]: I0513 00:43:05.660363 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-etc-cni-netd\") pod \"a5651aa4-975e-4270-bea2-7b9e9efccea2\" (UID: \"a5651aa4-975e-4270-bea2-7b9e9efccea2\") "
May 13 00:43:05.660427 kubelet[2173]: I0513 00:43:05.660372 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.660427 kubelet[2173]: I0513 00:43:05.660398 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660421 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a5651aa4-975e-4270-bea2-7b9e9efccea2" (UID: "a5651aa4-975e-4270-bea2-7b9e9efccea2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660452 2173 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9zhhp\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-kube-api-access-9zhhp\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660469 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660479 2173 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660489 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660499 2173 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660572 kubelet[2173]: I0513 00:43:05.660511 2173 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gsjxt\" (UniqueName: \"kubernetes.io/projected/aa0db833-e8cd-47ef-bdf0-450daea3948c-kube-api-access-gsjxt\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660521 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa0db833-e8cd-47ef-bdf0-450daea3948c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660531 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a5651aa4-975e-4270-bea2-7b9e9efccea2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660541 2173 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660565 2173 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660575 2173 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a5651aa4-975e-4270-bea2-7b9e9efccea2-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660586 2173 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660596 2173 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660738 kubelet[2173]: I0513 00:43:05.660605 2173 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a5651aa4-975e-4270-bea2-7b9e9efccea2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.660916 kubelet[2173]: I0513 00:43:05.660617 2173 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.760790 kubelet[2173]: I0513 00:43:05.760728 2173 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a5651aa4-975e-4270-bea2-7b9e9efccea2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:43:05.932447 kubelet[2173]: I0513 00:43:05.932314 2173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0db833-e8cd-47ef-bdf0-450daea3948c" path="/var/lib/kubelet/pods/aa0db833-e8cd-47ef-bdf0-450daea3948c/volumes" May 13 00:43:06.352744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a-rootfs.mount: Deactivated successfully. May 13 00:43:06.352886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8a0837333a7e76e8e760650760ca85948d4cae02e57b3567df53373a9b95c4a-shm.mount: Deactivated successfully. May 13 00:43:06.352972 systemd[1]: var-lib-kubelet-pods-aa0db833\x2de8cd\x2d47ef\x2dbdf0\x2d450daea3948c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgsjxt.mount: Deactivated successfully. May 13 00:43:06.353054 systemd[1]: var-lib-kubelet-pods-a5651aa4\x2d975e\x2d4270\x2dbea2\x2d7b9e9efccea2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9zhhp.mount: Deactivated successfully. May 13 00:43:06.353134 systemd[1]: var-lib-kubelet-pods-a5651aa4\x2d975e\x2d4270\x2dbea2\x2d7b9e9efccea2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:43:06.353221 systemd[1]: var-lib-kubelet-pods-a5651aa4\x2d975e\x2d4270\x2dbea2\x2d7b9e9efccea2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:43:07.313572 sshd[3798]: pam_unix(sshd:session): session closed for user core May 13 00:43:07.315735 systemd[1]: Started sshd@23-10.0.0.51:22-10.0.0.1:34876.service. May 13 00:43:07.316685 systemd[1]: sshd@22-10.0.0.51:22-10.0.0.1:37678.service: Deactivated successfully. May 13 00:43:07.317700 systemd[1]: session-23.scope: Deactivated successfully. 
May 13 00:43:07.318118 systemd-logind[1290]: Session 23 logged out. Waiting for processes to exit.
May 13 00:43:07.318906 systemd-logind[1290]: Removed session 23.
May 13 00:43:07.349046 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:43:07.350096 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:43:07.353653 systemd-logind[1290]: New session 24 of user core.
May 13 00:43:07.354375 systemd[1]: Started session-24.scope.
May 13 00:43:07.932987 kubelet[2173]: I0513 00:43:07.932938 2173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" path="/var/lib/kubelet/pods/a5651aa4-975e-4270-bea2-7b9e9efccea2/volumes"
May 13 00:43:07.968369 sshd[3969]: pam_unix(sshd:session): session closed for user core
May 13 00:43:07.970324 systemd[1]: Started sshd@24-10.0.0.51:22-10.0.0.1:34890.service.
May 13 00:43:07.983625 systemd[1]: sshd@23-10.0.0.51:22-10.0.0.1:34876.service: Deactivated successfully.
May 13 00:43:07.984488 systemd[1]: session-24.scope: Deactivated successfully.
May 13 00:43:07.989637 kubelet[2173]: I0513 00:43:07.989576 2173 topology_manager.go:215] "Topology Admit Handler" podUID="fc2053ff-6c47-406c-824b-4da6aaca6c72" podNamespace="kube-system" podName="cilium-647dx"
May 13 00:43:07.989637 kubelet[2173]: E0513 00:43:07.989644 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="mount-cgroup"
May 13 00:43:07.989806 kubelet[2173]: E0513 00:43:07.989654 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa0db833-e8cd-47ef-bdf0-450daea3948c" containerName="cilium-operator"
May 13 00:43:07.989806 kubelet[2173]: E0513 00:43:07.989665 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="apply-sysctl-overwrites"
May 13 00:43:07.989806 kubelet[2173]: E0513 00:43:07.989672 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="mount-bpf-fs"
May 13 00:43:07.989806 kubelet[2173]: E0513 00:43:07.989679 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="clean-cilium-state"
May 13 00:43:07.989806 kubelet[2173]: E0513 00:43:07.989688 2173 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="cilium-agent"
May 13 00:43:07.989806 kubelet[2173]: I0513 00:43:07.989715 2173 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5651aa4-975e-4270-bea2-7b9e9efccea2" containerName="cilium-agent"
May 13 00:43:07.989806 kubelet[2173]: I0513 00:43:07.989723 2173 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0db833-e8cd-47ef-bdf0-450daea3948c" containerName="cilium-operator"
May 13 00:43:07.994977 systemd-logind[1290]: Session 24 logged out. Waiting for processes to exit.
May 13 00:43:08.000335 systemd-logind[1290]: Removed session 24.
May 13 00:43:08.005922 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:43:08.008061 sshd[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:43:08.017945 systemd[1]: Started session-25.scope.
May 13 00:43:08.018348 systemd-logind[1290]: New session 25 of user core.
May 13 00:43:08.167362 systemd[1]: Started sshd@25-10.0.0.51:22-10.0.0.1:34896.service.
May 13 00:43:08.171906 kubelet[2173]: E0513 00:43:08.171860 2173 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-nbx8q lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-647dx" podUID="fc2053ff-6c47-406c-824b-4da6aaca6c72"
May 13 00:43:08.173132 kubelet[2173]: I0513 00:43:08.173101 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-clustermesh-secrets\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173201 kubelet[2173]: I0513 00:43:08.173138 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-config-path\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173201 kubelet[2173]: I0513 00:43:08.173164 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cni-path\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173201 kubelet[2173]: I0513 00:43:08.173185 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-lib-modules\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173204 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-ipsec-secrets\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173216 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-net\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173230 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-cgroup\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173242 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-etc-cni-netd\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173253 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-xtables-lock\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173296 kubelet[2173]: I0513 00:43:08.173273 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-bpf-maps\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173494 kubelet[2173]: I0513 00:43:08.173286 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-kernel\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173494 kubelet[2173]: I0513 00:43:08.173298 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-run\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173494 kubelet[2173]: I0513 00:43:08.173312 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbx8q\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-kube-api-access-nbx8q\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173494 kubelet[2173]: I0513 00:43:08.173324 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-hostproc\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.173494 kubelet[2173]: I0513 00:43:08.173337 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-hubble-tls\") pod \"cilium-647dx\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") " pod="kube-system/cilium-647dx"
May 13 00:43:08.188155 sshd[3982]: pam_unix(sshd:session): session closed for user core
May 13 00:43:08.191703 systemd[1]: sshd@24-10.0.0.51:22-10.0.0.1:34890.service: Deactivated successfully.
May 13 00:43:08.193523 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:43:08.194611 systemd-logind[1290]: Session 25 logged out. Waiting for processes to exit.
May 13 00:43:08.200213 systemd-logind[1290]: Removed session 25.
May 13 00:43:08.218850 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 34896 ssh2: RSA SHA256:rB6W9bZE2VLaM16OfY/13txyT/mKzB4zHBxc/zNPaeA
May 13 00:43:08.220406 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 13 00:43:08.224905 systemd-logind[1290]: New session 26 of user core.
May 13 00:43:08.225227 systemd[1]: Started session-26.scope.
May 13 00:43:08.679744 kubelet[2173]: I0513 00:43:08.679659 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-etc-cni-netd\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.679744 kubelet[2173]: I0513 00:43:08.679713 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cni-path\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.679744 kubelet[2173]: I0513 00:43:08.679747 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-clustermesh-secrets\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679770 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-ipsec-secrets\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679795 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-config-path\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679813 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-run\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679824 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679835 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-hubble-tls\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680063 kubelet[2173]: I0513 00:43:08.679883 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-net\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.679909 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-cgroup\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.679930 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-hostproc\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.679950 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-lib-modules\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.679969 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-bpf-maps\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.679993 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-xtables-lock\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680288 kubelet[2173]: I0513 00:43:08.680014 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-kernel\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680526 kubelet[2173]: I0513 00:43:08.680041 2173 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbx8q\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-kube-api-access-nbx8q\") pod \"fc2053ff-6c47-406c-824b-4da6aaca6c72\" (UID: \"fc2053ff-6c47-406c-824b-4da6aaca6c72\") "
May 13 00:43:08.680526 kubelet[2173]: I0513 00:43:08.680075 2173 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.680643 kubelet[2173]: I0513 00:43:08.680617 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cni-path" (OuterVolumeSpecName: "cni-path") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.680855 kubelet[2173]: I0513 00:43:08.680812 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.680979 kubelet[2173]: I0513 00:43:08.680958 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681091 kubelet[2173]: I0513 00:43:08.681071 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-hostproc" (OuterVolumeSpecName: "hostproc") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681206 kubelet[2173]: I0513 00:43:08.681185 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681319 kubelet[2173]: I0513 00:43:08.681299 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681444 kubelet[2173]: I0513 00:43:08.681410 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681577 kubelet[2173]: I0513 00:43:08.681539 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.681688 kubelet[2173]: I0513 00:43:08.681668 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 00:43:08.683011 kubelet[2173]: I0513 00:43:08.682980 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:43:08.684855 kubelet[2173]: I0513 00:43:08.684830 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-kube-api-access-nbx8q" (OuterVolumeSpecName: "kube-api-access-nbx8q") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "kube-api-access-nbx8q". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 00:43:08.685127 systemd[1]: var-lib-kubelet-pods-fc2053ff\x2d6c47\x2d406c\x2d824b\x2d4da6aaca6c72-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 13 00:43:08.687516 kubelet[2173]: I0513 00:43:08.687100 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 00:43:08.687863 kubelet[2173]: I0513 00:43:08.687837 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:43:08.688176 kubelet[2173]: I0513 00:43:08.688123 2173 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fc2053ff-6c47-406c-824b-4da6aaca6c72" (UID: "fc2053ff-6c47-406c-824b-4da6aaca6c72"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 00:43:08.688734 systemd[1]: var-lib-kubelet-pods-fc2053ff\x2d6c47\x2d406c\x2d824b\x2d4da6aaca6c72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnbx8q.mount: Deactivated successfully.
May 13 00:43:08.688897 systemd[1]: var-lib-kubelet-pods-fc2053ff\x2d6c47\x2d406c\x2d824b\x2d4da6aaca6c72-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 13 00:43:08.780631 kubelet[2173]: I0513 00:43:08.780583 2173 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.780866 kubelet[2173]: I0513 00:43:08.780828 2173 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.780866 kubelet[2173]: I0513 00:43:08.780853 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.780866 kubelet[2173]: I0513 00:43:08.780864 2173 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780873 2173 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780882 2173 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780892 2173 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nbx8q\" (UniqueName: \"kubernetes.io/projected/fc2053ff-6c47-406c-824b-4da6aaca6c72-kube-api-access-nbx8q\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780902 2173 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780912 2173 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780921 2173 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780930 2173 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781019 kubelet[2173]: I0513 00:43:08.780936 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781293 kubelet[2173]: I0513 00:43:08.780944 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.781293 kubelet[2173]: I0513 00:43:08.780950 2173 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fc2053ff-6c47-406c-824b-4da6aaca6c72-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 00:43:08.977844 kubelet[2173]: E0513 00:43:08.977704 2173 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:43:09.281424 systemd[1]: var-lib-kubelet-pods-fc2053ff\x2d6c47\x2d406c\x2d824b\x2d4da6aaca6c72-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 13 00:43:09.732953 kubelet[2173]: I0513 00:43:09.732902 2173 topology_manager.go:215] "Topology Admit Handler" podUID="04663ee0-c8ec-4633-b895-47012d3c6894" podNamespace="kube-system" podName="cilium-lphj6"
May 13 00:43:09.791266 kubelet[2173]: I0513 00:43:09.791178 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-lib-modules\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791266 kubelet[2173]: I0513 00:43:09.791230 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-xtables-lock\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791266 kubelet[2173]: I0513 00:43:09.791249 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04663ee0-c8ec-4633-b895-47012d3c6894-clustermesh-secrets\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791278 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-bpf-maps\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791327 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-hostproc\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791361 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-etc-cni-netd\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791385 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-host-proc-sys-net\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791407 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9ch\" (UniqueName: \"kubernetes.io/projected/04663ee0-c8ec-4633-b895-47012d3c6894-kube-api-access-fk9ch\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791622 kubelet[2173]: I0513 00:43:09.791435 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-cilium-run\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791476 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/04663ee0-c8ec-4633-b895-47012d3c6894-cilium-ipsec-secrets\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791495 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-cilium-cgroup\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791520 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-host-proc-sys-kernel\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791581 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04663ee0-c8ec-4633-b895-47012d3c6894-hubble-tls\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791627 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04663ee0-c8ec-4633-b895-47012d3c6894-cni-path\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.791852 kubelet[2173]: I0513 00:43:09.791650 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04663ee0-c8ec-4633-b895-47012d3c6894-cilium-config-path\") pod \"cilium-lphj6\" (UID: \"04663ee0-c8ec-4633-b895-47012d3c6894\") " pod="kube-system/cilium-lphj6"
May 13 00:43:09.932036 kubelet[2173]: E0513 00:43:09.931989 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:09.938203 kubelet[2173]: I0513 00:43:09.938160 2173 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2053ff-6c47-406c-824b-4da6aaca6c72" path="/var/lib/kubelet/pods/fc2053ff-6c47-406c-824b-4da6aaca6c72/volumes"
May 13 00:43:10.038923 kubelet[2173]: E0513 00:43:10.038717 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:10.040028 env[1305]: time="2025-05-13T00:43:10.039970660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lphj6,Uid:04663ee0-c8ec-4633-b895-47012d3c6894,Namespace:kube-system,Attempt:0,}"
May 13 00:43:10.397262 env[1305]: time="2025-05-13T00:43:10.397149338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:43:10.397608 env[1305]: time="2025-05-13T00:43:10.397578035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:43:10.397777 env[1305]: time="2025-05-13T00:43:10.397748280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:43:10.398089 env[1305]: time="2025-05-13T00:43:10.398058252Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15 pid=4027 runtime=io.containerd.runc.v2
May 13 00:43:10.508141 env[1305]: time="2025-05-13T00:43:10.508060136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lphj6,Uid:04663ee0-c8ec-4633-b895-47012d3c6894,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\""
May 13 00:43:10.509976 kubelet[2173]: E0513 00:43:10.509879 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:10.513278 env[1305]: time="2025-05-13T00:43:10.513217336Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:43:10.592946 env[1305]: time="2025-05-13T00:43:10.587723955Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e\""
May 13 00:43:10.592946 env[1305]: time="2025-05-13T00:43:10.588579026Z" level=info msg="StartContainer for \"95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e\""
May 13 00:43:10.681696 env[1305]: time="2025-05-13T00:43:10.681567087Z" level=info msg="StartContainer for \"95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e\" returns successfully"
May 13 00:43:10.781199 env[1305]: time="2025-05-13T00:43:10.781067425Z" level=info msg="shim disconnected" id=95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e
May 13 00:43:10.781199 env[1305]: time="2025-05-13T00:43:10.781129864Z" level=warning msg="cleaning up after shim disconnected" id=95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e namespace=k8s.io
May 13 00:43:10.781199 env[1305]: time="2025-05-13T00:43:10.781142277Z" level=info msg="cleaning up dead shim"
May 13 00:43:10.800902 env[1305]: time="2025-05-13T00:43:10.800354915Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4112 runtime=io.containerd.runc.v2\n"
May 13 00:43:11.391688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95c1b79d616e0c9aca021007211981039a8a83ccafd6caaceb388544b3daeb2e-rootfs.mount: Deactivated successfully.
May 13 00:43:11.652235 kubelet[2173]: E0513 00:43:11.649908 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:11.654549 env[1305]: time="2025-05-13T00:43:11.654499270Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:43:11.784301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1030565792.mount: Deactivated successfully.
May 13 00:43:11.831902 env[1305]: time="2025-05-13T00:43:11.831664125Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39\""
May 13 00:43:11.833845 env[1305]: time="2025-05-13T00:43:11.833803425Z" level=info msg="StartContainer for \"32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39\""
May 13 00:43:11.948916 env[1305]: time="2025-05-13T00:43:11.947549092Z" level=info msg="StartContainer for \"32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39\" returns successfully"
May 13 00:43:12.000913 env[1305]: time="2025-05-13T00:43:12.000266851Z" level=info msg="shim disconnected" id=32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39
May 13 00:43:12.000913 env[1305]: time="2025-05-13T00:43:12.000336364Z" level=warning msg="cleaning up after shim disconnected" id=32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39 namespace=k8s.io
May 13 00:43:12.000913 env[1305]: time="2025-05-13T00:43:12.000349219Z" level=info msg="cleaning up dead shim"
May 13 00:43:12.016655 env[1305]: time="2025-05-13T00:43:12.016569529Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4174 runtime=io.containerd.runc.v2\n"
May 13 00:43:12.387637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ae8fb854bf464b59b0f7f7828d6e337d4d5cf2099de44acfc06ad80f901c39-rootfs.mount: Deactivated successfully.
May 13 00:43:12.655699 kubelet[2173]: E0513 00:43:12.655355 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:12.661759 env[1305]: time="2025-05-13T00:43:12.661701893Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:43:12.686707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029795432.mount: Deactivated successfully.
May 13 00:43:12.704411 env[1305]: time="2025-05-13T00:43:12.704311769Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34\""
May 13 00:43:12.706608 env[1305]: time="2025-05-13T00:43:12.706562971Z" level=info msg="StartContainer for \"389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34\""
May 13 00:43:12.796698 env[1305]: time="2025-05-13T00:43:12.795873691Z" level=info msg="StartContainer for \"389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34\" returns successfully"
May 13 00:43:12.845088 env[1305]: time="2025-05-13T00:43:12.842686548Z" level=info msg="shim disconnected" id=389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34
May 13 00:43:12.845088 env[1305]: time="2025-05-13T00:43:12.842750480Z" level=warning msg="cleaning up after shim disconnected" id=389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34 namespace=k8s.io
May 13 00:43:12.845088 env[1305]: time="2025-05-13T00:43:12.842761922Z" level=info msg="cleaning up dead shim"
May 13 00:43:12.851934 env[1305]: time="2025-05-13T00:43:12.851854625Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4231 runtime=io.containerd.runc.v2\n"
May 13 00:43:13.388361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-389fbca6868d7b3d9a6a9eebbe7500074d3cf5f7afbe14c7b2585feac8e9ef34-rootfs.mount: Deactivated successfully.
May 13 00:43:13.661340 kubelet[2173]: E0513 00:43:13.660958 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:13.663165 env[1305]: time="2025-05-13T00:43:13.663130996Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:43:13.679737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678355481.mount: Deactivated successfully.
May 13 00:43:13.680210 env[1305]: time="2025-05-13T00:43:13.680152831Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a\"" May 13 00:43:13.680958 env[1305]: time="2025-05-13T00:43:13.680924741Z" level=info msg="StartContainer for \"4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a\"" May 13 00:43:13.731221 env[1305]: time="2025-05-13T00:43:13.731157516Z" level=info msg="StartContainer for \"4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a\" returns successfully" May 13 00:43:13.752407 env[1305]: time="2025-05-13T00:43:13.752347763Z" level=info msg="shim disconnected" id=4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a May 13 00:43:13.752407 env[1305]: time="2025-05-13T00:43:13.752405353Z" level=warning msg="cleaning up after shim disconnected" id=4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a namespace=k8s.io May 13 00:43:13.752407 env[1305]: time="2025-05-13T00:43:13.752418459Z" level=info msg="cleaning up dead shim" May 13 00:43:13.759113 env[1305]: time="2025-05-13T00:43:13.759039054Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:43:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4286 runtime=io.containerd.runc.v2\n" May 13 00:43:13.979109 kubelet[2173]: E0513 00:43:13.978772 2173 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:43:14.388168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eea310a00dec23639d39f8fe60e94d229a982f92eae80795ca1cf69e82ad86a-rootfs.mount: Deactivated successfully. May 13 00:43:14.666914 kubelet[2173]: E0513 00:43:14.666755 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:43:14.670403 env[1305]: time="2025-05-13T00:43:14.669697101Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:43:14.697285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439167917.mount: Deactivated successfully. May 13 00:43:14.700640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683700006.mount: Deactivated successfully. 
May 13 00:43:14.703469 env[1305]: time="2025-05-13T00:43:14.703390074Z" level=info msg="CreateContainer within sandbox \"41b22c1eed508dfcfbd8dee24fb9d7ae2d31be174df9ad9b67d10fec04e25c15\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"91ae4fcbbb87b8621153d0e705c41381d4f64f754dd0e91fd4d8c2c50315079e\""
May 13 00:43:14.704406 env[1305]: time="2025-05-13T00:43:14.704137237Z" level=info msg="StartContainer for \"91ae4fcbbb87b8621153d0e705c41381d4f64f754dd0e91fd4d8c2c50315079e\""
May 13 00:43:14.762684 env[1305]: time="2025-05-13T00:43:14.762609135Z" level=info msg="StartContainer for \"91ae4fcbbb87b8621153d0e705c41381d4f64f754dd0e91fd4d8c2c50315079e\" returns successfully"
May 13 00:43:15.079581 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 13 00:43:15.672727 kubelet[2173]: E0513 00:43:15.672674 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:15.716921 kubelet[2173]: I0513 00:43:15.716845 2173 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lphj6" podStartSLOduration=6.716823812 podStartE2EDuration="6.716823812s" podCreationTimestamp="2025-05-13 00:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:43:15.716390507 +0000 UTC m=+91.876350159" watchObservedRunningTime="2025-05-13 00:43:15.716823812 +0000 UTC m=+91.876783454"
May 13 00:43:16.493099 kubelet[2173]: I0513 00:43:16.493016 2173 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:43:16Z","lastTransitionTime":"2025-05-13T00:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 00:43:16.673569 kubelet[2173]: E0513 00:43:16.673516 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:17.676582 kubelet[2173]: E0513 00:43:17.676514 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:17.931109 kubelet[2173]: E0513 00:43:17.930914 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:18.331311 systemd-networkd[1081]: lxc_health: Link UP
May 13 00:43:18.349193 systemd-networkd[1081]: lxc_health: Gained carrier
May 13 00:43:18.351015 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 13 00:43:19.564849 systemd-networkd[1081]: lxc_health: Gained IPv6LL
May 13 00:43:20.040929 kubelet[2173]: E0513 00:43:20.040893 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:20.680888 kubelet[2173]: E0513 00:43:20.680859 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
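The setters.go:580 entry records the node flipping to NotReady with reason KubeletNotReady because the CNI plugin was still uninitialized; the condition clears once cilium-agent installs its CNI configuration and the lxc_health interface comes up. That condition can be read back with client-go; the following is a minimal sketch, assuming a kubeconfig at the default path and the node name localhost taken from the log.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config; use rest.InClusterConfig() inside a pod.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Node name from the log; substitute your own.
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // The "Node became not ready" entry is this condition going False with
        // reason KubeletNotReady while the CNI plugin is uninitialized.
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%s\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }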
May 13 00:43:24.930163 kubelet[2173]: E0513 00:43:24.930116 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:43:25.191062 sshd[3996]: pam_unix(sshd:session): session closed for user core
May 13 00:43:25.193406 systemd[1]: sshd@25-10.0.0.51:22-10.0.0.1:34896.service: Deactivated successfully.
May 13 00:43:25.194451 systemd-logind[1290]: Session 26 logged out. Waiting for processes to exit.
May 13 00:43:25.194492 systemd[1]: session-26.scope: Deactivated successfully.
May 13 00:43:25.195392 systemd-logind[1290]: Removed session 26.