May 15 00:55:40.410547 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025 May 15 00:55:40.410566 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:40.410574 kernel: BIOS-provided physical RAM map: May 15 00:55:40.410580 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 00:55:40.410585 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 00:55:40.410591 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 00:55:40.410597 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 15 00:55:40.410603 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 15 00:55:40.410610 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 00:55:40.410615 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 15 00:55:40.410620 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 00:55:40.410626 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 00:55:40.410631 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 00:55:40.410637 kernel: NX (Execute Disable) protection: active May 15 00:55:40.410645 kernel: SMBIOS 2.8 present. May 15 00:55:40.410652 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 15 00:55:40.410657 kernel: Hypervisor detected: KVM May 15 00:55:40.410663 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 00:55:40.410669 kernel: kvm-clock: cpu 0, msr 6a196001, primary cpu clock May 15 00:55:40.410675 kernel: kvm-clock: using sched offset of 2428900762 cycles May 15 00:55:40.410682 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 00:55:40.410688 kernel: tsc: Detected 2794.748 MHz processor May 15 00:55:40.410694 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 00:55:40.410702 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 00:55:40.410708 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 15 00:55:40.410714 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 00:55:40.410720 kernel: Using GB pages for direct mapping May 15 00:55:40.410726 kernel: ACPI: Early table checksum verification disabled May 15 00:55:40.410732 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 15 00:55:40.410739 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410745 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410751 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410758 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 15 00:55:40.410764 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410770 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410776 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410782 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:55:40.410788 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 15 00:55:40.410794 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 15 00:55:40.410800 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 15 00:55:40.410810 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 15 00:55:40.410816 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 15 00:55:40.410823 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 15 00:55:40.410829 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 15 00:55:40.410836 kernel: No NUMA configuration found May 15 00:55:40.410842 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 15 00:55:40.410850 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 15 00:55:40.410856 kernel: Zone ranges: May 15 00:55:40.410863 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 00:55:40.410869 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 15 00:55:40.410876 kernel: Normal empty May 15 00:55:40.410882 kernel: Movable zone start for each node May 15 00:55:40.410888 kernel: Early memory node ranges May 15 00:55:40.410895 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 00:55:40.410902 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 15 00:55:40.410908 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 15 00:55:40.410916 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:55:40.410922 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 00:55:40.410929 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 15 00:55:40.410935 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 00:55:40.410942 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 00:55:40.410948 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 00:55:40.410955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 00:55:40.410969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 00:55:40.410975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 00:55:40.410983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 00:55:40.410989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 00:55:40.410996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 00:55:40.411002 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 00:55:40.411009 kernel: TSC deadline timer available May 15 00:55:40.411016 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 00:55:40.411023 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 00:55:40.411029 kernel: kvm-guest: setup PV sched yield May 15 00:55:40.411035 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 15 00:55:40.411043 kernel: Booting paravirtualized kernel on KVM May 15 00:55:40.411050 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 00:55:40.411056 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 15 00:55:40.411063 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 May 15 00:55:40.411069 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 15 00:55:40.411075 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 00:55:40.411082 kernel: kvm-guest: setup async PF for cpu 0 May 15 00:55:40.411088 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 15 00:55:40.411095 kernel: kvm-guest: PV spinlocks enabled May 15 00:55:40.411102 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 00:55:40.411109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 15 00:55:40.411115 kernel: Policy zone: DMA32 May 15 00:55:40.411123 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:40.411130 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:55:40.411136 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:55:40.411143 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:55:40.411149 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:55:40.411157 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved) May 15 00:55:40.411164 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:55:40.411170 kernel: ftrace: allocating 34584 entries in 136 pages May 15 00:55:40.411177 kernel: ftrace: allocated 136 pages with 2 groups May 15 00:55:40.411183 kernel: rcu: Hierarchical RCU implementation. May 15 00:55:40.411190 kernel: rcu: RCU event tracing is enabled. May 15 00:55:40.411197 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:55:40.411204 kernel: Rude variant of Tasks RCU enabled. May 15 00:55:40.411210 kernel: Tracing variant of Tasks RCU enabled. May 15 00:55:40.411218 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 00:55:40.411225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:55:40.411231 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 00:55:40.411238 kernel: random: crng init done May 15 00:55:40.411244 kernel: Console: colour VGA+ 80x25 May 15 00:55:40.411250 kernel: printk: console [ttyS0] enabled May 15 00:55:40.411257 kernel: ACPI: Core revision 20210730 May 15 00:55:40.411263 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 00:55:40.411270 kernel: APIC: Switch to symmetric I/O mode setup May 15 00:55:40.411277 kernel: x2apic enabled May 15 00:55:40.411284 kernel: Switched APIC routing to physical x2apic. May 15 00:55:40.411291 kernel: kvm-guest: setup PV IPIs May 15 00:55:40.411297 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 00:55:40.411303 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 00:55:40.411310 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 15 00:55:40.411317 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 00:55:40.411323 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 00:55:40.411330 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 00:55:40.411342 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 00:55:40.411349 kernel: Spectre V2 : Mitigation: Retpolines May 15 00:55:40.411356 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 00:55:40.411364 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 00:55:40.411370 kernel: RETBleed: Mitigation: untrained return thunk May 15 00:55:40.411377 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 00:55:40.411384 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 00:55:40.411391 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 00:55:40.411398 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 00:55:40.411406 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 00:55:40.411413 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 00:55:40.411420 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 00:55:40.411437 kernel: Freeing SMP alternatives memory: 32K May 15 00:55:40.411444 kernel: pid_max: default: 32768 minimum: 301 May 15 00:55:40.411451 kernel: LSM: Security Framework initializing May 15 00:55:40.411457 kernel: SELinux: Initializing. May 15 00:55:40.411464 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:55:40.411473 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:55:40.411480 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 00:55:40.411487 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 00:55:40.411493 kernel: ... version: 0 May 15 00:55:40.411500 kernel: ... bit width: 48 May 15 00:55:40.411507 kernel: ... generic registers: 6 May 15 00:55:40.411514 kernel: ... value mask: 0000ffffffffffff May 15 00:55:40.411521 kernel: ... max period: 00007fffffffffff May 15 00:55:40.411527 kernel: ... fixed-purpose events: 0 May 15 00:55:40.411535 kernel: ... event mask: 000000000000003f May 15 00:55:40.411542 kernel: signal: max sigframe size: 1776 May 15 00:55:40.411549 kernel: rcu: Hierarchical SRCU implementation. May 15 00:55:40.411555 kernel: smp: Bringing up secondary CPUs ... May 15 00:55:40.411562 kernel: x86: Booting SMP configuration: May 15 00:55:40.411569 kernel: .... 
node #0, CPUs: #1 May 15 00:55:40.411576 kernel: kvm-clock: cpu 1, msr 6a196041, secondary cpu clock May 15 00:55:40.411582 kernel: kvm-guest: setup async PF for cpu 1 May 15 00:55:40.411589 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 15 00:55:40.411597 kernel: #2 May 15 00:55:40.411604 kernel: kvm-clock: cpu 2, msr 6a196081, secondary cpu clock May 15 00:55:40.411611 kernel: kvm-guest: setup async PF for cpu 2 May 15 00:55:40.411618 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 15 00:55:40.411624 kernel: #3 May 15 00:55:40.411631 kernel: kvm-clock: cpu 3, msr 6a1960c1, secondary cpu clock May 15 00:55:40.411638 kernel: kvm-guest: setup async PF for cpu 3 May 15 00:55:40.411644 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 15 00:55:40.411651 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:55:40.411659 kernel: smpboot: Max logical packages: 1 May 15 00:55:40.411666 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 15 00:55:40.411673 kernel: devtmpfs: initialized May 15 00:55:40.411680 kernel: x86/mm: Memory block size: 128MB May 15 00:55:40.411687 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:55:40.411694 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:55:40.411700 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:55:40.411707 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:55:40.411714 kernel: audit: initializing netlink subsys (disabled) May 15 00:55:40.411721 kernel: audit: type=2000 audit(1747270539.669:1): state=initialized audit_enabled=0 res=1 May 15 00:55:40.411728 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:55:40.411735 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 00:55:40.411742 kernel: cpuidle: using governor menu May 15 00:55:40.411749 kernel: ACPI: bus type PCI registered May 15 00:55:40.411756 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:55:40.411762 kernel: dca service started, version 1.12.1 May 15 00:55:40.411769 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 15 00:55:40.411776 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 15 00:55:40.411783 kernel: PCI: Using configuration type 1 for base access May 15 00:55:40.411791 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 00:55:40.411798 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:55:40.411805 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:55:40.411812 kernel: ACPI: Added _OSI(Module Device) May 15 00:55:40.411819 kernel: ACPI: Added _OSI(Processor Device) May 15 00:55:40.411825 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:55:40.411832 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:55:40.411839 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 00:55:40.411846 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 00:55:40.411854 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 00:55:40.411861 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:55:40.411867 kernel: ACPI: Interpreter enabled May 15 00:55:40.411874 kernel: ACPI: PM: (supports S0 S3 S5) May 15 00:55:40.411881 kernel: ACPI: Using IOAPIC for interrupt routing May 15 00:55:40.411888 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 00:55:40.411895 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 00:55:40.411901 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:55:40.412026 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:55:40.412100 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 00:55:40.412168 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 00:55:40.412177 kernel: PCI host bridge to bus 0000:00 May 15 00:55:40.412250 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 00:55:40.412312 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 00:55:40.412377 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 00:55:40.412456 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 15 00:55:40.412518 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 00:55:40.412579 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 15 00:55:40.412639 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:55:40.412719 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 00:55:40.412796 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 00:55:40.412870 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 15 00:55:40.412938 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 15 00:55:40.413017 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 15 00:55:40.413085 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 00:55:40.413163 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:55:40.413232 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 15 00:55:40.413302 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 15 00:55:40.413376 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 15 00:55:40.413465 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 00:55:40.413536 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 15 00:55:40.413604 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 15 00:55:40.413671 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 15 00:55:40.413745 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 00:55:40.413814 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 15 00:55:40.413884 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 15 00:55:40.413953 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 15 00:55:40.414033 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 15 00:55:40.414123 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 00:55:40.414192 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 00:55:40.414266 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 00:55:40.414335 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 15 00:55:40.414405 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 15 00:55:40.414491 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 00:55:40.414561 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 15 00:55:40.414571 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 00:55:40.414578 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 00:55:40.414585 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 00:55:40.414592 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 00:55:40.414601 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 00:55:40.414608 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 00:55:40.414615 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 00:55:40.414622 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 00:55:40.414644 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 00:55:40.414658 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 00:55:40.414671 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 00:55:40.414679 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 00:55:40.414685 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 00:55:40.414694 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 00:55:40.414701 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 00:55:40.414707 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 00:55:40.414714 kernel: iommu: Default domain type: Translated May 15 00:55:40.414721 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 00:55:40.415451 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 00:55:40.415529 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 00:55:40.415597 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 00:55:40.415606 kernel: vgaarb: loaded May 15 00:55:40.415616 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 00:55:40.415623 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 00:55:40.415630 kernel: PTP clock support registered May 15 00:55:40.415637 kernel: PCI: Using ACPI for IRQ routing May 15 00:55:40.415644 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 00:55:40.415651 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 00:55:40.415658 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 15 00:55:40.415665 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 00:55:40.415671 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 00:55:40.415680 kernel: clocksource: Switched to clocksource kvm-clock May 15 00:55:40.415686 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:55:40.415693 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:55:40.415700 kernel: pnp: PnP ACPI init May 15 00:55:40.415775 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 00:55:40.415785 kernel: pnp: PnP ACPI: found 6 devices May 15 00:55:40.415792 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 00:55:40.415799 kernel: NET: Registered PF_INET protocol family May 15 00:55:40.415809 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:55:40.415816 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:55:40.415823 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:55:40.415830 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:55:40.415837 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 00:55:40.415844 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:55:40.415851 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:55:40.415858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:55:40.415865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:55:40.415873 kernel: NET: Registered PF_XDP protocol family May 15 00:55:40.415935 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 00:55:40.416006 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 00:55:40.416066 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 00:55:40.416124 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 15 00:55:40.416184 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 15 00:55:40.416243 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 15 00:55:40.416252 kernel: PCI: CLS 0 bytes, default 64 May 15 00:55:40.416261 kernel: Initialise system trusted keyrings May 15 00:55:40.416268 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:55:40.416275 kernel: Key type asymmetric registered May 15 00:55:40.416282 kernel: Asymmetric key parser 'x509' registered May 15 00:55:40.416289 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 00:55:40.416296 kernel: io scheduler mq-deadline registered May 15 00:55:40.416302 kernel: io scheduler kyber registered May 15 00:55:40.416309 kernel: io scheduler bfq registered May 15 00:55:40.416316 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 00:55:40.416325 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 00:55:40.416332 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 
00:55:40.416339 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 15 00:55:40.416346 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:55:40.416353 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 00:55:40.416360 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 00:55:40.416367 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 00:55:40.416375 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 00:55:40.416383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 00:55:40.416469 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 00:55:40.416534 kernel: rtc_cmos 00:04: registered as rtc0 May 15 00:55:40.416595 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:55:39 UTC (1747270539) May 15 00:55:40.416658 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 15 00:55:40.416667 kernel: NET: Registered PF_INET6 protocol family May 15 00:55:40.416674 kernel: Segment Routing with IPv6 May 15 00:55:40.416681 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:55:40.416688 kernel: NET: Registered PF_PACKET protocol family May 15 00:55:40.416698 kernel: Key type dns_resolver registered May 15 00:55:40.416705 kernel: IPI shorthand broadcast: enabled May 15 00:55:40.416712 kernel: sched_clock: Marking stable (393362778, 100884563)->(542605403, -48358062) May 15 00:55:40.416719 kernel: registered taskstats version 1 May 15 00:55:40.416726 kernel: Loading compiled-in X.509 certificates May 15 00:55:40.416732 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd' May 15 00:55:40.416739 kernel: Key type .fscrypt registered May 15 00:55:40.416746 kernel: Key type fscrypt-provisioning registered May 15 00:55:40.416753 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 00:55:40.416761 kernel: ima: Allocated hash algorithm: sha1 May 15 00:55:40.416768 kernel: ima: No architecture policies found May 15 00:55:40.416775 kernel: clk: Disabling unused clocks May 15 00:55:40.416782 kernel: Freeing unused kernel image (initmem) memory: 47456K May 15 00:55:40.416788 kernel: Write protecting the kernel read-only data: 28672k May 15 00:55:40.416795 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 00:55:40.416802 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 00:55:40.416809 kernel: Run /init as init process May 15 00:55:40.416816 kernel: with arguments: May 15 00:55:40.416824 kernel: /init May 15 00:55:40.416830 kernel: with environment: May 15 00:55:40.416837 kernel: HOME=/ May 15 00:55:40.416844 kernel: TERM=linux May 15 00:55:40.416850 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:55:40.416859 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 00:55:40.416868 systemd[1]: Detected virtualization kvm. May 15 00:55:40.416876 systemd[1]: Detected architecture x86-64. May 15 00:55:40.416884 systemd[1]: Running in initrd. May 15 00:55:40.416892 systemd[1]: No hostname configured, using default hostname. May 15 00:55:40.416899 systemd[1]: Hostname set to . 
May 15 00:55:40.416906 systemd[1]: Initializing machine ID from VM UUID. May 15 00:55:40.416914 systemd[1]: Queued start job for default target initrd.target. May 15 00:55:40.416921 systemd[1]: Started systemd-ask-password-console.path. May 15 00:55:40.416928 systemd[1]: Reached target cryptsetup.target. May 15 00:55:40.416935 systemd[1]: Reached target paths.target. May 15 00:55:40.416944 systemd[1]: Reached target slices.target. May 15 00:55:40.416965 systemd[1]: Reached target swap.target. May 15 00:55:40.416974 systemd[1]: Reached target timers.target. May 15 00:55:40.416982 systemd[1]: Listening on iscsid.socket. May 15 00:55:40.416989 systemd[1]: Listening on iscsiuio.socket. May 15 00:55:40.416998 systemd[1]: Listening on systemd-journald-audit.socket. May 15 00:55:40.417006 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 00:55:40.417014 systemd[1]: Listening on systemd-journald.socket. May 15 00:55:40.417021 systemd[1]: Listening on systemd-networkd.socket. May 15 00:55:40.417029 systemd[1]: Listening on systemd-udevd-control.socket. May 15 00:55:40.417037 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 00:55:40.417044 systemd[1]: Reached target sockets.target. May 15 00:55:40.417052 systemd[1]: Starting kmod-static-nodes.service... May 15 00:55:40.417059 systemd[1]: Finished network-cleanup.service. May 15 00:55:40.417068 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:55:40.417076 systemd[1]: Starting systemd-journald.service... May 15 00:55:40.417083 systemd[1]: Starting systemd-modules-load.service... May 15 00:55:40.417091 systemd[1]: Starting systemd-resolved.service... May 15 00:55:40.417099 systemd[1]: Starting systemd-vconsole-setup.service... May 15 00:55:40.417106 systemd[1]: Finished kmod-static-nodes.service. May 15 00:55:40.417116 systemd-journald[198]: Journal started May 15 00:55:40.417179 systemd-journald[198]: Runtime Journal (/run/log/journal/979e4d0d6f80469ab2b524e6f4c5b334) is 6.0M, max 48.5M, 42.5M free. May 15 00:55:40.415042 systemd-modules-load[199]: Inserted module 'overlay' May 15 00:55:40.450408 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:55:40.450445 systemd[1]: Started systemd-journald.service. May 15 00:55:40.450464 kernel: audit: type=1130 audit(1747270540.445:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.428893 systemd-resolved[200]: Positive Trust Anchors: May 15 00:55:40.454636 kernel: audit: type=1130 audit(1747270540.450:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.428905 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:55:40.459504 kernel: audit: type=1130 audit(1747270540.454:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.428931 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 00:55:40.468269 kernel: audit: type=1130 audit(1747270540.459:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.431027 systemd-resolved[200]: Defaulting to hostname 'linux'. May 15 00:55:40.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.450819 systemd[1]: Started systemd-resolved.service. May 15 00:55:40.474459 kernel: audit: type=1130 audit(1747270540.468:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.455142 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:55:40.459998 systemd[1]: Finished systemd-vconsole-setup.service. May 15 00:55:40.468761 systemd[1]: Reached target nss-lookup.target. May 15 00:55:40.474311 systemd[1]: Starting dracut-cmdline-ask.service... May 15 00:55:40.475302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 00:55:40.482335 systemd-modules-load[199]: Inserted module 'br_netfilter' May 15 00:55:40.483218 kernel: Bridge firewalling registered May 15 00:55:40.483637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 00:55:40.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.488459 kernel: audit: type=1130 audit(1747270540.483:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.489321 systemd[1]: Finished dracut-cmdline-ask.service. May 15 00:55:40.493764 kernel: audit: type=1130 audit(1747270540.489:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:40.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.490480 systemd[1]: Starting dracut-cmdline.service... May 15 00:55:40.498837 dracut-cmdline[215]: dracut-dracut-053 May 15 00:55:40.500781 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:55:40.516456 kernel: SCSI subsystem initialized May 15 00:55:40.526878 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:55:40.526905 kernel: device-mapper: uevent: version 1.0.3 May 15 00:55:40.528163 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 00:55:40.530786 systemd-modules-load[199]: Inserted module 'dm_multipath' May 15 00:55:40.532317 systemd[1]: Finished systemd-modules-load.service. May 15 00:55:40.537585 kernel: audit: type=1130 audit(1747270540.532:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.533925 systemd[1]: Starting systemd-sysctl.service... May 15 00:55:40.543366 systemd[1]: Finished systemd-sysctl.service. May 15 00:55:40.547760 kernel: audit: type=1130 audit(1747270540.543:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.562460 kernel: Loading iSCSI transport class v2.0-870. May 15 00:55:40.577473 kernel: iscsi: registered transport (tcp) May 15 00:55:40.598456 kernel: iscsi: registered transport (qla4xxx) May 15 00:55:40.598503 kernel: QLogic iSCSI HBA Driver May 15 00:55:40.620470 systemd[1]: Finished dracut-cmdline.service. May 15 00:55:40.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:40.621533 systemd[1]: Starting dracut-pre-udev.service... 
May 15 00:55:40.674446 kernel: raid6: avx2x4 gen() 30818 MB/s
May 15 00:55:40.691444 kernel: raid6: avx2x4 xor() 8171 MB/s
May 15 00:55:40.708444 kernel: raid6: avx2x2 gen() 28233 MB/s
May 15 00:55:40.725441 kernel: raid6: avx2x2 xor() 18021 MB/s
May 15 00:55:40.742442 kernel: raid6: avx2x1 gen() 24276 MB/s
May 15 00:55:40.759454 kernel: raid6: avx2x1 xor() 14882 MB/s
May 15 00:55:40.776451 kernel: raid6: sse2x4 gen() 14658 MB/s
May 15 00:55:40.793444 kernel: raid6: sse2x4 xor() 7502 MB/s
May 15 00:55:40.810446 kernel: raid6: sse2x2 gen() 16458 MB/s
May 15 00:55:40.827442 kernel: raid6: sse2x2 xor() 9832 MB/s
May 15 00:55:40.844444 kernel: raid6: sse2x1 gen() 12555 MB/s
May 15 00:55:40.861818 kernel: raid6: sse2x1 xor() 7806 MB/s
May 15 00:55:40.861838 kernel: raid6: using algorithm avx2x4 gen() 30818 MB/s
May 15 00:55:40.861848 kernel: raid6: .... xor() 8171 MB/s, rmw enabled
May 15 00:55:40.862530 kernel: raid6: using avx2x2 recovery algorithm
May 15 00:55:40.874441 kernel: xor: automatically using best checksumming function avx
May 15 00:55:40.962457 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 15 00:55:40.971707 systemd[1]: Finished dracut-pre-udev.service.
May 15 00:55:40.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.973000 audit: BPF prog-id=7 op=LOAD
May 15 00:55:40.973000 audit: BPF prog-id=8 op=LOAD
May 15 00:55:40.974052 systemd[1]: Starting systemd-udevd.service...
May 15 00:55:40.986625 systemd-udevd[399]: Using default interface naming scheme 'v252'.
May 15 00:55:40.991058 systemd[1]: Started systemd-udevd.service.
May 15 00:55:40.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:40.992549 systemd[1]: Starting dracut-pre-trigger.service...
May 15 00:55:41.005314 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
May 15 00:55:41.033394 systemd[1]: Finished dracut-pre-trigger.service.
May 15 00:55:41.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:41.035035 systemd[1]: Starting systemd-udev-trigger.service...
May 15 00:55:41.069687 systemd[1]: Finished systemd-udev-trigger.service.
May 15 00:55:41.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:41.097640 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 00:55:41.106016 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:55:41.106029 kernel: GPT:9289727 != 19775487
May 15 00:55:41.106038 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:55:41.106047 kernel: GPT:9289727 != 19775487
May 15 00:55:41.106055 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:55:41.106067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:41.106076 kernel: cryptd: max_cpu_qlen set to 1000
May 15 00:55:41.119460 kernel: AVX2 version of gcm_enc/dec engaged.
May 15 00:55:41.119508 kernel: AES CTR mode by8 optimization enabled
May 15 00:55:41.131440 kernel: libata version 3.00 loaded.
May 15 00:55:41.132441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (445)
May 15 00:55:41.139555 kernel: ahci 0000:00:1f.2: version 3.0
May 15 00:55:41.144239 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 15 00:55:41.144253 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 15 00:55:41.144342 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 15 00:55:41.144418 kernel: scsi host0: ahci
May 15 00:55:41.144528 kernel: scsi host1: ahci
May 15 00:55:41.144613 kernel: scsi host2: ahci
May 15 00:55:41.144695 kernel: scsi host3: ahci
May 15 00:55:41.144781 kernel: scsi host4: ahci
May 15 00:55:41.144864 kernel: scsi host5: ahci
May 15 00:55:41.144960 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 15 00:55:41.144971 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 15 00:55:41.144980 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 15 00:55:41.144988 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 15 00:55:41.144998 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 15 00:55:41.145007 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 15 00:55:41.139707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 00:55:41.177566 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 00:55:41.181632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 15 00:55:41.185353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 00:55:41.188969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 00:55:41.190068 systemd[1]: Starting disk-uuid.service...
May 15 00:55:41.199727 disk-uuid[526]: Primary Header is updated.
May 15 00:55:41.199727 disk-uuid[526]: Secondary Entries is updated.
May 15 00:55:41.199727 disk-uuid[526]: Secondary Header is updated.
May 15 00:55:41.204457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:41.207452 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:41.456911 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 15 00:55:41.456984 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 15 00:55:41.458914 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 15 00:55:41.458927 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 15 00:55:41.459461 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 15 00:55:41.460446 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 15 00:55:41.461455 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 15 00:55:41.462870 kernel: ata3.00: applying bridge limits
May 15 00:55:41.462885 kernel: ata3.00: configured for UDMA/100
May 15 00:55:41.463466 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 15 00:55:41.516448 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 15 00:55:41.532965 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 15 00:55:41.532983 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 15 00:55:42.222880 disk-uuid[527]: The operation has completed successfully.
May 15 00:55:42.224055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:55:42.242499 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:55:42.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:42.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:42.242577 systemd[1]: Finished disk-uuid.service.
May 15 00:55:42.250604 systemd[1]: Starting verity-setup.service...
May 15 00:55:42.264470 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 15 00:55:42.282520 systemd[1]: Found device dev-mapper-usr.device.
May 15 00:55:42.285364 systemd[1]: Mounting sysusr-usr.mount...
May 15 00:55:42.287352 systemd[1]: Finished verity-setup.service.
May 15 00:55:42.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:42.346460 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 15 00:55:42.346473 systemd[1]: Mounted sysusr-usr.mount.
May 15 00:55:42.347485 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 15 00:55:42.348255 systemd[1]: Starting ignition-setup.service...
May 15 00:55:42.351200 systemd[1]: Starting parse-ip-for-networkd.service...
May 15 00:55:42.357602 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 00:55:42.357627 kernel: BTRFS info (device vda6): using free space tree
May 15 00:55:42.357637 kernel: BTRFS info (device vda6): has skinny extents
May 15 00:55:42.365052 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 00:55:42.372365 systemd[1]: Finished ignition-setup.service.
May 15 00:55:42.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:55:42.373970 systemd[1]: Starting ignition-fetch-offline.service...
May 15 00:55:42.409048 ignition[639]: Ignition 2.14.0
May 15 00:55:42.409108 ignition[639]: Stage: fetch-offline
May 15 00:55:42.409161 ignition[639]: no configs at "/usr/lib/ignition/base.d"
May 15 00:55:42.409180 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:55:42.409302 ignition[639]: parsed url from cmdline: ""
May 15 00:55:42.409306 ignition[639]: no config URL provided
May 15 00:55:42.409312 ignition[639]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:55:42.409327 ignition[639]: no config at "/usr/lib/ignition/user.ign"
May 15 00:55:42.409356 ignition[639]: op(1): [started] loading QEMU firmware config module
May 15 00:55:42.409367 ignition[639]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 00:55:42.413381 ignition[639]: op(1): [finished] loading QEMU firmware config module
May 15 00:55:42.415569 ignition[639]: parsing config with SHA512: 5629c316727d95fc6e1281be3e60ebce7d43d60139c948b7de3ecc47c7170012ced113eb4a9b960dad6c8c6752663148d5b023b9713b66ecdfd0233da372d2cb
May 15 00:55:42.420286 systemd[1]: Finished parse-ip-for-networkd.service.
May 15 00:55:42.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.421000 audit: BPF prog-id=9 op=LOAD May 15 00:55:42.422378 systemd[1]: Starting systemd-networkd.service... May 15 00:55:42.425817 unknown[639]: fetched base config from "system" May 15 00:55:42.426137 ignition[639]: fetch-offline: fetch-offline passed May 15 00:55:42.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.425826 unknown[639]: fetched user config from "qemu" May 15 00:55:42.426194 ignition[639]: Ignition finished successfully May 15 00:55:42.427146 systemd[1]: Finished ignition-fetch-offline.service. May 15 00:55:42.450401 systemd-networkd[722]: lo: Link UP May 15 00:55:42.450412 systemd-networkd[722]: lo: Gained carrier May 15 00:55:42.452276 systemd-networkd[722]: Enumeration completed May 15 00:55:42.452374 systemd[1]: Started systemd-networkd.service. May 15 00:55:42.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.453971 systemd[1]: Reached target network.target. May 15 00:55:42.454461 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:55:42.455165 systemd[1]: Starting ignition-kargs.service... May 15 00:55:42.457342 systemd[1]: Starting iscsiuio.service... May 15 00:55:42.461480 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:55:42.461581 systemd[1]: Started iscsiuio.service. May 15 00:55:42.463172 systemd-networkd[722]: eth0: Link UP May 15 00:55:42.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.463175 systemd-networkd[722]: eth0: Gained carrier May 15 00:55:42.465656 systemd[1]: Starting iscsid.service... May 15 00:55:42.467359 ignition[725]: Ignition 2.14.0 May 15 00:55:42.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.470614 systemd[1]: Finished ignition-kargs.service. May 15 00:55:42.467364 ignition[725]: Stage: kargs May 15 00:55:42.474452 iscsid[734]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 00:55:42.474452 iscsid[734]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 15 00:55:42.474452 iscsid[734]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 15 00:55:42.474452 iscsid[734]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 00:55:42.474452 iscsid[734]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 15 00:55:42.474452 iscsid[734]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 00:55:42.474452 iscsid[734]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 00:55:42.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.472338 systemd[1]: Starting ignition-disks.service... May 15 00:55:42.467463 ignition[725]: no configs at "/usr/lib/ignition/base.d" May 15 00:55:42.475531 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:55:42.467472 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:42.480621 systemd[1]: Finished ignition-disks.service. May 15 00:55:42.468148 ignition[725]: kargs: kargs passed May 15 00:55:42.483057 systemd[1]: Reached target initrd-root-device.target. May 15 00:55:42.468187 ignition[725]: Ignition finished successfully May 15 00:55:42.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.485345 systemd[1]: Reached target local-fs-pre.target. May 15 00:55:42.479191 ignition[735]: Ignition 2.14.0 May 15 00:55:42.487831 systemd[1]: Reached target local-fs.target. May 15 00:55:42.479197 ignition[735]: Stage: disks May 15 00:55:42.489807 systemd[1]: Reached target sysinit.target. May 15 00:55:42.479275 ignition[735]: no configs at "/usr/lib/ignition/base.d" May 15 00:55:42.491368 systemd[1]: Reached target basic.target. May 15 00:55:42.479282 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:42.497764 systemd[1]: Started iscsid.service. May 15 00:55:42.479954 ignition[735]: disks: disks passed May 15 00:55:42.500174 systemd[1]: Starting dracut-initqueue.service... May 15 00:55:42.479985 ignition[735]: Ignition finished successfully May 15 00:55:42.513947 systemd[1]: Finished dracut-initqueue.service. May 15 00:55:42.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.514881 systemd[1]: Reached target remote-fs-pre.target. May 15 00:55:42.516441 systemd[1]: Reached target remote-cryptsetup.target. May 15 00:55:42.516658 systemd[1]: Reached target remote-fs.target. May 15 00:55:42.519293 systemd[1]: Starting dracut-pre-mount.service... May 15 00:55:42.526400 systemd[1]: Finished dracut-pre-mount.service. May 15 00:55:42.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.527354 systemd[1]: Starting systemd-fsck-root.service... May 15 00:55:42.537163 systemd-fsck[757]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 15 00:55:42.542416 systemd[1]: Finished systemd-fsck-root.service. May 15 00:55:42.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.544500 systemd[1]: Mounting sysroot.mount... May 15 00:55:42.550310 systemd[1]: Mounted sysroot.mount. 
May 15 00:55:42.551627 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 00:55:42.550820 systemd[1]: Reached target initrd-root-fs.target. May 15 00:55:42.553286 systemd[1]: Mounting sysroot-usr.mount... May 15 00:55:42.554117 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 00:55:42.554164 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:55:42.554191 systemd[1]: Reached target ignition-diskful.target. May 15 00:55:42.555845 systemd[1]: Mounted sysroot-usr.mount. May 15 00:55:42.557992 systemd[1]: Starting initrd-setup-root.service... May 15 00:55:42.564315 initrd-setup-root[767]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:55:42.567845 initrd-setup-root[775]: cut: /sysroot/etc/group: No such file or directory May 15 00:55:42.573257 initrd-setup-root[783]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:55:42.577717 initrd-setup-root[791]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:55:42.605572 systemd[1]: Finished initrd-setup-root.service. May 15 00:55:42.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.607324 systemd[1]: Starting ignition-mount.service... May 15 00:55:42.608851 systemd[1]: Starting sysroot-boot.service... May 15 00:55:42.612620 bash[808]: umount: /sysroot/usr/share/oem: not mounted. May 15 00:55:42.619130 ignition[809]: INFO : Ignition 2.14.0 May 15 00:55:42.619130 ignition[809]: INFO : Stage: mount May 15 00:55:42.621294 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:55:42.621294 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:42.621294 ignition[809]: INFO : mount: mount passed May 15 00:55:42.621294 ignition[809]: INFO : Ignition finished successfully May 15 00:55:42.626077 systemd[1]: Finished ignition-mount.service. May 15 00:55:42.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:42.630055 systemd[1]: Finished sysroot-boot.service. May 15 00:55:42.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:43.295527 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 00:55:43.302611 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (818) May 15 00:55:43.305018 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:55:43.305041 kernel: BTRFS info (device vda6): using free space tree May 15 00:55:43.305053 kernel: BTRFS info (device vda6): has skinny extents May 15 00:55:43.309060 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 00:55:43.311841 systemd[1]: Starting ignition-files.service... 
May 15 00:55:43.324239 ignition[838]: INFO : Ignition 2.14.0 May 15 00:55:43.324239 ignition[838]: INFO : Stage: files May 15 00:55:43.326268 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:55:43.326268 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:43.326268 ignition[838]: DEBUG : files: compiled without relabeling support, skipping May 15 00:55:43.330312 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:55:43.330312 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:55:43.333327 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:55:43.334842 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:55:43.336983 unknown[838]: wrote ssh authorized keys file for user: core May 15 00:55:43.338123 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:55:43.339860 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 15 00:55:43.341804 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:55:43.343614 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:55:43.345362 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:55:43.345362 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 00:55:43.345362 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 00:55:43.345362 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 00:55:43.354443 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 May 15 00:55:43.529612 systemd-networkd[722]: eth0: Gained IPv6LL May 15 00:55:43.725860 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 15 00:55:44.225643 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" May 15 00:55:44.225643 ignition[838]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 15 00:55:44.232077 ignition[838]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:55:44.232077 ignition[838]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:55:44.232077 ignition[838]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 15 00:55:44.232077 ignition[838]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" May 15 00:55:44.232077 ignition[838]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:55:44.246940 ignition[838]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:55:44.249104 ignition[838]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:55:44.251161 ignition[838]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:55:44.253645 ignition[838]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:55:44.255874 ignition[838]: INFO : files: files passed May 15 00:55:44.256807 ignition[838]: INFO : Ignition finished successfully May 15 00:55:44.259251 systemd[1]: Finished ignition-files.service. May 15 00:55:44.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.262033 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 00:55:44.268417 kernel: kauditd_printk_skb: 25 callbacks suppressed May 15 00:55:44.268475 kernel: audit: type=1130 audit(1747270544.261:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.266107 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 00:55:44.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.266873 systemd[1]: Starting ignition-quench.service... May 15 00:55:44.284270 kernel: audit: type=1130 audit(1747270544.272:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.284295 kernel: audit: type=1131 audit(1747270544.272:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.284310 kernel: audit: type=1130 audit(1747270544.278:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.284403 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 00:55:44.269452 systemd[1]: ignition-quench.service: Deactivated successfully. 
May 15 00:55:44.288745 initrd-setup-root-after-ignition[866]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:55:44.269522 systemd[1]: Finished ignition-quench.service. May 15 00:55:44.272236 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 00:55:44.279215 systemd[1]: Reached target ignition-complete.target. May 15 00:55:44.284955 systemd[1]: Starting initrd-parse-etc.service... May 15 00:55:44.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.295455 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:55:44.305253 kernel: audit: type=1130 audit(1747270544.296:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.305278 kernel: audit: type=1131 audit(1747270544.296:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.295524 systemd[1]: Finished initrd-parse-etc.service. May 15 00:55:44.296732 systemd[1]: Reached target initrd-fs.target. May 15 00:55:44.303264 systemd[1]: Reached target initrd.target. May 15 00:55:44.305249 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 00:55:44.305796 systemd[1]: Starting dracut-pre-pivot.service... May 15 00:55:44.314845 systemd[1]: Finished dracut-pre-pivot.service. May 15 00:55:44.320326 kernel: audit: type=1130 audit(1747270544.315:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.316522 systemd[1]: Starting initrd-cleanup.service... May 15 00:55:44.325151 systemd[1]: Stopped target nss-lookup.target. May 15 00:55:44.326192 systemd[1]: Stopped target remote-cryptsetup.target. May 15 00:55:44.328116 systemd[1]: Stopped target timers.target. May 15 00:55:44.329967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:55:44.336514 kernel: audit: type=1131 audit(1747270544.331:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.330055 systemd[1]: Stopped dracut-pre-pivot.service. May 15 00:55:44.331859 systemd[1]: Stopped target initrd.target. May 15 00:55:44.336643 systemd[1]: Stopped target basic.target. May 15 00:55:44.338401 systemd[1]: Stopped target ignition-complete.target. May 15 00:55:44.340256 systemd[1]: Stopped target ignition-diskful.target. 
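
The Ignition "files" stage logged above records every file and link it writes under /sysroot. A small illustrative sketch that extracts the finished write operations from such lines; the sample entries are abbreviated copies of the journal text above:

    # Illustrative: pull op id, kind, and target path out of Ignition
    # "[finished] writing ..." log lines.
    import re

    journal_lines = [
        'ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"',
        'ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"',
        'ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw"',
        'ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"',
    ]

    pattern = re.compile(r'op\((\w+)\): \[finished\] writing (file|link) "([^"]+)"')
    for entry in journal_lines:
        match = pattern.search(entry)
        if match:
            op_id, kind, path = match.groups()
            print(f"op({op_id}) {kind}: {path}")
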
May 15 00:55:44.342131 systemd[1]: Stopped target initrd-root-device.target. May 15 00:55:44.344159 systemd[1]: Stopped target remote-fs.target. May 15 00:55:44.346062 systemd[1]: Stopped target remote-fs-pre.target. May 15 00:55:44.348055 systemd[1]: Stopped target sysinit.target. May 15 00:55:44.349828 systemd[1]: Stopped target local-fs.target. May 15 00:55:44.351661 systemd[1]: Stopped target local-fs-pre.target. May 15 00:55:44.353492 systemd[1]: Stopped target swap.target. May 15 00:55:44.362162 kernel: audit: type=1131 audit(1747270544.356:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.355175 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:55:44.355273 systemd[1]: Stopped dracut-pre-mount.service. May 15 00:55:44.369132 kernel: audit: type=1131 audit(1747270544.363:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.357108 systemd[1]: Stopped target cryptsetup.target. May 15 00:55:44.362186 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:55:44.362273 systemd[1]: Stopped dracut-initqueue.service. May 15 00:55:44.364479 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:55:44.364566 systemd[1]: Stopped ignition-fetch-offline.service. May 15 00:55:44.369229 systemd[1]: Stopped target paths.target. May 15 00:55:44.371015 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:55:44.374468 systemd[1]: Stopped systemd-ask-password-console.path. May 15 00:55:44.376466 systemd[1]: Stopped target slices.target. May 15 00:55:44.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.378161 systemd[1]: Stopped target sockets.target. May 15 00:55:44.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.380368 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:55:44.388111 iscsid[734]: iscsid shutting down. May 15 00:55:44.380491 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 00:55:44.382518 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:55:44.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:44.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.394088 ignition[879]: INFO : Ignition 2.14.0 May 15 00:55:44.394088 ignition[879]: INFO : Stage: umount May 15 00:55:44.394088 ignition[879]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:55:44.394088 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:55:44.394088 ignition[879]: INFO : umount: umount passed May 15 00:55:44.394088 ignition[879]: INFO : Ignition finished successfully May 15 00:55:44.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.382602 systemd[1]: Stopped ignition-files.service. May 15 00:55:44.384985 systemd[1]: Stopping ignition-mount.service... May 15 00:55:44.386067 systemd[1]: Stopping iscsid.service... May 15 00:55:44.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.388684 systemd[1]: Stopping sysroot-boot.service... May 15 00:55:44.389825 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:55:44.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.389975 systemd[1]: Stopped systemd-udev-trigger.service. May 15 00:55:44.392057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:55:44.392165 systemd[1]: Stopped dracut-pre-trigger.service. May 15 00:55:44.395800 systemd[1]: iscsid.service: Deactivated successfully. May 15 00:55:44.395890 systemd[1]: Stopped iscsid.service. May 15 00:55:44.397230 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:55:44.397297 systemd[1]: Stopped ignition-mount.service. May 15 00:55:44.399560 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:55:44.399642 systemd[1]: Closed iscsid.socket. 
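
Most of the records in the teardown above are audit SERVICE_START / SERVICE_STOP events; the kernel echoes the same events as audit type=1130 and type=1131 lines, as the numbered records a little earlier show. An illustrative sketch that tallies such records per unit; the sample string is a shortened copy of lines above:

    # Illustrative: count SERVICE_START / SERVICE_STOP audit records per unit.
    import re
    from collections import Counter

    sample = """
    audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=iscsid comm="systemd" res=success'
    audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=ignition-mount comm="systemd" res=success'
    audit[1]: SERVICE_START pid=1 uid=0 msg='unit=initrd-cleanup comm="systemd" res=success'
    audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=initrd-cleanup comm="systemd" res=success'
    """

    tally = Counter(re.findall(r"(SERVICE_START|SERVICE_STOP)[^\n]*?unit=([\w.@-]+)", sample))
    for (kind, unit), count in sorted(tally.items()):
        print(f"{kind:13s} {unit:16s} x{count}")
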
May 15 00:55:44.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.401549 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:55:44.401580 systemd[1]: Stopped ignition-disks.service. May 15 00:55:44.402443 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:55:44.402483 systemd[1]: Stopped ignition-kargs.service. May 15 00:55:44.404232 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:55:44.404265 systemd[1]: Stopped ignition-setup.service. May 15 00:55:44.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.406093 systemd[1]: Stopping iscsiuio.service... May 15 00:55:44.407862 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:55:44.408438 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:55:44.408535 systemd[1]: Finished initrd-cleanup.service. May 15 00:55:44.437000 audit: BPF prog-id=6 op=UNLOAD May 15 00:55:44.410173 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 00:55:44.410240 systemd[1]: Stopped iscsiuio.service. May 15 00:55:44.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.411675 systemd[1]: Stopped target network.target. May 15 00:55:44.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.413526 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:55:44.413564 systemd[1]: Closed iscsiuio.socket. May 15 00:55:44.415120 systemd[1]: Stopping systemd-networkd.service... May 15 00:55:44.416962 systemd[1]: Stopping systemd-resolved.service... May 15 00:55:44.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.420728 systemd-networkd[722]: eth0: DHCPv6 lease lost May 15 00:55:44.423528 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:55:44.423666 systemd[1]: Stopped systemd-networkd.service. May 15 00:55:44.445000 audit: BPF prog-id=9 op=UNLOAD May 15 00:55:44.429893 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:55:44.429980 systemd[1]: Stopped systemd-resolved.service. May 15 00:55:44.437018 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:55:44.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.437055 systemd[1]: Closed systemd-networkd.socket. May 15 00:55:44.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.439534 systemd[1]: Stopping network-cleanup.service... 
May 15 00:55:44.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.440628 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:55:44.440689 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 00:55:44.442162 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:55:44.442238 systemd[1]: Stopped systemd-sysctl.service. May 15 00:55:44.446349 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:55:44.446400 systemd[1]: Stopped systemd-modules-load.service. May 15 00:55:44.446809 systemd[1]: Stopping systemd-udevd.service... May 15 00:55:44.448181 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:55:44.452582 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:55:44.452666 systemd[1]: Stopped network-cleanup.service. May 15 00:55:44.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.455560 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:55:44.455638 systemd[1]: Stopped sysroot-boot.service. May 15 00:55:44.456158 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:55:44.456193 systemd[1]: Stopped initrd-setup-root.service. May 15 00:55:44.467134 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:55:44.467290 systemd[1]: Stopped systemd-udevd.service. May 15 00:55:44.470623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:55:44.470687 systemd[1]: Closed systemd-udevd-control.socket. May 15 00:55:44.473674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:55:44.473723 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 00:55:44.480211 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:55:44.480260 systemd[1]: Stopped dracut-pre-udev.service. May 15 00:55:44.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.482207 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:55:44.482241 systemd[1]: Stopped dracut-cmdline.service. May 15 00:55:44.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.485224 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:55:44.485984 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 00:55:44.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.489538 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 00:55:44.491382 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:55:44.491444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. 
May 15 00:55:44.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.494360 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:55:44.494392 systemd[1]: Stopped kmod-static-nodes.service. May 15 00:55:44.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.496227 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:55:44.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.496965 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 00:55:44.500345 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:55:44.502129 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:55:44.503219 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 00:55:44.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.505050 systemd[1]: Reached target initrd-switch-root.target. May 15 00:55:44.507233 systemd[1]: Starting initrd-switch-root.service... May 15 00:55:44.522835 systemd[1]: Switching root. May 15 00:55:44.542060 systemd-journald[198]: Journal stopped May 15 00:55:47.252436 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). May 15 00:55:47.252488 kernel: SELinux: Class mctp_socket not defined in policy. May 15 00:55:47.252506 kernel: SELinux: Class anon_inode not defined in policy. May 15 00:55:47.252516 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 00:55:47.252525 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:55:47.252534 kernel: SELinux: policy capability open_perms=1 May 15 00:55:47.252544 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:55:47.252553 kernel: SELinux: policy capability always_check_network=0 May 15 00:55:47.252562 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:55:47.252571 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:55:47.252582 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:55:47.252591 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:55:47.252605 systemd[1]: Successfully loaded SELinux policy in 38.271ms. May 15 00:55:47.252623 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.396ms. 
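
After the switch-root, the kernel lists the capabilities of the SELinux policy it just loaded (the "SELinux: policy capability ..." lines above). An illustrative sketch collecting those flags into a mapping; the text block is copied from the kernel messages above:

    # Illustrative: turn "SELinux: policy capability name=0/1" lines into a dict.
    import re

    kernel_text = """
    SELinux: policy capability network_peer_controls=1
    SELinux: policy capability open_perms=1
    SELinux: policy capability extended_socket_class=1
    SELinux: policy capability always_check_network=0
    SELinux: policy capability cgroup_seclabel=1
    SELinux: policy capability nnp_nosuid_transition=1
    SELinux: policy capability genfs_seclabel_symlinks=0
    SELinux: policy capability ioctl_skip_cloexec=0
    """

    capabilities = {name: value == "1"
                    for name, value in re.findall(r"policy capability (\w+)=([01])", kernel_text)}
    print(capabilities)
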
May 15 00:55:47.252634 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 00:55:47.252645 systemd[1]: Detected virtualization kvm. May 15 00:55:47.252656 systemd[1]: Detected architecture x86-64. May 15 00:55:47.252666 systemd[1]: Detected first boot. May 15 00:55:47.252676 systemd[1]: Initializing machine ID from VM UUID. May 15 00:55:47.252688 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 00:55:47.252699 systemd[1]: Populated /etc with preset unit settings. May 15 00:55:47.252710 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:55:47.252727 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:55:47.252738 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:55:47.252749 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:55:47.252759 systemd[1]: Stopped initrd-switch-root.service. May 15 00:55:47.252780 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:55:47.252792 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 00:55:47.252802 systemd[1]: Created slice system-addon\x2drun.slice. May 15 00:55:47.252813 systemd[1]: Created slice system-getty.slice. May 15 00:55:47.252823 systemd[1]: Created slice system-modprobe.slice. May 15 00:55:47.252833 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 00:55:47.252843 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 00:55:47.252853 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 00:55:47.252863 systemd[1]: Created slice user.slice. May 15 00:55:47.252876 systemd[1]: Started systemd-ask-password-console.path. May 15 00:55:47.252889 systemd[1]: Started systemd-ask-password-wall.path. May 15 00:55:47.252899 systemd[1]: Set up automount boot.automount. May 15 00:55:47.252909 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 00:55:47.252919 systemd[1]: Stopped target initrd-switch-root.target. May 15 00:55:47.252931 systemd[1]: Stopped target initrd-fs.target. May 15 00:55:47.252941 systemd[1]: Stopped target initrd-root-fs.target. May 15 00:55:47.252951 systemd[1]: Reached target integritysetup.target. May 15 00:55:47.252962 systemd[1]: Reached target remote-cryptsetup.target. May 15 00:55:47.252972 systemd[1]: Reached target remote-fs.target. May 15 00:55:47.252982 systemd[1]: Reached target slices.target. May 15 00:55:47.252992 systemd[1]: Reached target swap.target. May 15 00:55:47.253003 systemd[1]: Reached target torcx.target. May 15 00:55:47.253013 systemd[1]: Reached target veritysetup.target. May 15 00:55:47.253024 systemd[1]: Listening on systemd-coredump.socket. May 15 00:55:47.253035 systemd[1]: Listening on systemd-initctl.socket. May 15 00:55:47.253044 systemd[1]: Listening on systemd-networkd.socket. 
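
The systemd 252 banner above encodes its compile-time features as +FLAG / -FLAG tokens. A small illustrative sketch splitting that string into enabled and disabled sets (the string is copied from the line above, minus the trailing default-hierarchy setting):

    # Illustrative: split systemd's feature string into enabled/disabled sets.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP +SYSVINIT")

    enabled = sorted(f[1:] for f in features.split() if f[0] == "+")
    disabled = sorted(f[1:] for f in features.split() if f[0] == "-")
    print("enabled: ", enabled)
    print("disabled:", disabled)
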
May 15 00:55:47.253054 systemd[1]: Listening on systemd-udevd-control.socket. May 15 00:55:47.253064 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 00:55:47.253074 systemd[1]: Listening on systemd-userdbd.socket. May 15 00:55:47.253084 systemd[1]: Mounting dev-hugepages.mount... May 15 00:55:47.253094 systemd[1]: Mounting dev-mqueue.mount... May 15 00:55:47.253104 systemd[1]: Mounting media.mount... May 15 00:55:47.253116 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:47.253126 systemd[1]: Mounting sys-kernel-debug.mount... May 15 00:55:47.253140 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 00:55:47.253150 systemd[1]: Mounting tmp.mount... May 15 00:55:47.253161 systemd[1]: Starting flatcar-tmpfiles.service... May 15 00:55:47.253171 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:47.253181 systemd[1]: Starting kmod-static-nodes.service... May 15 00:55:47.253192 systemd[1]: Starting modprobe@configfs.service... May 15 00:55:47.253202 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:47.253213 systemd[1]: Starting modprobe@drm.service... May 15 00:55:47.253223 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:47.253233 systemd[1]: Starting modprobe@fuse.service... May 15 00:55:47.253243 systemd[1]: Starting modprobe@loop.service... May 15 00:55:47.253254 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:55:47.253264 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:55:47.253274 systemd[1]: Stopped systemd-fsck-root.service. May 15 00:55:47.253284 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:55:47.253294 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:55:47.253306 kernel: fuse: init (API version 7.34) May 15 00:55:47.253316 systemd[1]: Stopped systemd-journald.service. May 15 00:55:47.253326 kernel: loop: module loaded May 15 00:55:47.253336 systemd[1]: Starting systemd-journald.service... May 15 00:55:47.253346 systemd[1]: Starting systemd-modules-load.service... May 15 00:55:47.253356 systemd[1]: Starting systemd-network-generator.service... May 15 00:55:47.253367 systemd[1]: Starting systemd-remount-fs.service... May 15 00:55:47.253378 systemd[1]: Starting systemd-udev-trigger.service... May 15 00:55:47.253388 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:55:47.253399 systemd[1]: Stopped verity-setup.service. May 15 00:55:47.254216 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:47.254228 systemd[1]: Mounted dev-hugepages.mount. May 15 00:55:47.254238 systemd[1]: Mounted dev-mqueue.mount. May 15 00:55:47.254248 systemd[1]: Mounted media.mount. May 15 00:55:47.254258 systemd[1]: Mounted sys-kernel-debug.mount. May 15 00:55:47.254268 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 00:55:47.254278 systemd[1]: Mounted tmp.mount. May 15 00:55:47.254288 systemd[1]: Finished flatcar-tmpfiles.service. May 15 00:55:47.254303 systemd-journald[989]: Journal started May 15 00:55:47.254341 systemd-journald[989]: Runtime Journal (/run/log/journal/979e4d0d6f80469ab2b524e6f4c5b334) is 6.0M, max 48.5M, 42.5M free. 
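
systemd-journald reports the runtime journal above as "6.0M, max 48.5M, 42.5M free"; the three figures are mutually consistent (current usage equals max minus free). An illustrative cross-check:

    # Illustrative: parse the journald size report and verify used == max - free.
    import re

    line = ("Runtime Journal (/run/log/journal/979e4d0d6f80469ab2b524e6f4c5b334) "
            "is 6.0M, max 48.5M, 42.5M free.")
    used, cap, free = (float(x) for x in re.findall(r"([\d.]+)M", line))
    assert abs(used - (cap - free)) < 0.1
    print(f"runtime journal: {used}M used of {cap}M ({100 * used / cap:.1f}%)")
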
May 15 00:55:44.598000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:55:44.888000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 00:55:44.888000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 00:55:44.888000 audit: BPF prog-id=10 op=LOAD May 15 00:55:44.888000 audit: BPF prog-id=10 op=UNLOAD May 15 00:55:44.889000 audit: BPF prog-id=11 op=LOAD May 15 00:55:44.889000 audit: BPF prog-id=11 op=UNLOAD May 15 00:55:44.918000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 00:55:44.918000 audit[912]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:44.918000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:55:44.920000 audit[912]: AVC avc: denied { associate } for pid=912 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 00:55:44.920000 audit[912]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859a9 a2=1ed a3=0 items=2 ppid=895 pid=912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:44.920000 audit: CWD cwd="/" May 15 00:55:44.920000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:44.920000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:44.920000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:55:47.113000 audit: BPF prog-id=12 op=LOAD May 15 00:55:47.113000 audit: BPF prog-id=3 op=UNLOAD May 15 00:55:47.113000 audit: BPF prog-id=13 op=LOAD May 15 00:55:47.113000 audit: BPF prog-id=14 op=LOAD May 15 00:55:47.113000 audit: BPF prog-id=4 op=UNLOAD May 15 00:55:47.113000 audit: BPF prog-id=5 op=UNLOAD May 15 00:55:47.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.124000 audit: BPF prog-id=12 op=UNLOAD May 15 00:55:47.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.223000 audit: BPF prog-id=15 op=LOAD May 15 00:55:47.223000 audit: BPF prog-id=16 op=LOAD May 15 00:55:47.223000 audit: BPF prog-id=17 op=LOAD May 15 00:55:47.223000 audit: BPF prog-id=13 op=UNLOAD May 15 00:55:47.223000 audit: BPF prog-id=14 op=UNLOAD May 15 00:55:47.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.249000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 00:55:47.249000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcba1ccdd0 a2=4000 a3=7ffcba1cce6c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:47.249000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 00:55:47.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.917960 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:47.256141 systemd[1]: Finished kmod-static-nodes.service. May 15 00:55:47.256161 systemd[1]: Started systemd-journald.service. 
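
The audit PROCTITLE records above (for torcx-generator) encode the process command line as hex, with arguments separated by NUL bytes. A minimal illustrative sketch decoding the first such record; the hex string is copied verbatim from the record, and its final argument is truncated there by the audit subsystem, so it is left truncated here as well:

    # Illustrative: decode an audit PROCTITLE value (hex-encoded, NUL-separated argv).
    proctitle = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F7200"
                 "2F72756E2F73797374656D642F67656E657261746F7200"
                 "2F72756E2F73797374656D642F67656E657261746F722E6561726C7900"
                 "2F72756E2F73797374656D642F67656E657261746F722E6C61")

    argv = [arg.decode() for arg in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   <- last path cut short in the audit record
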
May 15 00:55:47.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.111595 systemd[1]: Queued start job for default target multi-user.target. May 15 00:55:44.918171 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:47.111607 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 00:55:44.918185 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:47.114828 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:55:44.918210 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 00:55:44.918218 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 00:55:44.918243 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 00:55:44.918253 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 00:55:44.918442 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 00:55:44.918474 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 00:55:47.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.918484 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 00:55:44.918780 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 00:55:44.918811 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 00:55:47.258113 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
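
The "audit: BPF prog-id=N op=LOAD/UNLOAD" records scattered through the lines above track the BPF programs systemd attaches and detaches as units start and stop. An illustrative sketch that replays such records to see which program IDs remain loaded; the sample is a shortened copy of records above:

    # Illustrative: replay BPF LOAD/UNLOAD audit records, report what stays loaded.
    import re

    audit_text = """
    audit: BPF prog-id=15 op=LOAD
    audit: BPF prog-id=16 op=LOAD
    audit: BPF prog-id=17 op=LOAD
    audit: BPF prog-id=13 op=UNLOAD
    audit: BPF prog-id=14 op=UNLOAD
    """

    loaded = set()
    for prog_id, op in re.findall(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", audit_text):
        if op == "LOAD":
            loaded.add(int(prog_id))
        else:
            loaded.discard(int(prog_id))

    print("still loaded:", sorted(loaded))   # -> [15, 16, 17]
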
May 15 00:55:44.918826 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 15 00:55:47.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:44.918847 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 00:55:47.258239 systemd[1]: Finished modprobe@configfs.service. May 15 00:55:44.918861 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 15 00:55:44.918873 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 00:55:47.259346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:46.720736 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:46.720981 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:46.721074 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:47.259594 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:47.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:46.721214 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 00:55:46.721258 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 00:55:46.721308 /usr/lib/systemd/system-generators/torcx-generator[912]: time="2025-05-15T00:55:46Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 00:55:47.260726 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:55:47.260857 systemd[1]: Finished modprobe@drm.service. May 15 00:55:47.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.261863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:47.261990 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:47.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.263029 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:55:47.263146 systemd[1]: Finished modprobe@fuse.service. May 15 00:55:47.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.264105 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:47.264215 systemd[1]: Finished modprobe@loop.service. May 15 00:55:47.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:47.265221 systemd[1]: Finished systemd-modules-load.service. May 15 00:55:47.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.266304 systemd[1]: Finished systemd-network-generator.service. May 15 00:55:47.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.267408 systemd[1]: Finished systemd-remount-fs.service. May 15 00:55:47.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.268678 systemd[1]: Reached target network-pre.target. May 15 00:55:47.270640 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 00:55:47.272403 systemd[1]: Mounting sys-kernel-config.mount... May 15 00:55:47.273168 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:55:47.274314 systemd[1]: Starting systemd-hwdb-update.service... May 15 00:55:47.275853 systemd[1]: Starting systemd-journal-flush.service... May 15 00:55:47.276761 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:47.277969 systemd[1]: Starting systemd-random-seed.service... May 15 00:55:47.279017 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:47.279881 systemd[1]: Starting systemd-sysctl.service... May 15 00:55:47.281630 systemd[1]: Starting systemd-sysusers.service... May 15 00:55:47.284722 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:55:47.285362 systemd-journald[989]: Time spent on flushing to /var/log/journal/979e4d0d6f80469ab2b524e6f4c5b334 is 25.103ms for 1082 entries. May 15 00:55:47.285362 systemd-journald[989]: System Journal (/var/log/journal/979e4d0d6f80469ab2b524e6f4c5b334) is 8.0M, max 195.6M, 187.6M free. May 15 00:55:47.326948 systemd-journald[989]: Received client request to flush runtime journal. May 15 00:55:47.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:55:47.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.286949 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 00:55:47.327464 udevadm[1016]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:55:47.288314 systemd[1]: Mounted sys-kernel-config.mount. May 15 00:55:47.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.290370 systemd[1]: Starting systemd-udev-settle.service... May 15 00:55:47.296668 systemd[1]: Finished systemd-random-seed.service. May 15 00:55:47.298279 systemd[1]: Finished systemd-sysusers.service. May 15 00:55:47.299521 systemd[1]: Reached target first-boot-complete.target. May 15 00:55:47.301799 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 00:55:47.304623 systemd[1]: Finished systemd-sysctl.service. May 15 00:55:47.317893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 00:55:47.327700 systemd[1]: Finished systemd-journal-flush.service. May 15 00:55:47.707474 systemd[1]: Finished systemd-hwdb-update.service. May 15 00:55:47.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.708000 audit: BPF prog-id=18 op=LOAD May 15 00:55:47.708000 audit: BPF prog-id=19 op=LOAD May 15 00:55:47.708000 audit: BPF prog-id=7 op=UNLOAD May 15 00:55:47.708000 audit: BPF prog-id=8 op=UNLOAD May 15 00:55:47.709755 systemd[1]: Starting systemd-udevd.service... May 15 00:55:47.724643 systemd-udevd[1021]: Using default interface naming scheme 'v252'. May 15 00:55:47.737081 systemd[1]: Started systemd-udevd.service. May 15 00:55:47.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.739000 audit: BPF prog-id=20 op=LOAD May 15 00:55:47.740732 systemd[1]: Starting systemd-networkd.service... May 15 00:55:47.747000 audit: BPF prog-id=21 op=LOAD May 15 00:55:47.747000 audit: BPF prog-id=22 op=LOAD May 15 00:55:47.747000 audit: BPF prog-id=23 op=LOAD May 15 00:55:47.748806 systemd[1]: Starting systemd-userdbd.service... May 15 00:55:47.768148 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 15 00:55:47.773940 systemd[1]: Started systemd-userdbd.service. May 15 00:55:47.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.787955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
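
systemd derives unit names from paths by turning "/" into "-" and escaping other characters as \xNN, which is why the device unit found above is spelled dev-disk-by\x2dlabel-OEM.device (for /dev/disk/by-label/OEM) and the OEM mount earlier is sysroot-usr-share-oem.mount. A simplified illustrative sketch of the reverse mapping; it only handles the "-" and \xNN cases seen in this log, not the full systemd-escape rules:

    # Illustrative, simplified reverse of systemd unit-name escaping:
    # "-" -> "/", "\xNN" -> the escaped byte; the unit suffix is dropped.
    def unescape_unit(name: str) -> str:
        body = name.rsplit(".", 1)[0]          # drop ".device", ".mount", ...
        out, i = [], 0
        while i < len(body):
            if body.startswith("\\x", i):
                out.append(chr(int(body[i + 2:i + 4], 16)))
                i += 4
            elif body[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(body[i])
                i += 1
        return "/" + "".join(out)

    print(unescape_unit(r"dev-disk-by\x2dlabel-OEM.device"))   # /dev/disk/by-label/OEM
    print(unescape_unit("sysroot-usr-share-oem.mount"))        # /sysroot/usr/share/oem
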
May 15 00:55:47.808450 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 00:55:47.815459 kernel: ACPI: button: Power Button [PWRF] May 15 00:55:47.818931 systemd-networkd[1035]: lo: Link UP May 15 00:55:47.818938 systemd-networkd[1035]: lo: Gained carrier May 15 00:55:47.819282 systemd-networkd[1035]: Enumeration completed May 15 00:55:47.819379 systemd-networkd[1035]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:55:47.819393 systemd[1]: Started systemd-networkd.service. May 15 00:55:47.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.820717 systemd-networkd[1035]: eth0: Link UP May 15 00:55:47.820720 systemd-networkd[1035]: eth0: Gained carrier May 15 00:55:47.823000 audit[1026]: AVC avc: denied { confidentiality } for pid=1026 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 00:55:47.847593 systemd-networkd[1035]: eth0: DHCPv4 address 10.0.0.130/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:55:47.823000 audit[1026]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f414316c80 a1=338ac a2=7fca901fabc5 a3=5 items=110 ppid=1021 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:47.823000 audit: CWD cwd="/" May 15 00:55:47.823000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=1 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=2 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=3 name=(null) inode=15491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=4 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=5 name=(null) inode=15492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=6 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=7 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=8 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=9 name=(null) inode=15494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=10 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=11 name=(null) inode=15495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=12 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=13 name=(null) inode=15496 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=14 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=15 name=(null) inode=15497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=16 name=(null) inode=15493 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=17 name=(null) inode=15498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=18 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=19 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=20 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=21 name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=22 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=23 name=(null) inode=15501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=24 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:47.823000 audit: PATH item=25 name=(null) inode=15502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=26 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=27 name=(null) inode=15503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=28 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=29 name=(null) inode=15504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=30 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=31 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=32 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=33 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=34 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=35 name=(null) inode=15507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=36 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=37 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=38 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=39 name=(null) inode=15509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=40 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=41 name=(null) inode=15510 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=42 name=(null) inode=15490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=43 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=44 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=45 name=(null) inode=15512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=46 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=47 name=(null) inode=15513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=48 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=49 name=(null) inode=15514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=50 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=51 name=(null) inode=15515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=52 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=53 name=(null) inode=15516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=55 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=56 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=57 name=(null) inode=15518 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=58 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=59 name=(null) inode=15519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=60 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=61 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=62 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=63 name=(null) inode=15521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=64 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=65 name=(null) inode=15522 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=66 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=67 name=(null) inode=15523 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=68 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=69 name=(null) inode=15524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=70 name=(null) inode=15520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=71 name=(null) inode=15525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=72 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=73 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:55:47.823000 audit: PATH item=74 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=75 name=(null) inode=15527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=76 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=77 name=(null) inode=15528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=78 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=79 name=(null) inode=15529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=80 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=81 name=(null) inode=15530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=82 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=83 name=(null) inode=15531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=84 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=85 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=86 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=87 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=88 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=89 name=(null) inode=15534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=90 name=(null) inode=15532 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=91 name=(null) inode=15535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=92 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=93 name=(null) inode=15536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=94 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=95 name=(null) inode=15537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=96 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=97 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=98 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=99 name=(null) inode=15539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=100 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=101 name=(null) inode=15540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=102 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=103 name=(null) inode=15541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=104 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=105 name=(null) inode=15542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=106 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=107 name=(null) inode=15543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PATH item=109 name=(null) inode=14673 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:55:47.823000 audit: PROCTITLE proctitle="(udev-worker)" May 15 00:55:47.856517 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 00:55:47.861447 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 00:55:47.865892 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 00:55:47.866008 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 00:55:47.872456 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:55:47.895455 kernel: kvm: Nested Virtualization enabled May 15 00:55:47.895600 kernel: SVM: kvm: Nested Paging enabled May 15 00:55:47.895647 kernel: SVM: Virtual VMLOAD VMSAVE supported May 15 00:55:47.895674 kernel: SVM: Virtual GIF supported May 15 00:55:47.912447 kernel: EDAC MC: Ver: 3.0.0 May 15 00:55:47.944811 systemd[1]: Finished systemd-udev-settle.service. May 15 00:55:47.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.947032 systemd[1]: Starting lvm2-activation-early.service... May 15 00:55:47.954932 lvm[1057]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:47.982355 systemd[1]: Finished lvm2-activation-early.service. May 15 00:55:47.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:47.983548 systemd[1]: Reached target cryptsetup.target. May 15 00:55:47.985364 systemd[1]: Starting lvm2-activation.service... May 15 00:55:47.990039 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:55:48.016392 systemd[1]: Finished lvm2-activation.service. May 15 00:55:48.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.017467 systemd[1]: Reached target local-fs-pre.target. May 15 00:55:48.018444 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:55:48.018463 systemd[1]: Reached target local-fs.target. May 15 00:55:48.019328 systemd[1]: Reached target machines.target. May 15 00:55:48.021060 systemd[1]: Starting ldconfig.service... May 15 00:55:48.022043 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
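The audit event closed out above is a single kernel audit record: one SYSCALL line from a udev worker, 110 PATH items alternating nametype=PARENT and nametype=CREATE under tracefs/debugfs, and a hex-encoded PROCTITLE. A rough way to summarise such an event offline is sketched below; the file name is hypothetical and the parsing deliberately naive:

```python
import re
from collections import Counter

def summarize_audit_paths(text: str) -> Counter:
    """Count PATH item kinds (PARENT vs CREATE) in a raw audit capture."""
    return Counter(re.findall(r'nametype=(\w+)', text))

# For the single event above this reports 55 PARENT and 55 CREATE items
# (one parent-directory entry per created tracefs/debugfs node); run over a
# whole capture it aggregates every event in the file.
with open('boot.log') as f:  # hypothetical path to this captured console log
    print(summarize_audit_paths(f.read()))
```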
May 15 00:55:48.022078 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:48.022872 systemd[1]: Starting systemd-boot-update.service... May 15 00:55:48.024649 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 00:55:48.026641 systemd[1]: Starting systemd-machine-id-commit.service... May 15 00:55:48.028418 systemd[1]: Starting systemd-sysext.service... May 15 00:55:48.029660 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1060 (bootctl) May 15 00:55:48.031016 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 00:55:48.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.035068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 00:55:48.038730 systemd[1]: Unmounting usr-share-oem.mount... May 15 00:55:48.042010 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 00:55:48.042185 systemd[1]: Unmounted usr-share-oem.mount. May 15 00:55:48.051454 kernel: loop0: detected capacity change from 0 to 205544 May 15 00:55:48.065483 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31) May 15 00:55:48.065483 systemd-fsck[1068]: /dev/vda1: 790 files, 120690/258078 clusters May 15 00:55:48.066844 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 00:55:48.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.069648 systemd[1]: Mounting boot.mount... May 15 00:55:48.219527 systemd[1]: Mounted boot.mount. May 15 00:55:48.227577 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:55:48.232437 systemd[1]: Finished systemd-machine-id-commit.service. May 15 00:55:48.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.236985 systemd[1]: Finished systemd-boot-update.service. May 15 00:55:48.243010 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:55:48.244472 kernel: loop1: detected capacity change from 0 to 205544 May 15 00:55:48.249097 (sd-sysext)[1073]: Using extensions 'kubernetes'. May 15 00:55:48.249485 (sd-sysext)[1073]: Merged extensions into '/usr'. May 15 00:55:48.264202 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.265844 systemd[1]: Mounting usr-share-oem.mount... May 15 00:55:48.266912 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:48.268196 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:48.271766 systemd[1]: Starting modprobe@efi_pstore.service... 
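The fsck.fat summary above ("/dev/vda1: 790 files, 120690/258078 clusters") already implies how full the EFI system partition is; the quick arithmetic below just makes that explicit. The 4 KiB cluster size is an assumption for illustration only; the log does not state it:

```python
# fsck.fat reported "/dev/vda1: 790 files, 120690/258078 clusters" above.
used_clusters, total_clusters = 120690, 258078
cluster_size = 4096  # bytes, assumed -- not reported in the log

print(f"usage: {used_clusters / total_clusters:.1%}")  # ~46.8%
print(f"~{used_clusters * cluster_size / 2**20:.0f} MiB of "
      f"~{total_clusters * cluster_size / 2**20:.0f} MiB (if 4 KiB clusters)")
```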
May 15 00:55:48.273663 systemd[1]: Starting modprobe@loop.service... May 15 00:55:48.274409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:48.274532 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:48.274631 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.276853 systemd[1]: Mounted usr-share-oem.mount. May 15 00:55:48.277870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:48.277982 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:48.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.279093 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:48.279186 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:48.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.280302 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:48.280392 systemd[1]: Finished modprobe@loop.service. May 15 00:55:48.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.281662 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:48.281769 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:48.282622 systemd[1]: Finished systemd-sysext.service. May 15 00:55:48.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.284546 systemd[1]: Starting ensure-sysext.service... May 15 00:55:48.286315 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 00:55:48.291335 systemd[1]: Reloading. May 15 00:55:48.295881 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 00:55:48.296501 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
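The "Duplicate line" warnings above mean several tmpfiles.d snippets declare the same path (/run/lock, /root, /var/lib/systemd), and systemd-tmpfiles ignores the later duplicates. A sketch of how one might list such overlaps on a host like this, assuming the stock /usr/lib/tmpfiles.d layout:

```python
import glob
from collections import defaultdict

# Map each declared path to the snippets that claim it, then print overlaps.
# tmpfiles.d lines are "Type Path Mode User Group Age Argument"; field 1 is the path.
claims = defaultdict(list)
for snippet in sorted(glob.glob('/usr/lib/tmpfiles.d/*.conf')):
    with open(snippet) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and not line.lstrip().startswith('#'):
                claims[fields[1]].append(snippet)

for path, sources in claims.items():
    if len(sources) > 1:
        print(path, '<-', ', '.join(sources))
```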
May 15 00:55:48.297915 systemd-tmpfiles[1080]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:55:48.375314 ldconfig[1059]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:55:48.409888 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-15T00:55:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:48.410196 /usr/lib/systemd/system-generators/torcx-generator[1099]: time="2025-05-15T00:55:48Z" level=info msg="torcx already run" May 15 00:55:48.473836 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:55:48.473855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:55:48.491163 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:55:48.546000 audit: BPF prog-id=24 op=LOAD May 15 00:55:48.546000 audit: BPF prog-id=21 op=UNLOAD May 15 00:55:48.546000 audit: BPF prog-id=25 op=LOAD May 15 00:55:48.546000 audit: BPF prog-id=26 op=LOAD May 15 00:55:48.546000 audit: BPF prog-id=22 op=UNLOAD May 15 00:55:48.546000 audit: BPF prog-id=23 op=UNLOAD May 15 00:55:48.546000 audit: BPF prog-id=27 op=LOAD May 15 00:55:48.546000 audit: BPF prog-id=28 op=LOAD May 15 00:55:48.546000 audit: BPF prog-id=18 op=UNLOAD May 15 00:55:48.546000 audit: BPF prog-id=19 op=UNLOAD May 15 00:55:48.547000 audit: BPF prog-id=29 op=LOAD May 15 00:55:48.547000 audit: BPF prog-id=15 op=UNLOAD May 15 00:55:48.547000 audit: BPF prog-id=30 op=LOAD May 15 00:55:48.548000 audit: BPF prog-id=31 op=LOAD May 15 00:55:48.548000 audit: BPF prog-id=16 op=UNLOAD May 15 00:55:48.548000 audit: BPF prog-id=17 op=UNLOAD May 15 00:55:48.548000 audit: BPF prog-id=32 op=LOAD May 15 00:55:48.548000 audit: BPF prog-id=20 op=UNLOAD May 15 00:55:48.553593 systemd[1]: Finished ldconfig.service. May 15 00:55:48.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.569490 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 00:55:48.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.574189 systemd[1]: Starting audit-rules.service... May 15 00:55:48.575849 systemd[1]: Starting clean-ca-certificates.service... May 15 00:55:48.577670 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 00:55:48.578000 audit: BPF prog-id=33 op=LOAD May 15 00:55:48.579976 systemd[1]: Starting systemd-resolved.service... May 15 00:55:48.581000 audit: BPF prog-id=34 op=LOAD May 15 00:55:48.582087 systemd[1]: Starting systemd-timesyncd.service... May 15 00:55:48.583818 systemd[1]: Starting systemd-update-utmp.service... 
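The unit-file warnings above (CPUShares= and MemoryLimit= in locksmithd.service, plus the legacy /var/run path in docker.socket) are deprecation notices rather than failures. A small scan for the two deprecated directives, using the replacement names suggested by the warnings themselves; purely illustrative:

```python
import pathlib

# systemd warned above about locksmithd.service using CPUShares= and MemoryLimit=.
# This only locates such directives so they can be migrated by hand; it does not
# attempt any value conversion.
DEPRECATED = {'CPUShares=': 'CPUWeight=', 'MemoryLimit=': 'MemoryMax='}

for unit in pathlib.Path('/usr/lib/systemd/system').glob('*.service'):
    for lineno, line in enumerate(unit.read_text().splitlines(), start=1):
        for old, new in DEPRECATED.items():
            if line.strip().startswith(old):
                print(f"{unit}:{lineno}: {old} -> consider {new}")
```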
May 15 00:55:48.587000 audit[1149]: SYSTEM_BOOT pid=1149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 00:55:48.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.587004 systemd[1]: Finished clean-ca-certificates.service. May 15 00:55:48.591850 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.592092 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:48.593349 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:48.595122 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:48.597047 systemd[1]: Starting modprobe@loop.service... May 15 00:55:48.597830 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:48.598122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:48.598286 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:48.598413 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.600493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:48.600636 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:48.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.602002 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 00:55:48.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.603344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:48.603451 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:48.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.604753 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:48.604846 systemd[1]: Finished modprobe@loop.service. 
May 15 00:55:48.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.606795 systemd[1]: Finished systemd-update-utmp.service. May 15 00:55:48.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:55:48.609599 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.609812 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:55:48.610943 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:48.612959 augenrules[1165]: No rules May 15 00:55:48.612000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 00:55:48.612000 audit[1165]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe5636ffc0 a2=420 a3=0 items=0 ppid=1141 pid=1165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:55:48.612000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 00:55:48.615265 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:48.616913 systemd[1]: Starting modprobe@loop.service... May 15 00:55:48.617719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:48.617828 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:48.618819 systemd[1]: Starting systemd-update-done.service... May 15 00:55:48.619731 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:48.619822 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.620664 systemd[1]: Finished audit-rules.service. May 15 00:55:48.622004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:48.622102 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:48.623421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:48.623630 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:48.624894 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:48.624988 systemd[1]: Finished modprobe@loop.service. May 15 00:55:48.626147 systemd[1]: Finished systemd-update-done.service. May 15 00:55:48.629908 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.630152 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
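The auditctl event above carries its command line hex-encoded in the PROCTITLE field. Decoding it shows the exact invocation behind the "augenrules: No rules" load:

```python
# PROCTITLE from the auditctl event above, NUL-separated and hex-encoded.
hexdata = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(hexdata).split(b"\x00")
print([a.decode() for a in argv])  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```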
May 15 00:55:48.631483 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:55:48.633079 systemd[1]: Starting modprobe@drm.service... May 15 00:55:48.634770 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:55:48.636379 systemd[1]: Starting modprobe@loop.service... May 15 00:55:48.637398 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:55:48.637504 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:48.638517 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 00:55:48.639628 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:55:48.639829 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:55:48.640714 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:55:48.640820 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:55:48.642201 systemd[1]: Started systemd-timesyncd.service. May 15 00:55:49.256164 systemd-timesyncd[1148]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:55:49.256385 systemd-timesyncd[1148]: Initial clock synchronization to Thu 2025-05-15 00:55:49.256100 UTC. May 15 00:55:49.257357 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:55:49.257476 systemd[1]: Finished modprobe@drm.service. May 15 00:55:49.257783 systemd-resolved[1145]: Positive Trust Anchors: May 15 00:55:49.257799 systemd-resolved[1145]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:55:49.257826 systemd-resolved[1145]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 00:55:49.258823 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:55:49.258965 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:55:49.260394 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:55:49.260488 systemd[1]: Finished modprobe@loop.service. May 15 00:55:49.261903 systemd[1]: Reached target time-set.target. May 15 00:55:49.262965 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:55:49.262996 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:55:49.263301 systemd[1]: Finished ensure-sysext.service. May 15 00:55:49.288542 systemd-resolved[1145]: Defaulting to hostname 'linux'. May 15 00:55:49.289987 systemd[1]: Started systemd-resolved.service. May 15 00:55:49.290920 systemd[1]: Reached target network.target. May 15 00:55:49.291773 systemd[1]: Reached target nss-lookup.target. May 15 00:55:49.292845 systemd[1]: Reached target sysinit.target. May 15 00:55:49.293768 systemd[1]: Started motdgen.path. May 15 00:55:49.294552 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
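The positive trust anchor printed by systemd-resolved above is the root DS record. Splitting it into its RFC 4034 fields makes the numbers readable; the name maps below cover only the values that actually appear in this log:

```python
# Root trust anchor as logged by systemd-resolved above:
# ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
ds = "20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
key_tag, algorithm, digest_type, digest = ds.split()

algorithms = {"8": "RSASHA256"}     # DNSKEY algorithm numbers seen here
digest_types = {"2": "SHA-256"}     # DS digest type numbers seen here
print(f"key tag {key_tag}, algorithm {algorithms[algorithm]}, "
      f"digest type {digest_types[digest_type]}, digest {len(digest) // 2} bytes")
```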
May 15 00:55:49.295863 systemd[1]: Started logrotate.timer. May 15 00:55:49.296754 systemd[1]: Started mdadm.timer. May 15 00:55:49.297505 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 00:55:49.298438 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:55:49.298464 systemd[1]: Reached target paths.target. May 15 00:55:49.299290 systemd[1]: Reached target timers.target. May 15 00:55:49.300469 systemd[1]: Listening on dbus.socket. May 15 00:55:49.302292 systemd[1]: Starting docker.socket... May 15 00:55:49.304815 systemd[1]: Listening on sshd.socket. May 15 00:55:49.305732 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:49.306069 systemd[1]: Listening on docker.socket. May 15 00:55:49.306944 systemd[1]: Reached target sockets.target. May 15 00:55:49.307779 systemd[1]: Reached target basic.target. May 15 00:55:49.308702 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 00:55:49.308725 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 00:55:49.309501 systemd[1]: Starting containerd.service... May 15 00:55:49.311133 systemd[1]: Starting dbus.service... May 15 00:55:49.312728 systemd[1]: Starting enable-oem-cloudinit.service... May 15 00:55:49.315032 systemd[1]: Starting extend-filesystems.service... May 15 00:55:49.316114 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 00:55:49.317273 systemd[1]: Starting motdgen.service... May 15 00:55:49.324495 extend-filesystems[1184]: Found loop1 May 15 00:55:49.361040 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 00:55:49.361221 jq[1183]: false May 15 00:55:49.361409 extend-filesystems[1184]: Found sr0 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda May 15 00:55:49.361409 extend-filesystems[1184]: Found vda1 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda2 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda3 May 15 00:55:49.361409 extend-filesystems[1184]: Found usr May 15 00:55:49.361409 extend-filesystems[1184]: Found vda4 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda6 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda7 May 15 00:55:49.361409 extend-filesystems[1184]: Found vda9 May 15 00:55:49.361409 extend-filesystems[1184]: Checking size of /dev/vda9 May 15 00:55:49.366778 systemd[1]: Starting sshd-keygen.service... May 15 00:55:49.377228 dbus-daemon[1182]: [system] SELinux support is enabled May 15 00:55:49.382065 extend-filesystems[1184]: Resized partition /dev/vda9 May 15 00:55:49.371485 systemd[1]: Starting systemd-logind.service... May 15 00:55:49.386475 extend-filesystems[1200]: resize2fs 1.46.5 (30-Dec-2021) May 15 00:55:49.372543 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:55:49.372652 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:55:49.380963 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. May 15 00:55:49.381692 systemd[1]: Starting update-engine.service... May 15 00:55:49.384579 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 00:55:49.386167 systemd[1]: Started dbus.service. May 15 00:55:49.391651 jq[1206]: true May 15 00:55:49.391972 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:55:49.392150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 00:55:49.392441 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:55:49.392584 systemd[1]: Finished motdgen.service. May 15 00:55:49.393553 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:55:49.393766 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 00:55:49.403993 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:55:49.404032 systemd[1]: Reached target system-config.target. May 15 00:55:49.404980 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:55:49.405006 systemd[1]: Reached target user-config.target. May 15 00:55:49.434029 jq[1209]: true May 15 00:55:49.490888 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:55:49.492120 systemd-logind[1202]: Watching system buttons on /dev/input/event1 (Power Button) May 15 00:55:49.492419 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 15 00:55:49.495746 systemd-logind[1202]: New seat seat0. May 15 00:55:49.503616 systemd[1]: Started systemd-logind.service. May 15 00:55:49.518965 env[1210]: time="2025-05-15T00:55:49.518898794Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 00:55:49.536913 env[1210]: time="2025-05-15T00:55:49.536883358Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:55:49.537127 env[1210]: time="2025-05-15T00:55:49.537109312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:49.538306 env[1210]: time="2025-05-15T00:55:49.538273185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:49.538387 env[1210]: time="2025-05-15T00:55:49.538368664Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:49.538657 env[1210]: time="2025-05-15T00:55:49.538637318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:49.538736 env[1210]: time="2025-05-15T00:55:49.538718260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 15 00:55:49.538812 env[1210]: time="2025-05-15T00:55:49.538793321Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 00:55:49.538902 env[1210]: time="2025-05-15T00:55:49.538879963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:55:49.539068 env[1210]: time="2025-05-15T00:55:49.539048560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:49.539398 env[1210]: time="2025-05-15T00:55:49.539381003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:55:49.539596 env[1210]: time="2025-05-15T00:55:49.539575328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:55:49.539684 env[1210]: time="2025-05-15T00:55:49.539665356Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:55:49.539818 env[1210]: time="2025-05-15T00:55:49.539797564Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 00:55:49.539915 env[1210]: time="2025-05-15T00:55:49.539895388Z" level=info msg="metadata content store policy set" policy=shared May 15 00:55:49.837388 update_engine[1205]: I0515 00:55:49.837180 1205 main.cc:92] Flatcar Update Engine starting May 15 00:55:49.846735 update_engine[1205]: I0515 00:55:49.846688 1205 update_check_scheduler.cc:74] Next update check in 8m59s May 15 00:55:49.846709 systemd[1]: Started update-engine.service. May 15 00:55:49.849864 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:55:49.850372 systemd[1]: Started locksmithd.service. May 15 00:55:49.914531 locksmithd[1236]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:55:50.027366 extend-filesystems[1200]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:55:50.027366 extend-filesystems[1200]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:55:50.027366 extend-filesystems[1200]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:55:50.032558 extend-filesystems[1184]: Resized filesystem in /dev/vda9 May 15 00:55:50.028258 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:55:50.028441 systemd[1]: Finished extend-filesystems.service. May 15 00:55:50.036326 env[1210]: time="2025-05-15T00:55:50.036282699Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:55:50.036384 env[1210]: time="2025-05-15T00:55:50.036328054Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:55:50.036384 env[1210]: time="2025-05-15T00:55:50.036342751Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:55:50.036384 env[1210]: time="2025-05-15T00:55:50.036381474Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036451 env[1210]: time="2025-05-15T00:55:50.036398105Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 15 00:55:50.036451 env[1210]: time="2025-05-15T00:55:50.036411049Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036451 env[1210]: time="2025-05-15T00:55:50.036422902Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036451 env[1210]: time="2025-05-15T00:55:50.036435105Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036451 env[1210]: time="2025-05-15T00:55:50.036447237Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036548 env[1210]: time="2025-05-15T00:55:50.036459019Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036548 env[1210]: time="2025-05-15T00:55:50.036470521Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:55:50.036548 env[1210]: time="2025-05-15T00:55:50.036510847Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 00:55:50.036652 env[1210]: time="2025-05-15T00:55:50.036627806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:55:50.036724 bash[1230]: Updated "/home/core/.ssh/authorized_keys" May 15 00:55:50.036994 env[1210]: time="2025-05-15T00:55:50.036704981Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:55:50.037276 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 00:55:50.037801 env[1210]: time="2025-05-15T00:55:50.037767233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:55:50.037801 env[1210]: time="2025-05-15T00:55:50.037799293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 00:55:50.037878 env[1210]: time="2025-05-15T00:55:50.037812458Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:55:50.038081 env[1210]: time="2025-05-15T00:55:50.037873613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038081 env[1210]: time="2025-05-15T00:55:50.038072516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038084208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038097964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038109034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038130515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038141746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038152265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038183 env[1210]: time="2025-05-15T00:55:50.038164488Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038268213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038282780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038300834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038312336Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038328436Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038338765Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:55:50.038377 env[1210]: time="2025-05-15T00:55:50.038358993Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 00:55:50.038558 env[1210]: time="2025-05-15T00:55:50.038393468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 00:55:50.038635 env[1210]: time="2025-05-15T00:55:50.038574598Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:55:50.038635 env[1210]: time="2025-05-15T00:55:50.038629250Z" level=info msg="Connect containerd service" May 15 00:55:50.039277 env[1210]: time="2025-05-15T00:55:50.038664957Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:55:50.039327 env[1210]: time="2025-05-15T00:55:50.039304537Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:55:50.039533 env[1210]: time="2025-05-15T00:55:50.039457013Z" level=info msg="Start subscribing containerd event" May 15 00:55:50.039576 env[1210]: time="2025-05-15T00:55:50.039544317Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:55:50.039600 env[1210]: time="2025-05-15T00:55:50.039575856Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:55:50.039600 env[1210]: time="2025-05-15T00:55:50.039578621Z" level=info msg="Start recovering state" May 15 00:55:50.039660 systemd[1]: Started containerd.service. 
May 15 00:55:50.040383 env[1210]: time="2025-05-15T00:55:50.040145284Z" level=info msg="Start event monitor" May 15 00:55:50.041605 env[1210]: time="2025-05-15T00:55:50.040180269Z" level=info msg="Start snapshots syncer" May 15 00:55:50.041605 env[1210]: time="2025-05-15T00:55:50.041198970Z" level=info msg="Start cni network conf syncer for default" May 15 00:55:50.041605 env[1210]: time="2025-05-15T00:55:50.041166499Z" level=info msg="containerd successfully booted in 0.528047s" May 15 00:55:50.041605 env[1210]: time="2025-05-15T00:55:50.041211093Z" level=info msg="Start streaming server" May 15 00:55:50.297661 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:55:50.320265 systemd[1]: Finished sshd-keygen.service. May 15 00:55:50.322796 systemd[1]: Starting issuegen.service... May 15 00:55:50.328279 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:55:50.328440 systemd[1]: Finished issuegen.service. May 15 00:55:50.330717 systemd[1]: Starting systemd-user-sessions.service... May 15 00:55:50.338865 systemd[1]: Finished systemd-user-sessions.service. May 15 00:55:50.341403 systemd[1]: Started getty@tty1.service. May 15 00:55:50.343948 systemd[1]: Started serial-getty@ttyS0.service. May 15 00:55:50.345287 systemd[1]: Reached target getty.target. May 15 00:55:50.352033 systemd-networkd[1035]: eth0: Gained IPv6LL May 15 00:55:50.357617 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 00:55:50.359204 systemd[1]: Reached target network-online.target. May 15 00:55:50.363672 systemd[1]: Starting kubelet.service... May 15 00:55:51.314851 systemd[1]: Started kubelet.service. May 15 00:55:51.316301 systemd[1]: Reached target multi-user.target. May 15 00:55:51.318532 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 00:55:51.326247 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 00:55:51.326398 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 00:55:51.327557 systemd[1]: Startup finished in 862ms (kernel) + 4.583s (initrd) + 6.155s (userspace) = 11.600s. May 15 00:55:51.913157 kubelet[1260]: E0515 00:55:51.913104 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:55:51.914964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:55:51.915077 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:55:51.915300 systemd[1]: kubelet.service: Consumed 1.470s CPU time. May 15 00:55:53.290476 systemd[1]: Created slice system-sshd.slice. May 15 00:55:53.291460 systemd[1]: Started sshd@0-10.0.0.130:22-10.0.0.1:59502.service. May 15 00:55:53.332220 sshd[1270]: Accepted publickey for core from 10.0.0.1 port 59502 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.333890 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.341120 systemd[1]: Created slice user-500.slice. May 15 00:55:53.342019 systemd[1]: Starting user-runtime-dir@500.service... May 15 00:55:53.343410 systemd-logind[1202]: New session 1 of user core. May 15 00:55:53.350620 systemd[1]: Finished user-runtime-dir@500.service. May 15 00:55:53.351997 systemd[1]: Starting user@500.service... 
May 15 00:55:53.355079 (systemd)[1273]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.420605 systemd[1273]: Queued start job for default target default.target. May 15 00:55:53.421070 systemd[1273]: Reached target paths.target. May 15 00:55:53.421095 systemd[1273]: Reached target sockets.target. May 15 00:55:53.421111 systemd[1273]: Reached target timers.target. May 15 00:55:53.421125 systemd[1273]: Reached target basic.target. May 15 00:55:53.421168 systemd[1273]: Reached target default.target. May 15 00:55:53.421198 systemd[1273]: Startup finished in 59ms. May 15 00:55:53.421323 systemd[1]: Started user@500.service. May 15 00:55:53.422327 systemd[1]: Started session-1.scope. May 15 00:55:53.473807 systemd[1]: Started sshd@1-10.0.0.130:22-10.0.0.1:59508.service. May 15 00:55:53.512631 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 59508 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.513882 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.517237 systemd-logind[1202]: New session 2 of user core. May 15 00:55:53.518352 systemd[1]: Started session-2.scope. May 15 00:55:53.570173 sshd[1282]: pam_unix(sshd:session): session closed for user core May 15 00:55:53.572520 systemd[1]: sshd@1-10.0.0.130:22-10.0.0.1:59508.service: Deactivated successfully. May 15 00:55:53.573000 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:55:53.573386 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. May 15 00:55:53.574288 systemd[1]: Started sshd@2-10.0.0.130:22-10.0.0.1:59516.service. May 15 00:55:53.574812 systemd-logind[1202]: Removed session 2. May 15 00:55:53.609180 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 59516 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.610200 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.613177 systemd-logind[1202]: New session 3 of user core. May 15 00:55:53.613893 systemd[1]: Started session-3.scope. May 15 00:55:53.662031 sshd[1288]: pam_unix(sshd:session): session closed for user core May 15 00:55:53.665053 systemd[1]: sshd@2-10.0.0.130:22-10.0.0.1:59516.service: Deactivated successfully. May 15 00:55:53.665555 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:55:53.666043 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. May 15 00:55:53.666878 systemd[1]: Started sshd@3-10.0.0.130:22-10.0.0.1:59526.service. May 15 00:55:53.667543 systemd-logind[1202]: Removed session 3. May 15 00:55:53.701421 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 59526 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.702364 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.705360 systemd-logind[1202]: New session 4 of user core. May 15 00:55:53.706153 systemd[1]: Started session-4.scope. May 15 00:55:53.762069 sshd[1295]: pam_unix(sshd:session): session closed for user core May 15 00:55:53.765148 systemd[1]: Started sshd@4-10.0.0.130:22-10.0.0.1:59542.service. May 15 00:55:53.765593 systemd[1]: sshd@3-10.0.0.130:22-10.0.0.1:59526.service: Deactivated successfully. May 15 00:55:53.766183 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:55:53.766638 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. 
May 15 00:55:53.767359 systemd-logind[1202]: Removed session 4. May 15 00:55:53.800599 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 59542 ssh2: RSA SHA256:ssM3xoNo8tNgrxXV0NldYD5bf8Pu5BYi5TbjlfdpxM4 May 15 00:55:53.801672 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:55:53.804581 systemd-logind[1202]: New session 5 of user core. May 15 00:55:53.805298 systemd[1]: Started session-5.scope. May 15 00:55:53.860154 sudo[1304]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:55:53.860417 sudo[1304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 00:55:53.873036 systemd[1]: Starting coreos-metadata.service... May 15 00:55:53.879199 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:55:53.879332 systemd[1]: Finished coreos-metadata.service. May 15 00:55:54.538286 systemd[1]: Stopped kubelet.service. May 15 00:55:54.538478 systemd[1]: kubelet.service: Consumed 1.470s CPU time. May 15 00:55:54.540344 systemd[1]: Starting kubelet.service... May 15 00:55:54.560907 systemd[1]: Reloading. May 15 00:55:54.635703 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2025-05-15T00:55:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:55:54.635734 /usr/lib/systemd/system-generators/torcx-generator[1364]: time="2025-05-15T00:55:54Z" level=info msg="torcx already run" May 15 00:55:55.207788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:55:55.207804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:55:55.224286 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:55:55.292725 systemd[1]: Started kubelet.service. May 15 00:55:55.296390 systemd[1]: Stopping kubelet.service... May 15 00:55:55.297407 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:55:55.297585 systemd[1]: Stopped kubelet.service. May 15 00:55:55.298937 systemd[1]: Starting kubelet.service... May 15 00:55:55.370447 systemd[1]: Started kubelet.service. May 15 00:55:55.417007 kubelet[1411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:55:55.417007 kubelet[1411]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:55:55.417007 kubelet[1411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:55:55.418173 kubelet[1411]: I0515 00:55:55.418125 1411 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:55:55.725990 kubelet[1411]: I0515 00:55:55.725930 1411 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:55:55.725990 kubelet[1411]: I0515 00:55:55.725980 1411 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:55:55.726259 kubelet[1411]: I0515 00:55:55.726236 1411 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:55:55.756538 kubelet[1411]: I0515 00:55:55.756488 1411 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:55:55.765811 kubelet[1411]: E0515 00:55:55.765767 1411 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:55:55.765811 kubelet[1411]: I0515 00:55:55.765799 1411 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:55:55.770150 kubelet[1411]: I0515 00:55:55.770115 1411 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:55:55.770212 kubelet[1411]: I0515 00:55:55.770199 1411 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:55:55.770342 kubelet[1411]: I0515 00:55:55.770308 1411 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:55:55.770503 kubelet[1411]: I0515 00:55:55.770336 1411 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:55:55.770593 kubelet[1411]: I0515 00:55:55.770505 1411 topology_manager.go:138] "Creating 
topology manager with none policy" May 15 00:55:55.770593 kubelet[1411]: I0515 00:55:55.770514 1411 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:55:55.770643 kubelet[1411]: I0515 00:55:55.770607 1411 state_mem.go:36] "Initialized new in-memory state store" May 15 00:55:55.778827 kubelet[1411]: I0515 00:55:55.778791 1411 kubelet.go:408] "Attempting to sync node with API server" May 15 00:55:55.778827 kubelet[1411]: I0515 00:55:55.778813 1411 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:55:55.779013 kubelet[1411]: I0515 00:55:55.778864 1411 kubelet.go:314] "Adding apiserver pod source" May 15 00:55:55.779013 kubelet[1411]: I0515 00:55:55.778883 1411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:55:55.779013 kubelet[1411]: E0515 00:55:55.778935 1411 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:55:55.811360 kubelet[1411]: E0515 00:55:55.811315 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:55:55.817508 kubelet[1411]: I0515 00:55:55.817488 1411 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 00:55:55.818178 kubelet[1411]: W0515 00:55:55.818151 1411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.130" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 15 00:55:55.818224 kubelet[1411]: E0515 00:55:55.818203 1411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.130\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 15 00:55:55.818357 kubelet[1411]: W0515 00:55:55.818322 1411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 15 00:55:55.818480 kubelet[1411]: E0515 00:55:55.818364 1411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 15 00:55:55.818885 kubelet[1411]: I0515 00:55:55.818869 1411 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:55:55.819390 kubelet[1411]: W0515 00:55:55.819374 1411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 15 00:55:55.820400 kubelet[1411]: I0515 00:55:55.820376 1411 server.go:1269] "Started kubelet" May 15 00:55:55.821424 kubelet[1411]: I0515 00:55:55.821363 1411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:55:55.821635 kubelet[1411]: I0515 00:55:55.821602 1411 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:55:55.821752 kubelet[1411]: I0515 00:55:55.821728 1411 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:55:55.823438 kubelet[1411]: I0515 00:55:55.823410 1411 server.go:460] "Adding debug handlers to kubelet server" May 15 00:55:55.824567 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 15 00:55:55.824686 kubelet[1411]: I0515 00:55:55.824670 1411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:55:55.825300 kubelet[1411]: I0515 00:55:55.825281 1411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:55:55.825453 kubelet[1411]: I0515 00:55:55.825437 1411 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:55:55.825571 kubelet[1411]: E0515 00:55:55.825537 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:55.826091 kubelet[1411]: I0515 00:55:55.825897 1411 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:55:55.826091 kubelet[1411]: I0515 00:55:55.825958 1411 reconciler.go:26] "Reconciler: start to sync state" May 15 00:55:55.830428 kubelet[1411]: I0515 00:55:55.830407 1411 factory.go:221] Registration of the systemd container factory successfully May 15 00:55:55.830535 kubelet[1411]: I0515 00:55:55.830508 1411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:55:55.831065 kubelet[1411]: E0515 00:55:55.831043 1411 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:55:55.835888 kubelet[1411]: I0515 00:55:55.835866 1411 factory.go:221] Registration of the containerd container factory successfully May 15 00:55:55.838936 kubelet[1411]: E0515 00:55:55.838041 1411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.130.183f8d50262bf0d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.130,UID:10.0.0.130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.130,},FirstTimestamp:2025-05-15 00:55:55.820355797 +0000 UTC m=+0.446826721,LastTimestamp:2025-05-15 00:55:55.820355797 +0000 UTC m=+0.446826721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.130,}" May 15 00:55:55.839239 kubelet[1411]: W0515 00:55:55.839224 1411 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 15 00:55:55.839332 kubelet[1411]: E0515 00:55:55.839311 1411 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 15 00:55:55.839474 kubelet[1411]: E0515 00:55:55.839455 1411 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.130\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 15 00:55:55.843296 kubelet[1411]: E0515 00:55:55.843195 1411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.130.183f8d5026cedb25 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.130,UID:10.0.0.130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.130,},FirstTimestamp:2025-05-15 00:55:55.831032613 +0000 UTC m=+0.457503527,LastTimestamp:2025-05-15 00:55:55.831032613 +0000 UTC m=+0.457503527,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.130,}" May 15 00:55:55.844102 kubelet[1411]: I0515 00:55:55.844085 1411 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:55:55.844102 kubelet[1411]: I0515 00:55:55.844096 1411 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:55:55.844151 kubelet[1411]: I0515 00:55:55.844112 1411 state_mem.go:36] "Initialized new in-memory state store" May 15 00:55:55.858393 kubelet[1411]: E0515 00:55:55.858290 1411 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.130.183f8d50278af16c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.130,UID:10.0.0.130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.130 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.130,},FirstTimestamp:2025-05-15 00:55:55.843359084 +0000 UTC m=+0.469830008,LastTimestamp:2025-05-15 00:55:55.843359084 +0000 UTC m=+0.469830008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.130,}" May 15 00:55:55.862014 kubelet[1411]: E0515 00:55:55.861892 1411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.130.183f8d50278b0100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.130,UID:10.0.0.130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.130 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.130,},FirstTimestamp:2025-05-15 00:55:55.843363072 +0000 UTC m=+0.469833996,LastTimestamp:2025-05-15 00:55:55.843363072 +0000 UTC m=+0.469833996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.130,}" May 15 00:55:55.865422 kubelet[1411]: E0515 00:55:55.865310 1411 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.130.183f8d50278b0a46 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.130,UID:10.0.0.130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.130 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.130,},FirstTimestamp:2025-05-15 00:55:55.843365446 +0000 UTC m=+0.469836360,LastTimestamp:2025-05-15 00:55:55.843365446 +0000 UTC m=+0.469836360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.130,}" May 15 00:55:55.925934 kubelet[1411]: E0515 00:55:55.925889 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.026353 kubelet[1411]: E0515 00:55:56.026269 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.127159 kubelet[1411]: E0515 00:55:56.127109 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.137175 kubelet[1411]: E0515 00:55:56.137108 1411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.130\" not found" node="10.0.0.130" May 15 00:55:56.227385 kubelet[1411]: E0515 00:55:56.227309 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"10.0.0.130\" not found" May 15 00:55:56.328330 kubelet[1411]: E0515 00:55:56.328204 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.428660 kubelet[1411]: E0515 00:55:56.428626 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.529220 kubelet[1411]: E0515 00:55:56.529173 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.629689 kubelet[1411]: E0515 00:55:56.629656 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.691004 kubelet[1411]: I0515 00:55:56.690960 1411 policy_none.go:49] "None policy: Start" May 15 00:55:56.691825 kubelet[1411]: I0515 00:55:56.691776 1411 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:55:56.691825 kubelet[1411]: I0515 00:55:56.691802 1411 state_mem.go:35] "Initializing new in-memory state store" May 15 00:55:56.729395 kubelet[1411]: I0515 00:55:56.729348 1411 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 15 00:55:56.730472 kubelet[1411]: E0515 00:55:56.730428 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.731780 kubelet[1411]: I0515 00:55:56.731747 1411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:55:56.732929 kubelet[1411]: I0515 00:55:56.732903 1411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:55:56.732984 kubelet[1411]: I0515 00:55:56.732952 1411 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:55:56.732984 kubelet[1411]: I0515 00:55:56.732981 1411 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:55:56.733070 kubelet[1411]: E0515 00:55:56.733032 1411 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:55:56.738776 systemd[1]: Created slice kubepods.slice. May 15 00:55:56.742463 systemd[1]: Created slice kubepods-besteffort.slice. May 15 00:55:56.748724 systemd[1]: Created slice kubepods-burstable.slice. 
May 15 00:55:56.749831 kubelet[1411]: I0515 00:55:56.749792 1411 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:55:56.749990 kubelet[1411]: I0515 00:55:56.749966 1411 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:55:56.750030 kubelet[1411]: I0515 00:55:56.749985 1411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:55:56.750367 kubelet[1411]: I0515 00:55:56.750347 1411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:55:56.751775 kubelet[1411]: E0515 00:55:56.751753 1411 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.130\" not found" May 15 00:55:56.811882 kubelet[1411]: E0515 00:55:56.811805 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:55:56.852421 kubelet[1411]: I0515 00:55:56.852381 1411 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.130" May 15 00:55:56.856374 kubelet[1411]: I0515 00:55:56.856343 1411 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.130" May 15 00:55:56.856417 kubelet[1411]: E0515 00:55:56.856377 1411 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.130\": node \"10.0.0.130\" not found" May 15 00:55:56.866421 kubelet[1411]: E0515 00:55:56.866391 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:56.899946 sudo[1304]: pam_unix(sudo:session): session closed for user root May 15 00:55:56.902904 sshd[1300]: pam_unix(sshd:session): session closed for user core May 15 00:55:56.906223 systemd[1]: sshd@4-10.0.0.130:22-10.0.0.1:59542.service: Deactivated successfully. May 15 00:55:56.906988 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:55:56.907671 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. May 15 00:55:56.908406 systemd-logind[1202]: Removed session 5. May 15 00:55:56.967484 kubelet[1411]: E0515 00:55:56.967439 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:57.067997 kubelet[1411]: E0515 00:55:57.067943 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:57.168373 kubelet[1411]: E0515 00:55:57.168257 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:57.268999 kubelet[1411]: E0515 00:55:57.268962 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:57.369872 kubelet[1411]: E0515 00:55:57.369821 1411 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.130\" not found" May 15 00:55:57.470551 kubelet[1411]: I0515 00:55:57.470459 1411 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 15 00:55:57.471016 kubelet[1411]: I0515 00:55:57.470955 1411 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 15 00:55:57.471044 env[1210]: time="2025-05-15T00:55:57.470739274Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 00:55:57.812227 kubelet[1411]: I0515 00:55:57.812117 1411 apiserver.go:52] "Watching apiserver" May 15 00:55:57.812227 kubelet[1411]: E0515 00:55:57.812133 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:55:57.820971 systemd[1]: Created slice kubepods-besteffort-podeb021d19_af19_4caf_b13c_bcb51b140db1.slice. May 15 00:55:57.826848 kubelet[1411]: I0515 00:55:57.826815 1411 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:55:57.830546 systemd[1]: Created slice kubepods-burstable-pode9482f76_4826_48c5_994a_c20dc03ba1e5.slice. May 15 00:55:57.837377 kubelet[1411]: I0515 00:55:57.837354 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-net\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837378 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb021d19-af19-4caf-b13c-bcb51b140db1-lib-modules\") pod \"kube-proxy-86cfd\" (UID: \"eb021d19-af19-4caf-b13c-bcb51b140db1\") " pod="kube-system/kube-proxy-86cfd" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837397 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6d5k\" (UniqueName: \"kubernetes.io/projected/eb021d19-af19-4caf-b13c-bcb51b140db1-kube-api-access-g6d5k\") pod \"kube-proxy-86cfd\" (UID: \"eb021d19-af19-4caf-b13c-bcb51b140db1\") " pod="kube-system/kube-proxy-86cfd" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837411 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-bpf-maps\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837423 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-xtables-lock\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837435 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-config-path\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837449 kubelet[1411]: I0515 00:55:57.837448 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-hostproc\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837460 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-hubble-tls\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837496 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb021d19-af19-4caf-b13c-bcb51b140db1-xtables-lock\") pod \"kube-proxy-86cfd\" (UID: \"eb021d19-af19-4caf-b13c-bcb51b140db1\") " pod="kube-system/kube-proxy-86cfd" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837519 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cni-path\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837541 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-etc-cni-netd\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837554 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-lib-modules\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837628 kubelet[1411]: I0515 00:55:57.837566 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-kernel\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837777 kubelet[1411]: I0515 00:55:57.837585 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4c7\" (UniqueName: \"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-kube-api-access-nn4c7\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837777 kubelet[1411]: I0515 00:55:57.837604 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb021d19-af19-4caf-b13c-bcb51b140db1-kube-proxy\") pod \"kube-proxy-86cfd\" (UID: \"eb021d19-af19-4caf-b13c-bcb51b140db1\") " pod="kube-system/kube-proxy-86cfd" May 15 00:55:57.837777 kubelet[1411]: I0515 00:55:57.837622 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-run\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.837777 kubelet[1411]: I0515 00:55:57.837639 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-cgroup\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" 
May 15 00:55:57.837777 kubelet[1411]: I0515 00:55:57.837659 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9482f76-4826-48c5-994a-c20dc03ba1e5-clustermesh-secrets\") pod \"cilium-v7lmr\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " pod="kube-system/cilium-v7lmr" May 15 00:55:57.938392 kubelet[1411]: I0515 00:55:57.938353 1411 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 15 00:55:58.129004 kubelet[1411]: E0515 00:55:58.128971 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:55:58.129463 env[1210]: time="2025-05-15T00:55:58.129418298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86cfd,Uid:eb021d19-af19-4caf-b13c-bcb51b140db1,Namespace:kube-system,Attempt:0,}" May 15 00:55:58.137411 kubelet[1411]: E0515 00:55:58.137375 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:55:58.137754 env[1210]: time="2025-05-15T00:55:58.137713678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7lmr,Uid:e9482f76-4826-48c5-994a-c20dc03ba1e5,Namespace:kube-system,Attempt:0,}" May 15 00:55:58.754785 env[1210]: time="2025-05-15T00:55:58.754735905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.757599 env[1210]: time="2025-05-15T00:55:58.757567626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.760938 env[1210]: time="2025-05-15T00:55:58.760908393Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.761975 env[1210]: time="2025-05-15T00:55:58.761943544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.763623 env[1210]: time="2025-05-15T00:55:58.763597767Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.766550 env[1210]: time="2025-05-15T00:55:58.766508637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.769246 env[1210]: time="2025-05-15T00:55:58.769216496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.771173 env[1210]: time="2025-05-15T00:55:58.771145815Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:55:58.784744 env[1210]: time="2025-05-15T00:55:58.784675723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:55:58.784744 env[1210]: time="2025-05-15T00:55:58.784719746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:55:58.784744 env[1210]: time="2025-05-15T00:55:58.784732891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:55:58.784970 env[1210]: time="2025-05-15T00:55:58.784924751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845 pid=1467 runtime=io.containerd.runc.v2 May 15 00:55:58.793333 env[1210]: time="2025-05-15T00:55:58.793262069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:55:58.793333 env[1210]: time="2025-05-15T00:55:58.793305440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:55:58.793333 env[1210]: time="2025-05-15T00:55:58.793318164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:55:58.793500 env[1210]: time="2025-05-15T00:55:58.793459249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4604b76b636071ad701566477c4459c71e14449440aa59c71e3bad6326ca590a pid=1484 runtime=io.containerd.runc.v2 May 15 00:55:58.798087 systemd[1]: Started cri-containerd-ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845.scope. May 15 00:55:58.813507 kubelet[1411]: E0515 00:55:58.813471 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:55:58.820705 systemd[1]: Started cri-containerd-4604b76b636071ad701566477c4459c71e14449440aa59c71e3bad6326ca590a.scope. May 15 00:55:58.924802 env[1210]: time="2025-05-15T00:55:58.924751015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v7lmr,Uid:e9482f76-4826-48c5-994a-c20dc03ba1e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\"" May 15 00:55:58.925931 kubelet[1411]: E0515 00:55:58.925905 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:55:58.927598 env[1210]: time="2025-05-15T00:55:58.927555345Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:55:58.944295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3941500513.mount: Deactivated successfully. 
May 15 00:55:58.947109 env[1210]: time="2025-05-15T00:55:58.947081811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86cfd,Uid:eb021d19-af19-4caf-b13c-bcb51b140db1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4604b76b636071ad701566477c4459c71e14449440aa59c71e3bad6326ca590a\"" May 15 00:55:58.947702 kubelet[1411]: E0515 00:55:58.947682 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:55:59.813934 kubelet[1411]: E0515 00:55:59.813901 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:00.814339 kubelet[1411]: E0515 00:56:00.814294 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:01.815253 kubelet[1411]: E0515 00:56:01.815213 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:02.815732 kubelet[1411]: E0515 00:56:02.815664 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:03.816598 kubelet[1411]: E0515 00:56:03.816534 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:04.685179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694950944.mount: Deactivated successfully. May 15 00:56:04.816885 kubelet[1411]: E0515 00:56:04.816824 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:05.817088 kubelet[1411]: E0515 00:56:05.817034 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:06.817337 kubelet[1411]: E0515 00:56:06.817287 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:07.817972 kubelet[1411]: E0515 00:56:07.817915 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:08.818919 kubelet[1411]: E0515 00:56:08.818876 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:09.176501 env[1210]: time="2025-05-15T00:56:09.176444571Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:09.178431 env[1210]: time="2025-05-15T00:56:09.178399128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:09.180277 env[1210]: time="2025-05-15T00:56:09.180254398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:09.180967 env[1210]: time="2025-05-15T00:56:09.180901091Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 15 00:56:09.182399 env[1210]: time="2025-05-15T00:56:09.182370196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 00:56:09.183711 env[1210]: time="2025-05-15T00:56:09.183673591Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:56:09.200963 env[1210]: time="2025-05-15T00:56:09.200892639Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\"" May 15 00:56:09.201575 env[1210]: time="2025-05-15T00:56:09.201547487Z" level=info msg="StartContainer for \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\"" May 15 00:56:09.225384 systemd[1]: Started cri-containerd-d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae.scope. May 15 00:56:09.288794 env[1210]: time="2025-05-15T00:56:09.288736499Z" level=info msg="StartContainer for \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\" returns successfully" May 15 00:56:09.300566 systemd[1]: cri-containerd-d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae.scope: Deactivated successfully. May 15 00:56:09.747329 env[1210]: time="2025-05-15T00:56:09.747279835Z" level=info msg="shim disconnected" id=d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae May 15 00:56:09.747329 env[1210]: time="2025-05-15T00:56:09.747323056Z" level=warning msg="cleaning up after shim disconnected" id=d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae namespace=k8s.io May 15 00:56:09.747329 env[1210]: time="2025-05-15T00:56:09.747334778Z" level=info msg="cleaning up dead shim" May 15 00:56:09.753901 kubelet[1411]: E0515 00:56:09.753872 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:09.755799 env[1210]: time="2025-05-15T00:56:09.755759681Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1588 runtime=io.containerd.runc.v2\n" May 15 00:56:09.819778 kubelet[1411]: E0515 00:56:09.819749 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:10.193829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae-rootfs.mount: Deactivated successfully. 
May 15 00:56:10.756693 kubelet[1411]: E0515 00:56:10.756658 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:10.758236 env[1210]: time="2025-05-15T00:56:10.758186838Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:56:10.782347 env[1210]: time="2025-05-15T00:56:10.782287684Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\"" May 15 00:56:10.783153 env[1210]: time="2025-05-15T00:56:10.783109876Z" level=info msg="StartContainer for \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\"" May 15 00:56:10.808441 systemd[1]: Started cri-containerd-5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70.scope. May 15 00:56:10.820669 kubelet[1411]: E0515 00:56:10.820623 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:11.035287 env[1210]: time="2025-05-15T00:56:11.034993738Z" level=info msg="StartContainer for \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\" returns successfully" May 15 00:56:11.036717 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:56:11.036971 systemd[1]: Stopped systemd-sysctl.service. May 15 00:56:11.037175 systemd[1]: Stopping systemd-sysctl.service... May 15 00:56:11.038895 systemd[1]: Starting systemd-sysctl.service... May 15 00:56:11.041288 systemd[1]: cri-containerd-5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70.scope: Deactivated successfully. May 15 00:56:11.045873 systemd[1]: Finished systemd-sysctl.service. May 15 00:56:11.177338 env[1210]: time="2025-05-15T00:56:11.177289374Z" level=info msg="shim disconnected" id=5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70 May 15 00:56:11.177338 env[1210]: time="2025-05-15T00:56:11.177334158Z" level=warning msg="cleaning up after shim disconnected" id=5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70 namespace=k8s.io May 15 00:56:11.177338 env[1210]: time="2025-05-15T00:56:11.177342534Z" level=info msg="cleaning up dead shim" May 15 00:56:11.186133 env[1210]: time="2025-05-15T00:56:11.186096033Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1652 runtime=io.containerd.runc.v2\n" May 15 00:56:11.194165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70-rootfs.mount: Deactivated successfully. May 15 00:56:11.194268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393226852.mount: Deactivated successfully. 
May 15 00:56:11.759364 kubelet[1411]: E0515 00:56:11.759332 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:11.760773 env[1210]: time="2025-05-15T00:56:11.760737566Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:56:11.807229 env[1210]: time="2025-05-15T00:56:11.807176643Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\"" May 15 00:56:11.807807 env[1210]: time="2025-05-15T00:56:11.807747604Z" level=info msg="StartContainer for \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\"" May 15 00:56:11.821194 kubelet[1411]: E0515 00:56:11.821160 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:11.831925 systemd[1]: Started cri-containerd-fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c.scope. May 15 00:56:11.954164 env[1210]: time="2025-05-15T00:56:11.954115508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.956121 env[1210]: time="2025-05-15T00:56:11.955988081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.957371 env[1210]: time="2025-05-15T00:56:11.957341199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.958920 env[1210]: time="2025-05-15T00:56:11.958894322Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:11.959278 env[1210]: time="2025-05-15T00:56:11.959237385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 15 00:56:11.961512 env[1210]: time="2025-05-15T00:56:11.961475614Z" level=info msg="CreateContainer within sandbox \"4604b76b636071ad701566477c4459c71e14449440aa59c71e3bad6326ca590a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:56:11.974728 systemd[1]: cri-containerd-fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c.scope: Deactivated successfully. 
May 15 00:56:11.975708 env[1210]: time="2025-05-15T00:56:11.975666291Z" level=info msg="StartContainer for \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\" returns successfully" May 15 00:56:11.979278 env[1210]: time="2025-05-15T00:56:11.979194379Z" level=info msg="CreateContainer within sandbox \"4604b76b636071ad701566477c4459c71e14449440aa59c71e3bad6326ca590a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"93894f995c3e0008ef5dfd3a304b0331587102bcf878f94bbf8d18e387468374\"" May 15 00:56:11.979792 env[1210]: time="2025-05-15T00:56:11.979714855Z" level=info msg="StartContainer for \"93894f995c3e0008ef5dfd3a304b0331587102bcf878f94bbf8d18e387468374\"" May 15 00:56:12.000409 systemd[1]: Started cri-containerd-93894f995c3e0008ef5dfd3a304b0331587102bcf878f94bbf8d18e387468374.scope. May 15 00:56:12.195404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c-rootfs.mount: Deactivated successfully. May 15 00:56:12.201419 env[1210]: time="2025-05-15T00:56:12.201328689Z" level=info msg="StartContainer for \"93894f995c3e0008ef5dfd3a304b0331587102bcf878f94bbf8d18e387468374\" returns successfully" May 15 00:56:12.202745 env[1210]: time="2025-05-15T00:56:12.202688039Z" level=info msg="shim disconnected" id=fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c May 15 00:56:12.202930 env[1210]: time="2025-05-15T00:56:12.202753332Z" level=warning msg="cleaning up after shim disconnected" id=fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c namespace=k8s.io May 15 00:56:12.202930 env[1210]: time="2025-05-15T00:56:12.202781525Z" level=info msg="cleaning up dead shim" May 15 00:56:12.219174 env[1210]: time="2025-05-15T00:56:12.219112487Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1748 runtime=io.containerd.runc.v2\n" May 15 00:56:12.762356 kubelet[1411]: E0515 00:56:12.762324 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:12.764019 kubelet[1411]: E0515 00:56:12.763987 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:12.765400 env[1210]: time="2025-05-15T00:56:12.765369679Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:56:12.770952 kubelet[1411]: I0515 00:56:12.770903 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86cfd" podStartSLOduration=3.759022259 podStartE2EDuration="16.770874595s" podCreationTimestamp="2025-05-15 00:55:56 +0000 UTC" firstStartedPulling="2025-05-15 00:55:58.948154032 +0000 UTC m=+3.574624956" lastFinishedPulling="2025-05-15 00:56:11.960006368 +0000 UTC m=+16.586477292" observedRunningTime="2025-05-15 00:56:12.770737277 +0000 UTC m=+17.397208201" watchObservedRunningTime="2025-05-15 00:56:12.770874595 +0000 UTC m=+17.397345519" May 15 00:56:12.779157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814421596.mount: Deactivated successfully. 
May 15 00:56:12.784452 env[1210]: time="2025-05-15T00:56:12.784419531Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\"" May 15 00:56:12.784948 env[1210]: time="2025-05-15T00:56:12.784913678Z" level=info msg="StartContainer for \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\"" May 15 00:56:12.805530 systemd[1]: Started cri-containerd-795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37.scope. May 15 00:56:12.821525 kubelet[1411]: E0515 00:56:12.821467 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:12.836234 systemd[1]: cri-containerd-795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37.scope: Deactivated successfully. May 15 00:56:12.838365 env[1210]: time="2025-05-15T00:56:12.838322137Z" level=info msg="StartContainer for \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\" returns successfully" May 15 00:56:12.857012 env[1210]: time="2025-05-15T00:56:12.856959636Z" level=info msg="shim disconnected" id=795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37 May 15 00:56:12.857012 env[1210]: time="2025-05-15T00:56:12.857005632Z" level=warning msg="cleaning up after shim disconnected" id=795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37 namespace=k8s.io May 15 00:56:12.857012 env[1210]: time="2025-05-15T00:56:12.857013557Z" level=info msg="cleaning up dead shim" May 15 00:56:12.863028 env[1210]: time="2025-05-15T00:56:12.862996920Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1935 runtime=io.containerd.runc.v2\n" May 15 00:56:13.194222 systemd[1]: run-containerd-runc-k8s.io-795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37-runc.pbMtFZ.mount: Deactivated successfully. May 15 00:56:13.194343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37-rootfs.mount: Deactivated successfully. 
May 15 00:56:13.768182 kubelet[1411]: E0515 00:56:13.768155 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:13.768392 kubelet[1411]: E0515 00:56:13.768155 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:13.769734 env[1210]: time="2025-05-15T00:56:13.769689817Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:56:13.791253 env[1210]: time="2025-05-15T00:56:13.791202278Z" level=info msg="CreateContainer within sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\"" May 15 00:56:13.791899 env[1210]: time="2025-05-15T00:56:13.791867025Z" level=info msg="StartContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\"" May 15 00:56:13.808156 systemd[1]: Started cri-containerd-1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039.scope. May 15 00:56:13.822345 kubelet[1411]: E0515 00:56:13.822300 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:13.867447 env[1210]: time="2025-05-15T00:56:13.867389734Z" level=info msg="StartContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" returns successfully" May 15 00:56:13.945355 kubelet[1411]: I0515 00:56:13.944756 1411 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:56:14.189860 kernel: Initializing XFRM netlink socket May 15 00:56:14.772087 kubelet[1411]: E0515 00:56:14.772032 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:14.823000 kubelet[1411]: E0515 00:56:14.822946 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:15.773930 kubelet[1411]: E0515 00:56:15.773878 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:15.779278 kubelet[1411]: E0515 00:56:15.779243 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:15.824035 kubelet[1411]: E0515 00:56:15.823963 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:15.835675 systemd-networkd[1035]: cilium_host: Link UP May 15 00:56:15.838115 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 00:56:15.838153 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 00:56:15.835814 systemd-networkd[1035]: cilium_net: Link UP May 15 00:56:15.836683 systemd-networkd[1035]: cilium_net: Gained carrier May 15 00:56:15.838099 systemd-networkd[1035]: cilium_host: Gained carrier May 15 00:56:15.923324 systemd-networkd[1035]: cilium_vxlan: Link UP May 15 00:56:15.923333 systemd-networkd[1035]: 
cilium_vxlan: Gained carrier May 15 00:56:16.145945 kernel: NET: Registered PF_ALG protocol family May 15 00:56:16.336023 systemd-networkd[1035]: cilium_net: Gained IPv6LL May 15 00:56:16.527973 systemd-networkd[1035]: cilium_host: Gained IPv6LL May 15 00:56:16.691523 systemd-networkd[1035]: lxc_health: Link UP May 15 00:56:16.699856 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 00:56:16.700915 systemd-networkd[1035]: lxc_health: Gained carrier May 15 00:56:16.774732 kubelet[1411]: E0515 00:56:16.774691 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:16.825079 kubelet[1411]: E0515 00:56:16.824967 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:17.679007 systemd-networkd[1035]: cilium_vxlan: Gained IPv6LL May 15 00:56:17.807059 systemd-networkd[1035]: lxc_health: Gained IPv6LL May 15 00:56:17.825910 kubelet[1411]: E0515 00:56:17.825863 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:18.139905 kubelet[1411]: E0515 00:56:18.139602 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:18.181708 kubelet[1411]: I0515 00:56:18.181643 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v7lmr" podStartSLOduration=11.926655906 podStartE2EDuration="22.181625868s" podCreationTimestamp="2025-05-15 00:55:56 +0000 UTC" firstStartedPulling="2025-05-15 00:55:58.9271103 +0000 UTC m=+3.553581224" lastFinishedPulling="2025-05-15 00:56:09.182080262 +0000 UTC m=+13.808551186" observedRunningTime="2025-05-15 00:56:14.917184889 +0000 UTC m=+19.543655823" watchObservedRunningTime="2025-05-15 00:56:18.181625868 +0000 UTC m=+22.808096782" May 15 00:56:18.630454 systemd[1]: Created slice kubepods-besteffort-pod35025d11_9506_4c03_98bd_f26f7f6f81ab.slice. 
May 15 00:56:18.777041 kubelet[1411]: I0515 00:56:18.777007 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx59f\" (UniqueName: \"kubernetes.io/projected/35025d11-9506-4c03-98bd-f26f7f6f81ab-kube-api-access-xx59f\") pod \"nginx-deployment-8587fbcb89-dtp9w\" (UID: \"35025d11-9506-4c03-98bd-f26f7f6f81ab\") " pod="default/nginx-deployment-8587fbcb89-dtp9w" May 15 00:56:18.778064 kubelet[1411]: E0515 00:56:18.778049 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:18.826151 kubelet[1411]: E0515 00:56:18.826123 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:19.232936 env[1210]: time="2025-05-15T00:56:19.232887774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-dtp9w,Uid:35025d11-9506-4c03-98bd-f26f7f6f81ab,Namespace:default,Attempt:0,}" May 15 00:56:19.295184 systemd-networkd[1035]: lxc0187caa0aa36: Link UP May 15 00:56:19.303736 kernel: eth0: renamed from tmp7d194 May 15 00:56:19.309619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:56:19.309701 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0187caa0aa36: link becomes ready May 15 00:56:19.309131 systemd-networkd[1035]: lxc0187caa0aa36: Gained carrier May 15 00:56:19.826983 kubelet[1411]: E0515 00:56:19.826914 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:20.844468 kubelet[1411]: E0515 00:56:20.844411 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:20.985135 env[1210]: time="2025-05-15T00:56:20.985056492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:20.985135 env[1210]: time="2025-05-15T00:56:20.985102018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:20.985135 env[1210]: time="2025-05-15T00:56:20.985111936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:20.985537 env[1210]: time="2025-05-15T00:56:20.985429592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d1943f1fb108b7d7d0210a023d8fac3869c05e77d68a287a8de0e19013473e0 pid=2481 runtime=io.containerd.runc.v2 May 15 00:56:21.026372 systemd[1]: Started cri-containerd-7d1943f1fb108b7d7d0210a023d8fac3869c05e77d68a287a8de0e19013473e0.scope. 
May 15 00:56:21.039997 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:56:21.062290 env[1210]: time="2025-05-15T00:56:21.062221061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-dtp9w,Uid:35025d11-9506-4c03-98bd-f26f7f6f81ab,Namespace:default,Attempt:0,} returns sandbox id \"7d1943f1fb108b7d7d0210a023d8fac3869c05e77d68a287a8de0e19013473e0\"" May 15 00:56:21.063755 env[1210]: time="2025-05-15T00:56:21.063723619Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 15 00:56:21.070940 systemd-networkd[1035]: lxc0187caa0aa36: Gained IPv6LL May 15 00:56:21.845179 kubelet[1411]: E0515 00:56:21.845124 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:22.845707 kubelet[1411]: E0515 00:56:22.845658 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:23.846556 kubelet[1411]: E0515 00:56:23.846500 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:24.403067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459696372.mount: Deactivated successfully. May 15 00:56:24.847114 kubelet[1411]: E0515 00:56:24.846979 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:25.847611 kubelet[1411]: E0515 00:56:25.847566 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:26.477176 env[1210]: time="2025-05-15T00:56:26.477117257Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:26.479226 env[1210]: time="2025-05-15T00:56:26.479169718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:26.480717 env[1210]: time="2025-05-15T00:56:26.480687887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:26.482215 env[1210]: time="2025-05-15T00:56:26.482190926Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:26.483076 env[1210]: time="2025-05-15T00:56:26.483030224Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 15 00:56:26.485458 env[1210]: time="2025-05-15T00:56:26.485430552Z" level=info msg="CreateContainer within sandbox \"7d1943f1fb108b7d7d0210a023d8fac3869c05e77d68a287a8de0e19013473e0\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 15 00:56:26.498094 env[1210]: time="2025-05-15T00:56:26.498046661Z" level=info msg="CreateContainer within sandbox \"7d1943f1fb108b7d7d0210a023d8fac3869c05e77d68a287a8de0e19013473e0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"03e2d083c66de7840676828323772f4914480b23dce8ba1bb73f7e19693f2f13\"" May 15 00:56:26.498609 env[1210]: time="2025-05-15T00:56:26.498564032Z" level=info msg="StartContainer for \"03e2d083c66de7840676828323772f4914480b23dce8ba1bb73f7e19693f2f13\"" May 15 00:56:26.527278 systemd[1]: run-containerd-runc-k8s.io-03e2d083c66de7840676828323772f4914480b23dce8ba1bb73f7e19693f2f13-runc.kWMVGO.mount: Deactivated successfully. May 15 00:56:26.529919 systemd[1]: Started cri-containerd-03e2d083c66de7840676828323772f4914480b23dce8ba1bb73f7e19693f2f13.scope. May 15 00:56:26.612435 env[1210]: time="2025-05-15T00:56:26.612364809Z" level=info msg="StartContainer for \"03e2d083c66de7840676828323772f4914480b23dce8ba1bb73f7e19693f2f13\" returns successfully" May 15 00:56:26.835059 kubelet[1411]: I0515 00:56:26.834918 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-dtp9w" podStartSLOduration=3.414239694 podStartE2EDuration="8.834899708s" podCreationTimestamp="2025-05-15 00:56:18 +0000 UTC" firstStartedPulling="2025-05-15 00:56:21.063392288 +0000 UTC m=+25.689863212" lastFinishedPulling="2025-05-15 00:56:26.484052302 +0000 UTC m=+31.110523226" observedRunningTime="2025-05-15 00:56:26.834700056 +0000 UTC m=+31.461170980" watchObservedRunningTime="2025-05-15 00:56:26.834899708 +0000 UTC m=+31.461370632" May 15 00:56:26.847788 kubelet[1411]: E0515 00:56:26.847720 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:27.848477 kubelet[1411]: E0515 00:56:27.848432 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:28.849645 kubelet[1411]: E0515 00:56:28.849571 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:29.849797 kubelet[1411]: E0515 00:56:29.849722 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:30.631432 systemd[1]: Created slice kubepods-besteffort-pod18e638e8_5140_4602_8b69_92435501a130.slice. 
May 15 00:56:30.738964 kubelet[1411]: I0515 00:56:30.738930 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7t2\" (UniqueName: \"kubernetes.io/projected/18e638e8-5140-4602-8b69-92435501a130-kube-api-access-xf7t2\") pod \"nfs-server-provisioner-0\" (UID: \"18e638e8-5140-4602-8b69-92435501a130\") " pod="default/nfs-server-provisioner-0" May 15 00:56:30.738964 kubelet[1411]: I0515 00:56:30.738966 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/18e638e8-5140-4602-8b69-92435501a130-data\") pod \"nfs-server-provisioner-0\" (UID: \"18e638e8-5140-4602-8b69-92435501a130\") " pod="default/nfs-server-provisioner-0" May 15 00:56:30.850613 kubelet[1411]: E0515 00:56:30.850573 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:30.934740 env[1210]: time="2025-05-15T00:56:30.934622978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:18e638e8-5140-4602-8b69-92435501a130,Namespace:default,Attempt:0,}" May 15 00:56:31.531327 systemd-networkd[1035]: lxccf3a88a309dc: Link UP May 15 00:56:31.539856 kernel: eth0: renamed from tmp6a3c6 May 15 00:56:31.554488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:56:31.554598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccf3a88a309dc: link becomes ready May 15 00:56:31.554813 systemd-networkd[1035]: lxccf3a88a309dc: Gained carrier May 15 00:56:31.762494 env[1210]: time="2025-05-15T00:56:31.762424700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:31.762494 env[1210]: time="2025-05-15T00:56:31.762461802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:31.762494 env[1210]: time="2025-05-15T00:56:31.762471991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:31.762683 env[1210]: time="2025-05-15T00:56:31.762642896Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a3c6f0dc9dd94c5cc02dec5ab9dc8bd79fe67fae6a4dc8e8cd80b28ec88a994 pid=2612 runtime=io.containerd.runc.v2 May 15 00:56:31.781330 systemd[1]: Started cri-containerd-6a3c6f0dc9dd94c5cc02dec5ab9dc8bd79fe67fae6a4dc8e8cd80b28ec88a994.scope. 
May 15 00:56:31.805757 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:56:31.830455 env[1210]: time="2025-05-15T00:56:31.830409459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:18e638e8-5140-4602-8b69-92435501a130,Namespace:default,Attempt:0,} returns sandbox id \"6a3c6f0dc9dd94c5cc02dec5ab9dc8bd79fe67fae6a4dc8e8cd80b28ec88a994\"" May 15 00:56:31.831800 env[1210]: time="2025-05-15T00:56:31.831779007Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 15 00:56:31.851200 kubelet[1411]: E0515 00:56:31.851179 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:32.783092 systemd-networkd[1035]: lxccf3a88a309dc: Gained IPv6LL May 15 00:56:32.852199 kubelet[1411]: E0515 00:56:32.852123 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:33.853015 kubelet[1411]: E0515 00:56:33.852954 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:34.691996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461449687.mount: Deactivated successfully. May 15 00:56:34.728142 update_engine[1205]: I0515 00:56:34.727731 1205 update_attempter.cc:509] Updating boot flags... May 15 00:56:34.853153 kubelet[1411]: E0515 00:56:34.853100 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:35.779680 kubelet[1411]: E0515 00:56:35.779610 1411 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:35.853659 kubelet[1411]: E0515 00:56:35.853609 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:36.854046 kubelet[1411]: E0515 00:56:36.853994 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:37.194921 env[1210]: time="2025-05-15T00:56:37.194868375Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:37.197277 env[1210]: time="2025-05-15T00:56:37.197245421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:37.199113 env[1210]: time="2025-05-15T00:56:37.199089175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:37.200824 env[1210]: time="2025-05-15T00:56:37.200780822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:37.201568 env[1210]: time="2025-05-15T00:56:37.201538227Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 15 00:56:37.203747 env[1210]: time="2025-05-15T00:56:37.203712037Z" level=info msg="CreateContainer within sandbox \"6a3c6f0dc9dd94c5cc02dec5ab9dc8bd79fe67fae6a4dc8e8cd80b28ec88a994\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 15 00:56:37.220010 env[1210]: time="2025-05-15T00:56:37.219964672Z" level=info msg="CreateContainer within sandbox \"6a3c6f0dc9dd94c5cc02dec5ab9dc8bd79fe67fae6a4dc8e8cd80b28ec88a994\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4e0a13429b70fe42b53b26fc4f47b480e2dc293afe6db110c2baf38efc140cdb\"" May 15 00:56:37.220484 env[1210]: time="2025-05-15T00:56:37.220463898Z" level=info msg="StartContainer for \"4e0a13429b70fe42b53b26fc4f47b480e2dc293afe6db110c2baf38efc140cdb\"" May 15 00:56:37.235947 systemd[1]: run-containerd-runc-k8s.io-4e0a13429b70fe42b53b26fc4f47b480e2dc293afe6db110c2baf38efc140cdb-runc.LAOCF1.mount: Deactivated successfully. May 15 00:56:37.237488 systemd[1]: Started cri-containerd-4e0a13429b70fe42b53b26fc4f47b480e2dc293afe6db110c2baf38efc140cdb.scope. May 15 00:56:37.263588 env[1210]: time="2025-05-15T00:56:37.263540395Z" level=info msg="StartContainer for \"4e0a13429b70fe42b53b26fc4f47b480e2dc293afe6db110c2baf38efc140cdb\" returns successfully" May 15 00:56:37.854559 kubelet[1411]: E0515 00:56:37.854493 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:38.855387 kubelet[1411]: E0515 00:56:38.855314 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:39.856406 kubelet[1411]: E0515 00:56:39.856351 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:40.856645 kubelet[1411]: E0515 00:56:40.856606 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:41.857749 kubelet[1411]: E0515 00:56:41.857674 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:42.858176 kubelet[1411]: E0515 00:56:42.858083 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:43.858708 kubelet[1411]: E0515 00:56:43.858641 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:44.859744 kubelet[1411]: E0515 00:56:44.859655 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:45.860359 kubelet[1411]: E0515 00:56:45.860295 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:46.852567 kubelet[1411]: I0515 00:56:46.852506 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.481452936 podStartE2EDuration="16.852487225s" podCreationTimestamp="2025-05-15 00:56:30 +0000 UTC" firstStartedPulling="2025-05-15 00:56:31.831508191 +0000 UTC m=+36.457979115" lastFinishedPulling="2025-05-15 00:56:37.20254248 +0000 UTC m=+41.829013404" observedRunningTime="2025-05-15 00:56:37.87493265 +0000 UTC m=+42.501403584" watchObservedRunningTime="2025-05-15 
00:56:46.852487225 +0000 UTC m=+51.478958159" May 15 00:56:46.857509 systemd[1]: Created slice kubepods-besteffort-podaa91c5bb_b67b_42d8_8784_3d90a0587b5d.slice. May 15 00:56:46.861265 kubelet[1411]: E0515 00:56:46.861242 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:47.009623 kubelet[1411]: I0515 00:56:47.009551 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7jx\" (UniqueName: \"kubernetes.io/projected/aa91c5bb-b67b-42d8-8784-3d90a0587b5d-kube-api-access-6z7jx\") pod \"test-pod-1\" (UID: \"aa91c5bb-b67b-42d8-8784-3d90a0587b5d\") " pod="default/test-pod-1" May 15 00:56:47.009623 kubelet[1411]: I0515 00:56:47.009606 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c779b95d-bba9-45a2-9b74-e3dc791e9e51\" (UniqueName: \"kubernetes.io/nfs/aa91c5bb-b67b-42d8-8784-3d90a0587b5d-pvc-c779b95d-bba9-45a2-9b74-e3dc791e9e51\") pod \"test-pod-1\" (UID: \"aa91c5bb-b67b-42d8-8784-3d90a0587b5d\") " pod="default/test-pod-1" May 15 00:56:47.135869 kernel: FS-Cache: Loaded May 15 00:56:47.179319 kernel: RPC: Registered named UNIX socket transport module. May 15 00:56:47.179480 kernel: RPC: Registered udp transport module. May 15 00:56:47.179505 kernel: RPC: Registered tcp transport module. May 15 00:56:47.179529 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 15 00:56:47.234867 kernel: FS-Cache: Netfs 'nfs' registered for caching May 15 00:56:47.426563 kernel: NFS: Registering the id_resolver key type May 15 00:56:47.426686 kernel: Key type id_resolver registered May 15 00:56:47.426705 kernel: Key type id_legacy registered May 15 00:56:47.450646 nfsidmap[2749]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 15 00:56:47.453277 nfsidmap[2752]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 15 00:56:47.760550 env[1210]: time="2025-05-15T00:56:47.760432498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aa91c5bb-b67b-42d8-8784-3d90a0587b5d,Namespace:default,Attempt:0,}" May 15 00:56:47.786250 systemd-networkd[1035]: lxc10e04fff10a2: Link UP May 15 00:56:47.792873 kernel: eth0: renamed from tmpf3756 May 15 00:56:47.800516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:56:47.800651 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc10e04fff10a2: link becomes ready May 15 00:56:47.800630 systemd-networkd[1035]: lxc10e04fff10a2: Gained carrier May 15 00:56:47.862296 kubelet[1411]: E0515 00:56:47.862257 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:48.087441 env[1210]: time="2025-05-15T00:56:48.087278484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:48.087441 env[1210]: time="2025-05-15T00:56:48.087327657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:48.087441 env[1210]: time="2025-05-15T00:56:48.087367211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:48.087894 env[1210]: time="2025-05-15T00:56:48.087818803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3756fa60a67e3bf65818b4f82febfeb26b0c8e4645a77713f04ef6fe8dc6cf5 pid=2789 runtime=io.containerd.runc.v2 May 15 00:56:48.097245 systemd[1]: Started cri-containerd-f3756fa60a67e3bf65818b4f82febfeb26b0c8e4645a77713f04ef6fe8dc6cf5.scope. May 15 00:56:48.110653 systemd-resolved[1145]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:56:48.133343 env[1210]: time="2025-05-15T00:56:48.133280355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aa91c5bb-b67b-42d8-8784-3d90a0587b5d,Namespace:default,Attempt:0,} returns sandbox id \"f3756fa60a67e3bf65818b4f82febfeb26b0c8e4645a77713f04ef6fe8dc6cf5\"" May 15 00:56:48.135171 env[1210]: time="2025-05-15T00:56:48.135135942Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 15 00:56:48.507005 env[1210]: time="2025-05-15T00:56:48.506927390Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:48.509490 env[1210]: time="2025-05-15T00:56:48.509438013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:48.511470 env[1210]: time="2025-05-15T00:56:48.511435910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:48.513431 env[1210]: time="2025-05-15T00:56:48.513369966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:56:48.513985 env[1210]: time="2025-05-15T00:56:48.513945922Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 15 00:56:48.516443 env[1210]: time="2025-05-15T00:56:48.516394559Z" level=info msg="CreateContainer within sandbox \"f3756fa60a67e3bf65818b4f82febfeb26b0c8e4645a77713f04ef6fe8dc6cf5\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 15 00:56:48.535992 env[1210]: time="2025-05-15T00:56:48.535915263Z" level=info msg="CreateContainer within sandbox \"f3756fa60a67e3bf65818b4f82febfeb26b0c8e4645a77713f04ef6fe8dc6cf5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"dc8b88100d7fe2cfb0702a31407922514aa91b8f3fc22bd57a4a29e48da113be\"" May 15 00:56:48.536497 env[1210]: time="2025-05-15T00:56:48.536447016Z" level=info msg="StartContainer for \"dc8b88100d7fe2cfb0702a31407922514aa91b8f3fc22bd57a4a29e48da113be\"" May 15 00:56:48.551621 systemd[1]: Started cri-containerd-dc8b88100d7fe2cfb0702a31407922514aa91b8f3fc22bd57a4a29e48da113be.scope. 
May 15 00:56:48.578167 env[1210]: time="2025-05-15T00:56:48.578084999Z" level=info msg="StartContainer for \"dc8b88100d7fe2cfb0702a31407922514aa91b8f3fc22bd57a4a29e48da113be\" returns successfully" May 15 00:56:48.842624 kubelet[1411]: I0515 00:56:48.842489 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.462273471 podStartE2EDuration="18.842471257s" podCreationTimestamp="2025-05-15 00:56:30 +0000 UTC" firstStartedPulling="2025-05-15 00:56:48.134806291 +0000 UTC m=+52.761277205" lastFinishedPulling="2025-05-15 00:56:48.515004067 +0000 UTC m=+53.141474991" observedRunningTime="2025-05-15 00:56:48.842390304 +0000 UTC m=+53.468861228" watchObservedRunningTime="2025-05-15 00:56:48.842471257 +0000 UTC m=+53.468942171" May 15 00:56:48.862581 kubelet[1411]: E0515 00:56:48.862532 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:49.615042 systemd-networkd[1035]: lxc10e04fff10a2: Gained IPv6LL May 15 00:56:49.863501 kubelet[1411]: E0515 00:56:49.863432 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:50.864085 kubelet[1411]: E0515 00:56:50.864040 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:51.865119 kubelet[1411]: E0515 00:56:51.865085 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:52.866169 kubelet[1411]: E0515 00:56:52.866111 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:53.177054 env[1210]: time="2025-05-15T00:56:53.176918603Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:56:53.181684 env[1210]: time="2025-05-15T00:56:53.181649831Z" level=info msg="StopContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" with timeout 2 (s)" May 15 00:56:53.182007 env[1210]: time="2025-05-15T00:56:53.181984931Z" level=info msg="Stop container \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" with signal terminated" May 15 00:56:53.187755 systemd-networkd[1035]: lxc_health: Link DOWN May 15 00:56:53.187765 systemd-networkd[1035]: lxc_health: Lost carrier May 15 00:56:53.224264 systemd[1]: cri-containerd-1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039.scope: Deactivated successfully. May 15 00:56:53.224571 systemd[1]: cri-containerd-1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039.scope: Consumed 6.305s CPU time. May 15 00:56:53.242110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039-rootfs.mount: Deactivated successfully. 
May 15 00:56:53.503239 env[1210]: time="2025-05-15T00:56:53.503104437Z" level=info msg="shim disconnected" id=1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039 May 15 00:56:53.503239 env[1210]: time="2025-05-15T00:56:53.503163388Z" level=warning msg="cleaning up after shim disconnected" id=1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039 namespace=k8s.io May 15 00:56:53.503239 env[1210]: time="2025-05-15T00:56:53.503180470Z" level=info msg="cleaning up dead shim" May 15 00:56:53.509132 env[1210]: time="2025-05-15T00:56:53.509086390Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" May 15 00:56:53.513490 env[1210]: time="2025-05-15T00:56:53.513453942Z" level=info msg="StopContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" returns successfully" May 15 00:56:53.514175 env[1210]: time="2025-05-15T00:56:53.514150373Z" level=info msg="StopPodSandbox for \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\"" May 15 00:56:53.514235 env[1210]: time="2025-05-15T00:56:53.514208804Z" level=info msg="Container to stop \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:56:53.514235 env[1210]: time="2025-05-15T00:56:53.514221337Z" level=info msg="Container to stop \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:56:53.514298 env[1210]: time="2025-05-15T00:56:53.514231466Z" level=info msg="Container to stop \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:56:53.514298 env[1210]: time="2025-05-15T00:56:53.514241806Z" level=info msg="Container to stop \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:56:53.514298 env[1210]: time="2025-05-15T00:56:53.514251163Z" level=info msg="Container to stop \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:56:53.516056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845-shm.mount: Deactivated successfully. May 15 00:56:53.518909 systemd[1]: cri-containerd-ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845.scope: Deactivated successfully. May 15 00:56:53.532910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845-rootfs.mount: Deactivated successfully. 
May 15 00:56:53.537262 env[1210]: time="2025-05-15T00:56:53.537199659Z" level=info msg="shim disconnected" id=ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845 May 15 00:56:53.537262 env[1210]: time="2025-05-15T00:56:53.537242490Z" level=warning msg="cleaning up after shim disconnected" id=ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845 namespace=k8s.io May 15 00:56:53.537262 env[1210]: time="2025-05-15T00:56:53.537252829Z" level=info msg="cleaning up dead shim" May 15 00:56:53.544223 env[1210]: time="2025-05-15T00:56:53.544162047Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2949 runtime=io.containerd.runc.v2\n" May 15 00:56:53.544544 env[1210]: time="2025-05-15T00:56:53.544503069Z" level=info msg="TearDown network for sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" successfully" May 15 00:56:53.544544 env[1210]: time="2025-05-15T00:56:53.544534869Z" level=info msg="StopPodSandbox for \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" returns successfully" May 15 00:56:53.645444 kubelet[1411]: I0515 00:56:53.645402 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.645444 kubelet[1411]: I0515 00:56:53.645316 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-cgroup\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645472 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-net\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645503 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-bpf-maps\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645543 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-config-path\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645556 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-kernel\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645573 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn4c7\" (UniqueName: 
\"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-kube-api-access-nn4c7\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645636 kubelet[1411]: I0515 00:56:53.645584 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-run\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645601 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9482f76-4826-48c5-994a-c20dc03ba1e5-clustermesh-secrets\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645587 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645600 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645614 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-hubble-tls\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645704 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-hostproc\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645778 kubelet[1411]: I0515 00:56:53.645724 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cni-path\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645737 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-lib-modules\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645749 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-etc-cni-netd\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645768 1411 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-xtables-lock\") pod \"e9482f76-4826-48c5-994a-c20dc03ba1e5\" (UID: \"e9482f76-4826-48c5-994a-c20dc03ba1e5\") " May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645816 1411 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-net\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645859 1411 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-bpf-maps\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645867 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-cgroup\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.645949 kubelet[1411]: I0515 00:56:53.645889 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646111 kubelet[1411]: I0515 00:56:53.645906 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646111 kubelet[1411]: I0515 00:56:53.645918 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646111 kubelet[1411]: I0515 00:56:53.645931 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646111 kubelet[1411]: I0515 00:56:53.645943 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646111 kubelet[1411]: I0515 00:56:53.646075 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.646235 kubelet[1411]: I0515 00:56:53.646118 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:53.649288 kubelet[1411]: I0515 00:56:53.648121 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:56:53.649777 kubelet[1411]: I0515 00:56:53.649752 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:56:53.649977 kubelet[1411]: I0515 00:56:53.649808 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9482f76-4826-48c5-994a-c20dc03ba1e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:56:53.649977 kubelet[1411]: I0515 00:56:53.649909 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-kube-api-access-nn4c7" (OuterVolumeSpecName: "kube-api-access-nn4c7") pod "e9482f76-4826-48c5-994a-c20dc03ba1e5" (UID: "e9482f76-4826-48c5-994a-c20dc03ba1e5"). InnerVolumeSpecName "kube-api-access-nn4c7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:56:53.650199 systemd[1]: var-lib-kubelet-pods-e9482f76\x2d4826\x2d48c5\x2d994a\x2dc20dc03ba1e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746512 1411 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-host-proc-sys-kernel\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746541 1411 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nn4c7\" (UniqueName: \"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-kube-api-access-nn4c7\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746553 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-run\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746560 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e9482f76-4826-48c5-994a-c20dc03ba1e5-cilium-config-path\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746569 1411 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-cni-path\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746577 1411 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-lib-modules\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746561 kubelet[1411]: I0515 00:56:53.746584 1411 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-etc-cni-netd\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746977 kubelet[1411]: I0515 00:56:53.746592 1411 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e9482f76-4826-48c5-994a-c20dc03ba1e5-clustermesh-secrets\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746977 kubelet[1411]: I0515 00:56:53.746599 1411 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e9482f76-4826-48c5-994a-c20dc03ba1e5-hubble-tls\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746977 kubelet[1411]: I0515 00:56:53.746605 1411 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-hostproc\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.746977 kubelet[1411]: I0515 00:56:53.746611 1411 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9482f76-4826-48c5-994a-c20dc03ba1e5-xtables-lock\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:53.845649 kubelet[1411]: I0515 00:56:53.845553 1411 scope.go:117] "RemoveContainer" containerID="1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039" May 15 00:56:53.847210 env[1210]: time="2025-05-15T00:56:53.847182137Z" level=info msg="RemoveContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\"" May 15 00:56:53.849445 systemd[1]: Removed slice kubepods-burstable-pode9482f76_4826_48c5_994a_c20dc03ba1e5.slice. 
May 15 00:56:53.849547 systemd[1]: kubepods-burstable-pode9482f76_4826_48c5_994a_c20dc03ba1e5.slice: Consumed 6.591s CPU time. May 15 00:56:53.850612 env[1210]: time="2025-05-15T00:56:53.850577600Z" level=info msg="RemoveContainer for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" returns successfully" May 15 00:56:53.850982 kubelet[1411]: I0515 00:56:53.850958 1411 scope.go:117] "RemoveContainer" containerID="795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37" May 15 00:56:53.852105 env[1210]: time="2025-05-15T00:56:53.852072083Z" level=info msg="RemoveContainer for \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\"" May 15 00:56:53.856581 env[1210]: time="2025-05-15T00:56:53.856542290Z" level=info msg="RemoveContainer for \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\" returns successfully" May 15 00:56:53.856774 kubelet[1411]: I0515 00:56:53.856750 1411 scope.go:117] "RemoveContainer" containerID="fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c" May 15 00:56:53.857776 env[1210]: time="2025-05-15T00:56:53.857748801Z" level=info msg="RemoveContainer for \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\"" May 15 00:56:53.861531 env[1210]: time="2025-05-15T00:56:53.861491327Z" level=info msg="RemoveContainer for \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\" returns successfully" May 15 00:56:53.861720 kubelet[1411]: I0515 00:56:53.861684 1411 scope.go:117] "RemoveContainer" containerID="5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70" May 15 00:56:53.863009 env[1210]: time="2025-05-15T00:56:53.862981573Z" level=info msg="RemoveContainer for \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\"" May 15 00:56:53.865911 env[1210]: time="2025-05-15T00:56:53.865873758Z" level=info msg="RemoveContainer for \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\" returns successfully" May 15 00:56:53.866102 kubelet[1411]: I0515 00:56:53.866047 1411 scope.go:117] "RemoveContainer" containerID="d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae" May 15 00:56:53.866197 kubelet[1411]: E0515 00:56:53.866171 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:53.867023 env[1210]: time="2025-05-15T00:56:53.866995300Z" level=info msg="RemoveContainer for \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\"" May 15 00:56:53.869577 env[1210]: time="2025-05-15T00:56:53.869548246Z" level=info msg="RemoveContainer for \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\" returns successfully" May 15 00:56:53.869676 kubelet[1411]: I0515 00:56:53.869659 1411 scope.go:117] "RemoveContainer" containerID="1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039" May 15 00:56:53.869944 env[1210]: time="2025-05-15T00:56:53.869850726Z" level=error msg="ContainerStatus for \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\": not found" May 15 00:56:53.870042 kubelet[1411]: E0515 00:56:53.870024 1411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\": not found" 
containerID="1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039" May 15 00:56:53.870121 kubelet[1411]: I0515 00:56:53.870052 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039"} err="failed to get container status \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ebe342329b334e16e35eef85040f47e2adde89cca0774c116fcd1ef399a3039\": not found" May 15 00:56:53.870150 kubelet[1411]: I0515 00:56:53.870123 1411 scope.go:117] "RemoveContainer" containerID="795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37" May 15 00:56:53.870296 env[1210]: time="2025-05-15T00:56:53.870254085Z" level=error msg="ContainerStatus for \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\": not found" May 15 00:56:53.870385 kubelet[1411]: E0515 00:56:53.870364 1411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\": not found" containerID="795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37" May 15 00:56:53.870429 kubelet[1411]: I0515 00:56:53.870385 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37"} err="failed to get container status \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\": rpc error: code = NotFound desc = an error occurred when try to find container \"795ca63a7b7926507908c94b7242f137d8b2f94f7999e3e94643f3aebd406a37\": not found" May 15 00:56:53.870429 kubelet[1411]: I0515 00:56:53.870396 1411 scope.go:117] "RemoveContainer" containerID="fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c" May 15 00:56:53.870573 env[1210]: time="2025-05-15T00:56:53.870537128Z" level=error msg="ContainerStatus for \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\": not found" May 15 00:56:53.870668 kubelet[1411]: E0515 00:56:53.870651 1411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\": not found" containerID="fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c" May 15 00:56:53.870716 kubelet[1411]: I0515 00:56:53.870669 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c"} err="failed to get container status \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\": rpc error: code = NotFound desc = an error occurred when try to find container \"fd60ccc813b2356e27585d78815f19b7ee58e6e7df4bf7893f04873da608551c\": not found" May 15 00:56:53.870716 kubelet[1411]: I0515 00:56:53.870682 1411 scope.go:117] "RemoveContainer" containerID="5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70" May 15 
00:56:53.871022 env[1210]: time="2025-05-15T00:56:53.870951098Z" level=error msg="ContainerStatus for \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\": not found" May 15 00:56:53.871126 kubelet[1411]: E0515 00:56:53.871109 1411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\": not found" containerID="5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70" May 15 00:56:53.871170 kubelet[1411]: I0515 00:56:53.871126 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70"} err="failed to get container status \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\": rpc error: code = NotFound desc = an error occurred when try to find container \"5aa40f4ddac66a4ead09c1dd1fc28b88fb143fdb4c52a3be1bc9c0bf446def70\": not found" May 15 00:56:53.871170 kubelet[1411]: I0515 00:56:53.871137 1411 scope.go:117] "RemoveContainer" containerID="d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae" May 15 00:56:53.871307 env[1210]: time="2025-05-15T00:56:53.871261442Z" level=error msg="ContainerStatus for \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\": not found" May 15 00:56:53.871436 kubelet[1411]: E0515 00:56:53.871417 1411 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\": not found" containerID="d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae" May 15 00:56:53.871467 kubelet[1411]: I0515 00:56:53.871440 1411 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae"} err="failed to get container status \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6108b5ef743b66c66b8a80c790a8bc812730b612300400b4e2fabb12438adae\": not found" May 15 00:56:54.161426 systemd[1]: var-lib-kubelet-pods-e9482f76\x2d4826\x2d48c5\x2d994a\x2dc20dc03ba1e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnn4c7.mount: Deactivated successfully. May 15 00:56:54.161541 systemd[1]: var-lib-kubelet-pods-e9482f76\x2d4826\x2d48c5\x2d994a\x2dc20dc03ba1e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 00:56:54.735897 kubelet[1411]: I0515 00:56:54.735813 1411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" path="/var/lib/kubelet/pods/e9482f76-4826-48c5-994a-c20dc03ba1e5/volumes" May 15 00:56:54.866490 kubelet[1411]: E0515 00:56:54.866442 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:55.569184 kubelet[1411]: E0515 00:56:55.569136 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="mount-cgroup" May 15 00:56:55.569184 kubelet[1411]: E0515 00:56:55.569164 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="mount-bpf-fs" May 15 00:56:55.569184 kubelet[1411]: E0515 00:56:55.569171 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="clean-cilium-state" May 15 00:56:55.569184 kubelet[1411]: E0515 00:56:55.569177 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="cilium-agent" May 15 00:56:55.569184 kubelet[1411]: E0515 00:56:55.569184 1411 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="apply-sysctl-overwrites" May 15 00:56:55.569484 kubelet[1411]: I0515 00:56:55.569208 1411 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9482f76-4826-48c5-994a-c20dc03ba1e5" containerName="cilium-agent" May 15 00:56:55.573696 systemd[1]: Created slice kubepods-besteffort-pod87253ceb_0205_40e4_8cb1_c16072d1eeba.slice. May 15 00:56:55.586817 systemd[1]: Created slice kubepods-burstable-pod25a94aaf_0bca_40dd_93d8_9a5610d06256.slice. 
May 15 00:56:55.736082 kubelet[1411]: E0515 00:56:55.736020 1411 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-f2p67 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-pcqpl" podUID="25a94aaf-0bca-40dd-93d8-9a5610d06256" May 15 00:56:55.757997 kubelet[1411]: I0515 00:56:55.757930 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cni-path\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.757997 kubelet[1411]: I0515 00:56:55.757973 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-kernel\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.757997 kubelet[1411]: I0515 00:56:55.757997 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/87253ceb-0205-40e4-8cb1-c16072d1eeba-cilium-config-path\") pod \"cilium-operator-5d85765b45-qdct2\" (UID: \"87253ceb-0205-40e4-8cb1-c16072d1eeba\") " pod="kube-system/cilium-operator-5d85765b45-qdct2" May 15 00:56:55.757997 kubelet[1411]: I0515 00:56:55.758017 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-run\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758333 kubelet[1411]: I0515 00:56:55.758033 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-hostproc\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758333 kubelet[1411]: I0515 00:56:55.758127 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-cgroup\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758333 kubelet[1411]: I0515 00:56:55.758169 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-etc-cni-netd\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758333 kubelet[1411]: I0515 00:56:55.758194 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-config-path\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 
00:56:55.758333 kubelet[1411]: I0515 00:56:55.758209 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-ipsec-secrets\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758333 kubelet[1411]: I0515 00:56:55.758233 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-bpf-maps\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758497 kubelet[1411]: I0515 00:56:55.758247 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-hubble-tls\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758497 kubelet[1411]: I0515 00:56:55.758264 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-net\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758497 kubelet[1411]: I0515 00:56:55.758303 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw95z\" (UniqueName: \"kubernetes.io/projected/87253ceb-0205-40e4-8cb1-c16072d1eeba-kube-api-access-gw95z\") pod \"cilium-operator-5d85765b45-qdct2\" (UID: \"87253ceb-0205-40e4-8cb1-c16072d1eeba\") " pod="kube-system/cilium-operator-5d85765b45-qdct2" May 15 00:56:55.758497 kubelet[1411]: I0515 00:56:55.758346 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-lib-modules\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758497 kubelet[1411]: I0515 00:56:55.758368 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-clustermesh-secrets\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758630 kubelet[1411]: I0515 00:56:55.758386 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2p67\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-kube-api-access-f2p67\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.758630 kubelet[1411]: I0515 00:56:55.758409 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-xtables-lock\") pod \"cilium-pcqpl\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " pod="kube-system/cilium-pcqpl" May 15 00:56:55.779301 kubelet[1411]: E0515 00:56:55.779209 1411 file.go:104] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:55.832926 env[1210]: time="2025-05-15T00:56:55.832798443Z" level=info msg="StopPodSandbox for \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\"" May 15 00:56:55.833297 env[1210]: time="2025-05-15T00:56:55.832907418Z" level=info msg="TearDown network for sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" successfully" May 15 00:56:55.833297 env[1210]: time="2025-05-15T00:56:55.832948896Z" level=info msg="StopPodSandbox for \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" returns successfully" May 15 00:56:55.833373 env[1210]: time="2025-05-15T00:56:55.833300788Z" level=info msg="RemovePodSandbox for \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\"" May 15 00:56:55.833373 env[1210]: time="2025-05-15T00:56:55.833326386Z" level=info msg="Forcibly stopping sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\"" May 15 00:56:55.833440 env[1210]: time="2025-05-15T00:56:55.833392701Z" level=info msg="TearDown network for sandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" successfully" May 15 00:56:55.836959 env[1210]: time="2025-05-15T00:56:55.836920651Z" level=info msg="RemovePodSandbox \"ea21b88827ecd9cc7453451447c0b4d79b2469876c2caabae0bb2fd864cc7845\" returns successfully" May 15 00:56:55.866824 kubelet[1411]: E0515 00:56:55.866780 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:55.960179 kubelet[1411]: I0515 00:56:55.960128 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-etc-cni-netd\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960193 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-bpf-maps\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960222 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-net\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960247 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-kernel\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960262 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960262 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960350 kubelet[1411]: I0515 00:56:55.960274 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cni-path\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960525 kubelet[1411]: I0515 00:56:55.960302 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cni-path" (OuterVolumeSpecName: "cni-path") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960525 kubelet[1411]: I0515 00:56:55.960319 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960525 kubelet[1411]: I0515 00:56:55.960324 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-hostproc\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960525 kubelet[1411]: I0515 00:56:55.960327 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960525 kubelet[1411]: I0515 00:56:55.960340 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-hostproc" (OuterVolumeSpecName: "hostproc") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960347 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-cgroup\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960367 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-lib-modules\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960384 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-xtables-lock\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960400 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-run\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960430 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960637 kubelet[1411]: I0515 00:56:55.960441 1411 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cni-path\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960457 1411 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-hostproc\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960466 1411 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-etc-cni-netd\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960476 1411 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-bpf-maps\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960486 1411 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-net\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960495 1411 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-host-proc-sys-kernel\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960443 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960787 kubelet[1411]: I0515 00:56:55.960464 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:55.960989 kubelet[1411]: I0515 00:56:55.960515 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:56:56.061740 kubelet[1411]: I0515 00:56:56.061686 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-hubble-tls\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:56.061740 kubelet[1411]: I0515 00:56:56.061735 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-config-path\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:56.061740 kubelet[1411]: I0515 00:56:56.061752 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-ipsec-secrets\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:56.061740 kubelet[1411]: I0515 00:56:56.061784 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2p67\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-kube-api-access-f2p67\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:56.062049 kubelet[1411]: I0515 00:56:56.061804 1411 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-clustermesh-secrets\") pod \"25a94aaf-0bca-40dd-93d8-9a5610d06256\" (UID: \"25a94aaf-0bca-40dd-93d8-9a5610d06256\") " May 15 00:56:56.062049 kubelet[1411]: I0515 00:56:56.061880 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-cgroup\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.062049 kubelet[1411]: I0515 00:56:56.061894 1411 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-lib-modules\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.062049 kubelet[1411]: I0515 00:56:56.061904 1411 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-xtables-lock\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.062049 kubelet[1411]: I0515 00:56:56.061917 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-run\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.063956 kubelet[1411]: I0515 00:56:56.063919 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:56:56.064722 kubelet[1411]: I0515 00:56:56.064688 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:56:56.064829 kubelet[1411]: I0515 00:56:56.064743 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:56:56.065423 kubelet[1411]: I0515 00:56:56.065398 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:56:56.066101 kubelet[1411]: I0515 00:56:56.066067 1411 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-kube-api-access-f2p67" (OuterVolumeSpecName: "kube-api-access-f2p67") pod "25a94aaf-0bca-40dd-93d8-9a5610d06256" (UID: "25a94aaf-0bca-40dd-93d8-9a5610d06256"). InnerVolumeSpecName "kube-api-access-f2p67". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:56:56.162586 kubelet[1411]: I0515 00:56:56.162528 1411 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-hubble-tls\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.162586 kubelet[1411]: I0515 00:56:56.162566 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-config-path\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.162586 kubelet[1411]: I0515 00:56:56.162579 1411 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-cilium-ipsec-secrets\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.162586 kubelet[1411]: I0515 00:56:56.162589 1411 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f2p67\" (UniqueName: \"kubernetes.io/projected/25a94aaf-0bca-40dd-93d8-9a5610d06256-kube-api-access-f2p67\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.162586 kubelet[1411]: I0515 00:56:56.162598 1411 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25a94aaf-0bca-40dd-93d8-9a5610d06256-clustermesh-secrets\") on node \"10.0.0.130\" DevicePath \"\"" May 15 00:56:56.175988 kubelet[1411]: E0515 00:56:56.175964 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:56.176553 env[1210]: time="2025-05-15T00:56:56.176493633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qdct2,Uid:87253ceb-0205-40e4-8cb1-c16072d1eeba,Namespace:kube-system,Attempt:0,}" May 15 00:56:56.364666 env[1210]: time="2025-05-15T00:56:56.364596396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:56.364666 env[1210]: time="2025-05-15T00:56:56.364631101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:56.364666 env[1210]: time="2025-05-15T00:56:56.364640499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:56.364896 env[1210]: time="2025-05-15T00:56:56.364793336Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60a8ad238cc97442938c815204e6a9da1ba54224880e02bcbf72eeb79f412933 pid=2981 runtime=io.containerd.runc.v2 May 15 00:56:56.375378 systemd[1]: Started cri-containerd-60a8ad238cc97442938c815204e6a9da1ba54224880e02bcbf72eeb79f412933.scope. 
May 15 00:56:56.405693 env[1210]: time="2025-05-15T00:56:56.405639107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qdct2,Uid:87253ceb-0205-40e4-8cb1-c16072d1eeba,Namespace:kube-system,Attempt:0,} returns sandbox id \"60a8ad238cc97442938c815204e6a9da1ba54224880e02bcbf72eeb79f412933\"" May 15 00:56:56.406378 kubelet[1411]: E0515 00:56:56.406349 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:56.407463 env[1210]: time="2025-05-15T00:56:56.407427182Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 00:56:56.738750 systemd[1]: Removed slice kubepods-burstable-pod25a94aaf_0bca_40dd_93d8_9a5610d06256.slice. May 15 00:56:56.759925 kubelet[1411]: E0515 00:56:56.759880 1411 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 00:56:56.867414 kubelet[1411]: E0515 00:56:56.867367 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:56.869925 systemd[1]: var-lib-kubelet-pods-25a94aaf\x2d0bca\x2d40dd\x2d93d8\x2d9a5610d06256-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 00:56:56.870026 systemd[1]: var-lib-kubelet-pods-25a94aaf\x2d0bca\x2d40dd\x2d93d8\x2d9a5610d06256-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:56:56.870094 systemd[1]: var-lib-kubelet-pods-25a94aaf\x2d0bca\x2d40dd\x2d93d8\x2d9a5610d06256-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:56:56.870162 systemd[1]: var-lib-kubelet-pods-25a94aaf\x2d0bca\x2d40dd\x2d93d8\x2d9a5610d06256-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df2p67.mount: Deactivated successfully. May 15 00:56:57.006357 systemd[1]: Created slice kubepods-burstable-podedeb3895_2c37_4f12_a74c_2e187a87c1d3.slice. 
May 15 00:56:57.170038 kubelet[1411]: I0515 00:56:57.169985 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-cilium-run\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170038 kubelet[1411]: I0515 00:56:57.170024 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edeb3895-2c37-4f12-a74c-2e187a87c1d3-clustermesh-secrets\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170236 kubelet[1411]: I0515 00:56:57.170082 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhdp\" (UniqueName: \"kubernetes.io/projected/edeb3895-2c37-4f12-a74c-2e187a87c1d3-kube-api-access-hfhdp\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170236 kubelet[1411]: I0515 00:56:57.170101 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edeb3895-2c37-4f12-a74c-2e187a87c1d3-hubble-tls\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170236 kubelet[1411]: I0515 00:56:57.170178 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/edeb3895-2c37-4f12-a74c-2e187a87c1d3-cilium-ipsec-secrets\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170236 kubelet[1411]: I0515 00:56:57.170229 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-host-proc-sys-net\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170247 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-host-proc-sys-kernel\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170264 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-bpf-maps\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170282 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-etc-cni-netd\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170302 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-cni-path\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170319 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-lib-modules\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170336 kubelet[1411]: I0515 00:56:57.170335 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-xtables-lock\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170473 kubelet[1411]: I0515 00:56:57.170355 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edeb3895-2c37-4f12-a74c-2e187a87c1d3-cilium-config-path\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170473 kubelet[1411]: I0515 00:56:57.170379 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-hostproc\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.170473 kubelet[1411]: I0515 00:56:57.170395 1411 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edeb3895-2c37-4f12-a74c-2e187a87c1d3-cilium-cgroup\") pod \"cilium-6phb5\" (UID: \"edeb3895-2c37-4f12-a74c-2e187a87c1d3\") " pod="kube-system/cilium-6phb5" May 15 00:56:57.315213 kubelet[1411]: E0515 00:56:57.315087 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:57.315914 env[1210]: time="2025-05-15T00:56:57.315825964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6phb5,Uid:edeb3895-2c37-4f12-a74c-2e187a87c1d3,Namespace:kube-system,Attempt:0,}" May 15 00:56:57.328546 env[1210]: time="2025-05-15T00:56:57.328456513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:56:57.328546 env[1210]: time="2025-05-15T00:56:57.328503772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:56:57.328546 env[1210]: time="2025-05-15T00:56:57.328514873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:56:57.328773 env[1210]: time="2025-05-15T00:56:57.328646741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d pid=3027 runtime=io.containerd.runc.v2 May 15 00:56:57.339455 systemd[1]: Started cri-containerd-7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d.scope. 
May 15 00:56:57.359830 env[1210]: time="2025-05-15T00:56:57.359780006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6phb5,Uid:edeb3895-2c37-4f12-a74c-2e187a87c1d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\"" May 15 00:56:57.360523 kubelet[1411]: E0515 00:56:57.360495 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:57.362714 env[1210]: time="2025-05-15T00:56:57.362676445Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 00:56:57.375734 env[1210]: time="2025-05-15T00:56:57.375664016Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda\"" May 15 00:56:57.376154 env[1210]: time="2025-05-15T00:56:57.376107841Z" level=info msg="StartContainer for \"8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda\"" May 15 00:56:57.389860 systemd[1]: Started cri-containerd-8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda.scope. May 15 00:56:57.417142 env[1210]: time="2025-05-15T00:56:57.417084654Z" level=info msg="StartContainer for \"8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda\" returns successfully" May 15 00:56:57.423858 systemd[1]: cri-containerd-8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda.scope: Deactivated successfully. May 15 00:56:57.452852 env[1210]: time="2025-05-15T00:56:57.452777005Z" level=info msg="shim disconnected" id=8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda May 15 00:56:57.452852 env[1210]: time="2025-05-15T00:56:57.452824134Z" level=warning msg="cleaning up after shim disconnected" id=8135b070668793a6bd37b52eaf4cf75a9b821c243147fb42d5de3a0815675eda namespace=k8s.io May 15 00:56:57.452852 env[1210]: time="2025-05-15T00:56:57.452845985Z" level=info msg="cleaning up dead shim" May 15 00:56:57.460789 env[1210]: time="2025-05-15T00:56:57.460727481Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3110 runtime=io.containerd.runc.v2\n" May 15 00:56:57.857069 kubelet[1411]: E0515 00:56:57.857025 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:57.858852 env[1210]: time="2025-05-15T00:56:57.858789795Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 00:56:57.868910 kubelet[1411]: E0515 00:56:57.868881 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:58.373380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092518440.mount: Deactivated successfully. 
May 15 00:56:58.375883 env[1210]: time="2025-05-15T00:56:58.375815516Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332\"" May 15 00:56:58.376513 env[1210]: time="2025-05-15T00:56:58.376470117Z" level=info msg="StartContainer for \"9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332\"" May 15 00:56:58.393725 systemd[1]: Started cri-containerd-9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332.scope. May 15 00:56:58.418825 env[1210]: time="2025-05-15T00:56:58.418769173Z" level=info msg="StartContainer for \"9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332\" returns successfully" May 15 00:56:58.424526 systemd[1]: cri-containerd-9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332.scope: Deactivated successfully. May 15 00:56:58.452356 env[1210]: time="2025-05-15T00:56:58.452289948Z" level=info msg="shim disconnected" id=9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332 May 15 00:56:58.452356 env[1210]: time="2025-05-15T00:56:58.452341355Z" level=warning msg="cleaning up after shim disconnected" id=9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332 namespace=k8s.io May 15 00:56:58.452356 env[1210]: time="2025-05-15T00:56:58.452350903Z" level=info msg="cleaning up dead shim" May 15 00:56:58.456379 kubelet[1411]: I0515 00:56:58.455225 1411 setters.go:600] "Node became not ready" node="10.0.0.130" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:56:58Z","lastTransitionTime":"2025-05-15T00:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 00:56:58.459703 env[1210]: time="2025-05-15T00:56:58.459611990Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3173 runtime=io.containerd.runc.v2\n" May 15 00:56:58.736003 kubelet[1411]: I0515 00:56:58.735908 1411 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25a94aaf-0bca-40dd-93d8-9a5610d06256" path="/var/lib/kubelet/pods/25a94aaf-0bca-40dd-93d8-9a5610d06256/volumes" May 15 00:56:58.860866 kubelet[1411]: E0515 00:56:58.860819 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:58.863968 env[1210]: time="2025-05-15T00:56:58.863892152Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 00:56:58.868538 systemd[1]: run-containerd-runc-k8s.io-9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332-runc.JCXkno.mount: Deactivated successfully. May 15 00:56:58.868626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9dcd42f2b22751f859f2105a8b9da497c2a8735ae33ede6605d6903051fee332-rootfs.mount: Deactivated successfully. 
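The setters.go:600 entry above records the NodeCondition that flipped node 10.0.0.130 to NotReady while the CNI plugin was still initializing, and the condition's JSON is embedded verbatim in the log line. A short sketch decoding that exact payload into a hand-written struct; the struct is a local stand-in for illustration, not the upstream k8s.io/api NodeCondition type.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors only the fields visible in the "Node became not ready"
// entry above; it is a local stand-in, not the Kubernetes API type.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	payload := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:56:58Z","lastTransitionTime":"2025-05-15T00:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(payload), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason) // Ready=False (KubeletNotReady)
}
```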
May 15 00:56:58.870121 kubelet[1411]: E0515 00:56:58.870074 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:56:58.887389 env[1210]: time="2025-05-15T00:56:58.887301331Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176\"" May 15 00:56:58.888073 env[1210]: time="2025-05-15T00:56:58.888040942Z" level=info msg="StartContainer for \"f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176\"" May 15 00:56:58.910063 systemd[1]: Started cri-containerd-f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176.scope. May 15 00:56:58.938028 env[1210]: time="2025-05-15T00:56:58.937972603Z" level=info msg="StartContainer for \"f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176\" returns successfully" May 15 00:56:58.941310 systemd[1]: cri-containerd-f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176.scope: Deactivated successfully. May 15 00:56:58.967633 env[1210]: time="2025-05-15T00:56:58.967578665Z" level=info msg="shim disconnected" id=f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176 May 15 00:56:58.967913 env[1210]: time="2025-05-15T00:56:58.967864082Z" level=warning msg="cleaning up after shim disconnected" id=f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176 namespace=k8s.io May 15 00:56:58.967913 env[1210]: time="2025-05-15T00:56:58.967883979Z" level=info msg="cleaning up dead shim" May 15 00:56:58.975289 env[1210]: time="2025-05-15T00:56:58.975216831Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:56:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3228 runtime=io.containerd.runc.v2\n" May 15 00:56:59.864355 kubelet[1411]: E0515 00:56:59.864319 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:56:59.866297 env[1210]: time="2025-05-15T00:56:59.866236733Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 00:56:59.868220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f057f7b495ebc8feacd1d07d3c094cac80fc5d0ae078fdc3e3ed123d5b973176-rootfs.mount: Deactivated successfully. May 15 00:56:59.871121 kubelet[1411]: E0515 00:56:59.871089 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:00.020146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount814806164.mount: Deactivated successfully. May 15 00:57:00.246751 env[1210]: time="2025-05-15T00:57:00.246643252Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae\"" May 15 00:57:00.247412 env[1210]: time="2025-05-15T00:57:00.247368425Z" level=info msg="StartContainer for \"f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae\"" May 15 00:57:00.263084 systemd[1]: Started cri-containerd-f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae.scope. 
May 15 00:57:00.297332 systemd[1]: cri-containerd-f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae.scope: Deactivated successfully. May 15 00:57:00.306410 env[1210]: time="2025-05-15T00:57:00.306355441Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:00.407600 env[1210]: time="2025-05-15T00:57:00.407528233Z" level=info msg="StartContainer for \"f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae\" returns successfully" May 15 00:57:00.413966 env[1210]: time="2025-05-15T00:57:00.413914480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:00.419248 env[1210]: time="2025-05-15T00:57:00.419197914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:00.419431 env[1210]: time="2025-05-15T00:57:00.419395035Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 15 00:57:00.422535 env[1210]: time="2025-05-15T00:57:00.422456821Z" level=info msg="CreateContainer within sandbox \"60a8ad238cc97442938c815204e6a9da1ba54224880e02bcbf72eeb79f412933\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 00:57:00.641747 env[1210]: time="2025-05-15T00:57:00.641672945Z" level=info msg="shim disconnected" id=f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae May 15 00:57:00.641747 env[1210]: time="2025-05-15T00:57:00.641724211Z" level=warning msg="cleaning up after shim disconnected" id=f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae namespace=k8s.io May 15 00:57:00.641747 env[1210]: time="2025-05-15T00:57:00.641734981Z" level=info msg="cleaning up dead shim" May 15 00:57:00.648105 env[1210]: time="2025-05-15T00:57:00.648067578Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:57:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3282 runtime=io.containerd.runc.v2\n" May 15 00:57:00.657797 env[1210]: time="2025-05-15T00:57:00.657737700Z" level=info msg="CreateContainer within sandbox \"60a8ad238cc97442938c815204e6a9da1ba54224880e02bcbf72eeb79f412933\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0d92ae1c2d8156256b96d108d0ceacbba6ff620ab5efe6921d5fea304aa1a2e\"" May 15 00:57:00.658337 env[1210]: time="2025-05-15T00:57:00.658308993Z" level=info msg="StartContainer for \"f0d92ae1c2d8156256b96d108d0ceacbba6ff620ab5efe6921d5fea304aa1a2e\"" May 15 00:57:00.671346 systemd[1]: Started cri-containerd-f0d92ae1c2d8156256b96d108d0ceacbba6ff620ab5efe6921d5fea304aa1a2e.scope. 
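The PullImage entry above resolves quay.io/cilium/operator-generic:v1.12.5@sha256:b296… to the local image ID sha256:ed355…; when a reference carries both a tag and a digest, the digest is what pins the content. A rough sketch of splitting such a reference into repository, tag and digest with plain string handling; real code should use the containerd/distribution reference parsers, so treat this as illustrative only.

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference of the form repo[:tag][@digest] into its
// parts. This is a simplification that ignores registry ports and other corner
// cases handled by the real reference parsers.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef(
		"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f...
}
```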
May 15 00:57:00.695233 env[1210]: time="2025-05-15T00:57:00.695173125Z" level=info msg="StartContainer for \"f0d92ae1c2d8156256b96d108d0ceacbba6ff620ab5efe6921d5fea304aa1a2e\" returns successfully" May 15 00:57:00.868913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f93503088f577eb4144c05b5add4772b25cfe32497b51b920d59e8278faea7ae-rootfs.mount: Deactivated successfully. May 15 00:57:00.870032 kubelet[1411]: E0515 00:57:00.870005 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:00.871192 kubelet[1411]: E0515 00:57:00.871174 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:00.873109 kubelet[1411]: E0515 00:57:00.873066 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:00.874863 env[1210]: time="2025-05-15T00:57:00.874801486Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 00:57:00.880689 kubelet[1411]: I0515 00:57:00.880650 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qdct2" podStartSLOduration=1.867051552 podStartE2EDuration="5.88062794s" podCreationTimestamp="2025-05-15 00:56:55 +0000 UTC" firstStartedPulling="2025-05-15 00:56:56.407117739 +0000 UTC m=+61.033588663" lastFinishedPulling="2025-05-15 00:57:00.420694127 +0000 UTC m=+65.047165051" observedRunningTime="2025-05-15 00:57:00.880372901 +0000 UTC m=+65.506843825" watchObservedRunningTime="2025-05-15 00:57:00.88062794 +0000 UTC m=+65.507098864" May 15 00:57:00.891456 env[1210]: time="2025-05-15T00:57:00.891416775Z" level=info msg="CreateContainer within sandbox \"7ae4403f16ed8449c692aba21040e6c65ba667d0b52cd126a1d6363b837abe0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84\"" May 15 00:57:00.891986 env[1210]: time="2025-05-15T00:57:00.891923328Z" level=info msg="StartContainer for \"c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84\"" May 15 00:57:00.908968 systemd[1]: Started cri-containerd-c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84.scope. 
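The pod_startup_latency_tracker entry above for cilium-operator-5d85765b45-qdct2 carries all the inputs for its two durations: podStartE2EDuration is the observed running time minus podCreationTimestamp (5.88062794s), and podStartSLOduration additionally excludes the image pull window, lastFinishedPulling minus firstStartedPulling (4.013576388s), leaving 1.867051552s. A sketch reproducing that arithmetic from the timestamps as they appear in the entry (using the watchObservedRunningTime wall-clock value as the running time); illustrative only, not kubelet's tracker code.

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Layout matching the "2025-05-15 00:56:55 +0000 UTC" form used in the entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created := mustParse(layout, "2025-05-15 00:56:55 +0000 UTC")
	firstPull := mustParse(layout, "2025-05-15 00:56:56.407117739 +0000 UTC")
	lastPull := mustParse(layout, "2025-05-15 00:57:00.420694127 +0000 UTC")
	running := mustParse(layout, "2025-05-15 00:57:00.88062794 +0000 UTC")

	e2e := running.Sub(created)          // full creation-to-running latency
	slo := e2e - lastPull.Sub(firstPull) // same latency with the image pull window excluded

	// Matches the podStartE2EDuration and podStartSLOduration values logged above.
	fmt.Printf("podStartE2EDuration=%vs podStartSLOduration=%vs\n", e2e.Seconds(), slo.Seconds())
}
```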
May 15 00:57:00.940966 env[1210]: time="2025-05-15T00:57:00.940886447Z" level=info msg="StartContainer for \"c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84\" returns successfully" May 15 00:57:01.207892 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 15 00:57:01.871412 kubelet[1411]: E0515 00:57:01.871347 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:01.877572 kubelet[1411]: E0515 00:57:01.877545 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:01.877696 kubelet[1411]: E0515 00:57:01.877616 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:02.009859 kubelet[1411]: I0515 00:57:02.009782 1411 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6phb5" podStartSLOduration=6.009761079 podStartE2EDuration="6.009761079s" podCreationTimestamp="2025-05-15 00:56:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:02.009723719 +0000 UTC m=+66.636194683" watchObservedRunningTime="2025-05-15 00:57:02.009761079 +0000 UTC m=+66.636231993" May 15 00:57:02.871741 kubelet[1411]: E0515 00:57:02.871675 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:03.316032 kubelet[1411]: E0515 00:57:03.315928 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:03.808903 systemd-networkd[1035]: lxc_health: Link UP May 15 00:57:03.824209 systemd-networkd[1035]: lxc_health: Gained carrier May 15 00:57:03.824875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 00:57:03.872495 kubelet[1411]: E0515 00:57:03.872440 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:04.489742 kubelet[1411]: E0515 00:57:04.489596 1411 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51894->127.0.0.1:44185: write tcp 127.0.0.1:51894->127.0.0.1:44185: write: connection reset by peer May 15 00:57:04.873304 kubelet[1411]: E0515 00:57:04.873251 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:05.103032 systemd-networkd[1035]: lxc_health: Gained IPv6LL May 15 00:57:05.317397 kubelet[1411]: E0515 00:57:05.316989 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:05.873761 kubelet[1411]: E0515 00:57:05.873712 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:05.884238 kubelet[1411]: E0515 00:57:05.884220 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:06.539506 systemd[1]: run-containerd-runc-k8s.io-c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84-runc.ZNQ98X.mount: Deactivated successfully.
May 15 00:57:06.874225 kubelet[1411]: E0515 00:57:06.874161 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:06.886050 kubelet[1411]: E0515 00:57:06.886018 1411 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:07.874975 kubelet[1411]: E0515 00:57:07.874895 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:08.875429 kubelet[1411]: E0515 00:57:08.875379 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:09.875755 kubelet[1411]: E0515 00:57:09.875693 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:10.876602 kubelet[1411]: E0515 00:57:10.876568 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:11.877152 kubelet[1411]: E0515 00:57:11.877082 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:12.877575 kubelet[1411]: E0515 00:57:12.877509 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:12.966714 systemd[1]: run-containerd-runc-k8s.io-c868f52d4b4e4d15d1ac6a125f220408bb54dda41c21d2245573291ae2a64e84-runc.NbXLUp.mount: Deactivated successfully. May 15 00:57:13.877793 kubelet[1411]: E0515 00:57:13.877721 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 00:57:14.878510 kubelet[1411]: E0515 00:57:14.878450 1411 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
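The file_linux.go:61 entries that recur roughly once per second through the end of this capture come from kubelet's file-based (static pod) config source: it checks /etc/kubernetes/manifests, finds that the path does not exist on this node, and notes that the check is ignored. A trivial sketch of that kind of existence polling; the one-second interval and the behaviour once the directory appears are assumptions for illustration, not kubelet's implementation.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// watchManifestDir emulates the pattern behind the repeated kubelet entries
// above: periodically check the static-pod manifest directory and skip it
// while it does not exist. Illustrative only.
func watchManifestDir(path string, every time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			if _, err := os.Stat(path); os.IsNotExist(err) {
				fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
				continue
			}
			// Directory exists: a real kubelet would now read and apply the manifests.
		}
	}
}

func main() {
	stop := make(chan struct{})
	go watchManifestDir("/etc/kubernetes/manifests", time.Second, stop)
	time.Sleep(3 * time.Second)
	close(stop)
}
```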