May 8 00:46:45.160695 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 7 23:10:51 -00 2025 May 8 00:46:45.160725 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:46:45.160735 kernel: BIOS-provided physical RAM map: May 8 00:46:45.160742 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 8 00:46:45.160749 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 8 00:46:45.160756 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 8 00:46:45.160764 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 8 00:46:45.160772 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 8 00:46:45.160790 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 8 00:46:45.160797 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 8 00:46:45.160804 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 8 00:46:45.160811 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 8 00:46:45.160818 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 8 00:46:45.160826 kernel: NX (Execute Disable) protection: active May 8 00:46:45.160837 kernel: SMBIOS 2.8 present. May 8 00:46:45.160845 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 8 00:46:45.160852 kernel: Hypervisor detected: KVM May 8 00:46:45.160860 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 8 00:46:45.160868 kernel: kvm-clock: cpu 0, msr 15198001, primary cpu clock May 8 00:46:45.160875 kernel: kvm-clock: using sched offset of 3621263686 cycles May 8 00:46:45.160884 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 8 00:46:45.160892 kernel: tsc: Detected 2794.748 MHz processor May 8 00:46:45.160900 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 8 00:46:45.160910 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 8 00:46:45.160918 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 8 00:46:45.160926 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 8 00:46:45.160934 kernel: Using GB pages for direct mapping May 8 00:46:45.160942 kernel: ACPI: Early table checksum verification disabled May 8 00:46:45.160950 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 8 00:46:45.160958 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.160969 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.160978 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.160987 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 8 00:46:45.160995 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.161003 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.161012 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 
00000001 BXPC 00000001) May 8 00:46:45.161019 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:46:45.161028 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 8 00:46:45.161035 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 8 00:46:45.161044 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 8 00:46:45.161057 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 8 00:46:45.161065 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 8 00:46:45.161073 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 8 00:46:45.161082 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 8 00:46:45.161090 kernel: No NUMA configuration found May 8 00:46:45.161099 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 8 00:46:45.161110 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 8 00:46:45.161118 kernel: Zone ranges: May 8 00:46:45.161127 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 8 00:46:45.161135 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 8 00:46:45.161143 kernel: Normal empty May 8 00:46:45.161152 kernel: Movable zone start for each node May 8 00:46:45.161160 kernel: Early memory node ranges May 8 00:46:45.161168 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 8 00:46:45.161177 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 8 00:46:45.161188 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 8 00:46:45.161199 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 8 00:46:45.161208 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 8 00:46:45.161216 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 8 00:46:45.161225 kernel: ACPI: PM-Timer IO Port: 0x608 May 8 00:46:45.161233 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 8 00:46:45.161241 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 8 00:46:45.161250 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 8 00:46:45.161258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 8 00:46:45.161267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 8 00:46:45.161277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 8 00:46:45.161286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 8 00:46:45.161295 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 8 00:46:45.161303 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 8 00:46:45.161311 kernel: TSC deadline timer available May 8 00:46:45.161320 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 8 00:46:45.161328 kernel: kvm-guest: KVM setup pv remote TLB flush May 8 00:46:45.161337 kernel: kvm-guest: setup PV sched yield May 8 00:46:45.161345 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 8 00:46:45.161356 kernel: Booting paravirtualized kernel on KVM May 8 00:46:45.161364 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 8 00:46:45.161376 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 8 00:46:45.161385 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 8 00:46:45.161394 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 
alloc=1*2097152 May 8 00:46:45.161402 kernel: pcpu-alloc: [0] 0 1 2 3 May 8 00:46:45.161411 kernel: kvm-guest: setup async PF for cpu 0 May 8 00:46:45.161419 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 8 00:46:45.161428 kernel: kvm-guest: PV spinlocks enabled May 8 00:46:45.161438 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 8 00:46:45.161447 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 May 8 00:46:45.161456 kernel: Policy zone: DMA32 May 8 00:46:45.161478 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:46:45.161488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:46:45.161496 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:46:45.161505 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:46:45.161513 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:46:45.161525 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2279K rwdata, 13724K rodata, 47464K init, 4116K bss, 134796K reserved, 0K cma-reserved) May 8 00:46:45.161534 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:46:45.161542 kernel: ftrace: allocating 34584 entries in 136 pages May 8 00:46:45.161551 kernel: ftrace: allocated 136 pages with 2 groups May 8 00:46:45.161559 kernel: rcu: Hierarchical RCU implementation. May 8 00:46:45.161568 kernel: rcu: RCU event tracing is enabled. May 8 00:46:45.161577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:46:45.161586 kernel: Rude variant of Tasks RCU enabled. May 8 00:46:45.161594 kernel: Tracing variant of Tasks RCU enabled. May 8 00:46:45.161605 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 8 00:46:45.161614 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:46:45.161622 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 8 00:46:45.161631 kernel: random: crng init done May 8 00:46:45.161639 kernel: Console: colour VGA+ 80x25 May 8 00:46:45.161647 kernel: printk: console [ttyS0] enabled May 8 00:46:45.161656 kernel: ACPI: Core revision 20210730 May 8 00:46:45.161665 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 8 00:46:45.161673 kernel: APIC: Switch to symmetric I/O mode setup May 8 00:46:45.161683 kernel: x2apic enabled May 8 00:46:45.161692 kernel: Switched APIC routing to physical x2apic. May 8 00:46:45.161700 kernel: kvm-guest: setup PV IPIs May 8 00:46:45.161709 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 8 00:46:45.161717 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 8 00:46:45.161729 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 8 00:46:45.161738 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 8 00:46:45.161747 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 8 00:46:45.161755 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 8 00:46:45.161772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 8 00:46:45.161789 kernel: Spectre V2 : Mitigation: Retpolines May 8 00:46:45.161798 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch May 8 00:46:45.161809 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT May 8 00:46:45.161818 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 8 00:46:45.161830 kernel: RETBleed: Mitigation: untrained return thunk May 8 00:46:45.161839 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 8 00:46:45.161849 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 8 00:46:45.161858 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 8 00:46:45.161869 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 8 00:46:45.161878 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 8 00:46:45.161887 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 8 00:46:45.161896 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 8 00:46:45.161905 kernel: Freeing SMP alternatives memory: 32K May 8 00:46:45.161914 kernel: pid_max: default: 32768 minimum: 301 May 8 00:46:45.161923 kernel: LSM: Security Framework initializing May 8 00:46:45.161934 kernel: SELinux: Initializing. May 8 00:46:45.161943 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:46:45.161952 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:46:45.161961 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 8 00:46:45.161970 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 8 00:46:45.161979 kernel: ... version: 0 May 8 00:46:45.161988 kernel: ... bit width: 48 May 8 00:46:45.161997 kernel: ... generic registers: 6 May 8 00:46:45.162006 kernel: ... value mask: 0000ffffffffffff May 8 00:46:45.162017 kernel: ... max period: 00007fffffffffff May 8 00:46:45.162026 kernel: ... fixed-purpose events: 0 May 8 00:46:45.162035 kernel: ... event mask: 000000000000003f May 8 00:46:45.162044 kernel: signal: max sigframe size: 1776 May 8 00:46:45.162052 kernel: rcu: Hierarchical SRCU implementation. May 8 00:46:45.162061 kernel: smp: Bringing up secondary CPUs ... May 8 00:46:45.162070 kernel: x86: Booting SMP configuration: May 8 00:46:45.162079 kernel: .... 
node #0, CPUs: #1 May 8 00:46:45.162088 kernel: kvm-clock: cpu 1, msr 15198041, secondary cpu clock May 8 00:46:45.162097 kernel: kvm-guest: setup async PF for cpu 1 May 8 00:46:45.162108 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 8 00:46:45.162116 kernel: #2 May 8 00:46:45.162125 kernel: kvm-clock: cpu 2, msr 15198081, secondary cpu clock May 8 00:46:45.162134 kernel: kvm-guest: setup async PF for cpu 2 May 8 00:46:45.162143 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 8 00:46:45.162152 kernel: #3 May 8 00:46:45.162161 kernel: kvm-clock: cpu 3, msr 151980c1, secondary cpu clock May 8 00:46:45.162169 kernel: kvm-guest: setup async PF for cpu 3 May 8 00:46:45.162178 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 8 00:46:45.162189 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:46:45.162197 kernel: smpboot: Max logical packages: 1 May 8 00:46:45.162206 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 8 00:46:45.162216 kernel: devtmpfs: initialized May 8 00:46:45.162225 kernel: x86/mm: Memory block size: 128MB May 8 00:46:45.162241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:46:45.162265 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:46:45.162306 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:46:45.162321 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:46:45.162334 kernel: audit: initializing netlink subsys (disabled) May 8 00:46:45.162343 kernel: audit: type=2000 audit(1746665203.836:1): state=initialized audit_enabled=0 res=1 May 8 00:46:45.162352 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:46:45.162362 kernel: thermal_sys: Registered thermal governor 'user_space' May 8 00:46:45.162371 kernel: cpuidle: using governor menu May 8 00:46:45.162380 kernel: ACPI: bus type PCI registered May 8 00:46:45.162390 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:46:45.162399 kernel: dca service started, version 1.12.1 May 8 00:46:45.162408 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 8 00:46:45.162420 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 8 00:46:45.162429 kernel: PCI: Using configuration type 1 for base access May 8 00:46:45.162438 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 8 00:46:45.162447 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:46:45.162456 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:46:45.162465 kernel: ACPI: Added _OSI(Module Device) May 8 00:46:45.162486 kernel: ACPI: Added _OSI(Processor Device) May 8 00:46:45.162495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:46:45.162504 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:46:45.162515 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 8 00:46:45.162524 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 8 00:46:45.162533 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 8 00:46:45.162543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:46:45.162552 kernel: ACPI: Interpreter enabled May 8 00:46:45.162561 kernel: ACPI: PM: (supports S0 S3 S5) May 8 00:46:45.162589 kernel: ACPI: Using IOAPIC for interrupt routing May 8 00:46:45.162600 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 8 00:46:45.162609 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 8 00:46:45.162621 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:46:45.162936 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:46:45.163032 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 8 00:46:45.163110 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 8 00:46:45.163119 kernel: PCI host bridge to bus 0000:00 May 8 00:46:45.163213 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 8 00:46:45.163283 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 8 00:46:45.163356 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 8 00:46:45.163421 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 8 00:46:45.163507 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 8 00:46:45.163575 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 8 00:46:45.163641 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:46:45.163743 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 8 00:46:45.163855 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 8 00:46:45.163932 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 8 00:46:45.164006 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 8 00:46:45.164080 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 8 00:46:45.164155 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 8 00:46:45.164252 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:46:45.164330 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 8 00:46:45.164417 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 8 00:46:45.164555 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 8 00:46:45.164677 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 8 00:46:45.164764 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 8 00:46:45.164854 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 8 00:46:45.164930 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] May 8 00:46:45.165023 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 
0x020000 May 8 00:46:45.165151 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 8 00:46:45.165231 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 8 00:46:45.165410 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 8 00:46:45.165504 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 8 00:46:45.165612 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 8 00:46:45.165695 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 8 00:46:45.165845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 8 00:46:45.165963 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 8 00:46:45.166062 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 8 00:46:45.166188 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 8 00:46:45.166301 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 8 00:46:45.166313 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 8 00:46:45.166320 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 8 00:46:45.166342 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 8 00:46:45.166353 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 8 00:46:45.166360 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 8 00:46:45.166367 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 8 00:46:45.166374 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 8 00:46:45.166382 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 8 00:46:45.166389 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 8 00:46:45.166396 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 8 00:46:45.166403 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 8 00:46:45.166410 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 8 00:46:45.166419 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 8 00:46:45.166426 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 8 00:46:45.166433 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 8 00:46:45.166440 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 8 00:46:45.166460 kernel: iommu: Default domain type: Translated May 8 00:46:45.166531 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 8 00:46:45.166636 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 8 00:46:45.166732 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 8 00:46:45.166840 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 8 00:46:45.166852 kernel: vgaarb: loaded May 8 00:46:45.166860 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 00:46:45.166869 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 00:46:45.166878 kernel: PTP clock support registered May 8 00:46:45.166887 kernel: PCI: Using ACPI for IRQ routing May 8 00:46:45.166897 kernel: PCI: pci_cache_line_size set to 64 bytes May 8 00:46:45.166906 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 8 00:46:45.166932 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 8 00:46:45.166944 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 8 00:46:45.166951 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 8 00:46:45.166958 kernel: clocksource: Switched to clocksource kvm-clock May 8 00:46:45.166966 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:46:45.166973 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:46:45.166981 kernel: pnp: PnP ACPI init May 8 00:46:45.167128 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 8 00:46:45.167145 kernel: pnp: PnP ACPI: found 6 devices May 8 00:46:45.167156 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 8 00:46:45.167163 kernel: NET: Registered PF_INET protocol family May 8 00:46:45.167171 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:46:45.167178 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:46:45.167186 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:46:45.167193 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:46:45.167213 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 8 00:46:45.167221 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:46:45.167228 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:46:45.167237 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:46:45.167245 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:46:45.167265 kernel: NET: Registered PF_XDP protocol family May 8 00:46:45.167375 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 8 00:46:45.167496 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 8 00:46:45.167587 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 8 00:46:45.167691 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 8 00:46:45.167802 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 8 00:46:45.167906 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 8 00:46:45.167921 kernel: PCI: CLS 0 bytes, default 64 May 8 00:46:45.167929 kernel: Initialise system trusted keyrings May 8 00:46:45.167944 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:46:45.167957 kernel: Key type asymmetric registered May 8 00:46:45.167964 kernel: Asymmetric key parser 'x509' registered May 8 00:46:45.167971 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 8 00:46:45.167979 kernel: io scheduler mq-deadline registered May 8 00:46:45.167986 kernel: io scheduler kyber registered May 8 00:46:45.167993 kernel: io scheduler bfq registered May 8 00:46:45.168017 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 8 00:46:45.168025 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 8 00:46:45.168032 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 8 00:46:45.168039 kernel: ACPI: \_SB_.GSIE: Enabled 
at IRQ 20 May 8 00:46:45.168047 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:46:45.168054 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 8 00:46:45.168067 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 8 00:46:45.168074 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 8 00:46:45.168081 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 8 00:46:45.168178 kernel: rtc_cmos 00:04: RTC can wake from S4 May 8 00:46:45.168190 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 8 00:46:45.168257 kernel: rtc_cmos 00:04: registered as rtc0 May 8 00:46:45.168326 kernel: rtc_cmos 00:04: setting system clock to 2025-05-08T00:46:44 UTC (1746665204) May 8 00:46:45.168395 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 8 00:46:45.168404 kernel: NET: Registered PF_INET6 protocol family May 8 00:46:45.168412 kernel: Segment Routing with IPv6 May 8 00:46:45.168419 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:46:45.168429 kernel: NET: Registered PF_PACKET protocol family May 8 00:46:45.168436 kernel: Key type dns_resolver registered May 8 00:46:45.168443 kernel: IPI shorthand broadcast: enabled May 8 00:46:45.168451 kernel: sched_clock: Marking stable (487088515, 212000315)->(795231287, -96142457) May 8 00:46:45.168458 kernel: registered taskstats version 1 May 8 00:46:45.168465 kernel: Loading compiled-in X.509 certificates May 8 00:46:45.168485 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: c9ff13353458e6fa2786638fdd3dcad841d1075c' May 8 00:46:45.168493 kernel: Key type .fscrypt registered May 8 00:46:45.168500 kernel: Key type fscrypt-provisioning registered May 8 00:46:45.168510 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:46:45.168517 kernel: ima: Allocated hash algorithm: sha1 May 8 00:46:45.168524 kernel: ima: No architecture policies found May 8 00:46:45.168532 kernel: clk: Disabling unused clocks May 8 00:46:45.168539 kernel: Freeing unused kernel image (initmem) memory: 47464K May 8 00:46:45.168546 kernel: Write protecting the kernel read-only data: 28672k May 8 00:46:45.168554 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 8 00:46:45.168561 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 8 00:46:45.168570 kernel: Run /init as init process May 8 00:46:45.168577 kernel: with arguments: May 8 00:46:45.168584 kernel: /init May 8 00:46:45.168591 kernel: with environment: May 8 00:46:45.168598 kernel: HOME=/ May 8 00:46:45.168605 kernel: TERM=linux May 8 00:46:45.168612 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:46:45.168622 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:46:45.168632 systemd[1]: Detected virtualization kvm. May 8 00:46:45.168642 systemd[1]: Detected architecture x86-64. May 8 00:46:45.168649 systemd[1]: Running in initrd. May 8 00:46:45.168657 systemd[1]: No hostname configured, using default hostname. May 8 00:46:45.168664 systemd[1]: Hostname set to <localhost>. May 8 00:46:45.168672 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:46:45.168680 systemd[1]: Queued start job for default target initrd.target. May 8 00:46:45.168687 systemd[1]: Started systemd-ask-password-console.path. May 8 00:46:45.168695 systemd[1]: Reached target cryptsetup.target. May 8 00:46:45.168704 systemd[1]: Reached target paths.target. May 8 00:46:45.168718 systemd[1]: Reached target slices.target. May 8 00:46:45.168727 systemd[1]: Reached target swap.target. May 8 00:46:45.168735 systemd[1]: Reached target timers.target. May 8 00:46:45.168743 systemd[1]: Listening on iscsid.socket. May 8 00:46:45.168753 systemd[1]: Listening on iscsiuio.socket. May 8 00:46:45.168761 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:46:45.168769 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:46:45.168785 systemd[1]: Listening on systemd-journald.socket. May 8 00:46:45.168793 systemd[1]: Listening on systemd-networkd.socket. May 8 00:46:45.168801 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:46:45.168808 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:46:45.168816 systemd[1]: Reached target sockets.target. May 8 00:46:45.168824 systemd[1]: Starting kmod-static-nodes.service... May 8 00:46:45.168834 systemd[1]: Finished network-cleanup.service. May 8 00:46:45.168842 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:46:45.168850 systemd[1]: Starting systemd-journald.service... May 8 00:46:45.168858 systemd[1]: Starting systemd-modules-load.service... May 8 00:46:45.168866 systemd[1]: Starting systemd-resolved.service... May 8 00:46:45.168876 systemd[1]: Starting systemd-vconsole-setup.service... May 8 00:46:45.168884 systemd[1]: Finished kmod-static-nodes.service. May 8 00:46:45.168905 systemd-journald[198]: Journal started May 8 00:46:45.168965 systemd-journald[198]: Runtime Journal (/run/log/journal/45d75c614bee4dbe926918445469d286) is 6.0M, max 48.5M, 42.5M free. May 8 00:46:45.162615 systemd-modules-load[199]: Inserted module 'overlay' May 8 00:46:45.176764 systemd-resolved[200]: Positive Trust Anchors: May 8 00:46:45.200986 kernel: audit: type=1130 audit(1746665205.192:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.201027 systemd[1]: Started systemd-journald.service. May 8 00:46:45.201047 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:46:45.201063 kernel: audit: type=1130 audit(1746665205.200:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.176774 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:46:45.210394 kernel: audit: type=1130 audit(1746665205.203:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 8 00:46:45.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.176811 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:46:45.220557 kernel: Bridge firewalling registered May 8 00:46:45.220590 kernel: audit: type=1130 audit(1746665205.211:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.179866 systemd-resolved[200]: Defaulting to hostname 'linux'. May 8 00:46:45.225991 kernel: audit: type=1130 audit(1746665205.221:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.201012 systemd[1]: Started systemd-resolved.service. May 8 00:46:45.204683 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:46:45.212138 systemd[1]: Finished systemd-vconsole-setup.service. May 8 00:46:45.220521 systemd-modules-load[199]: Inserted module 'br_netfilter' May 8 00:46:45.222211 systemd[1]: Reached target nss-lookup.target. May 8 00:46:45.227728 systemd[1]: Starting dracut-cmdline-ask.service... May 8 00:46:45.232957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:46:45.243166 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:46:45.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.249520 kernel: audit: type=1130 audit(1746665205.245:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.249631 systemd[1]: Finished dracut-cmdline-ask.service. May 8 00:46:45.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.253647 systemd[1]: Starting dracut-cmdline.service... May 8 00:46:45.257969 kernel: audit: type=1130 audit(1746665205.251:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:45.258003 kernel: SCSI subsystem initialized May 8 00:46:45.265459 dracut-cmdline[216]: dracut-dracut-053 May 8 00:46:45.270557 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:46:45.270606 kernel: device-mapper: uevent: version 1.0.3 May 8 00:46:45.270619 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 8 00:46:45.270864 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a54efb5fced97d6fa50818abcad373184ba88ccc0f58664d2cd82270befba488 May 8 00:46:45.277861 systemd-modules-load[199]: Inserted module 'dm_multipath' May 8 00:46:45.278857 systemd[1]: Finished systemd-modules-load.service. May 8 00:46:45.281144 systemd[1]: Starting systemd-sysctl.service... May 8 00:46:45.286564 kernel: audit: type=1130 audit(1746665205.279:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.293531 systemd[1]: Finished systemd-sysctl.service. May 8 00:46:45.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.299513 kernel: audit: type=1130 audit(1746665205.294:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.354530 kernel: Loading iSCSI transport class v2.0-870. May 8 00:46:45.371515 kernel: iscsi: registered transport (tcp) May 8 00:46:45.393904 kernel: iscsi: registered transport (qla4xxx) May 8 00:46:45.393990 kernel: QLogic iSCSI HBA Driver May 8 00:46:45.429722 systemd[1]: Finished dracut-cmdline.service. May 8 00:46:45.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.432341 systemd[1]: Starting dracut-pre-udev.service... 
May 8 00:46:45.481522 kernel: raid6: avx2x4 gen() 23822 MB/s May 8 00:46:45.498533 kernel: raid6: avx2x4 xor() 6749 MB/s May 8 00:46:45.515578 kernel: raid6: avx2x2 gen() 22831 MB/s May 8 00:46:45.532525 kernel: raid6: avx2x2 xor() 16002 MB/s May 8 00:46:45.549531 kernel: raid6: avx2x1 gen() 23169 MB/s May 8 00:46:45.566546 kernel: raid6: avx2x1 xor() 12418 MB/s May 8 00:46:45.583563 kernel: raid6: sse2x4 gen() 12426 MB/s May 8 00:46:45.600544 kernel: raid6: sse2x4 xor() 5827 MB/s May 8 00:46:45.617547 kernel: raid6: sse2x2 gen() 12505 MB/s May 8 00:46:45.634550 kernel: raid6: sse2x2 xor() 9455 MB/s May 8 00:46:45.675533 kernel: raid6: sse2x1 gen() 11627 MB/s May 8 00:46:45.699383 kernel: raid6: sse2x1 xor() 7079 MB/s May 8 00:46:45.699527 kernel: raid6: using algorithm avx2x4 gen() 23822 MB/s May 8 00:46:45.699541 kernel: raid6: .... xor() 6749 MB/s, rmw enabled May 8 00:46:45.700078 kernel: raid6: using avx2x2 recovery algorithm May 8 00:46:45.715517 kernel: xor: automatically using best checksumming function avx May 8 00:46:45.814538 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 8 00:46:45.825232 systemd[1]: Finished dracut-pre-udev.service. May 8 00:46:45.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.826000 audit: BPF prog-id=7 op=LOAD May 8 00:46:45.826000 audit: BPF prog-id=8 op=LOAD May 8 00:46:45.827494 systemd[1]: Starting systemd-udevd.service... May 8 00:46:45.843522 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 8 00:46:45.879643 systemd[1]: Started systemd-udevd.service. May 8 00:46:45.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.881043 systemd[1]: Starting dracut-pre-trigger.service... May 8 00:46:45.891744 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation May 8 00:46:45.923213 systemd[1]: Finished dracut-pre-trigger.service. May 8 00:46:45.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:45.924611 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:46:45.965246 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:46:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:46.016497 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:46:46.028501 kernel: libata version 3.00 loaded. May 8 00:46:46.034184 kernel: AVX2 version of gcm_enc/dec engaged. May 8 00:46:46.034219 kernel: AES CTR mode by8 optimization enabled May 8 00:46:46.055956 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:46:46.066414 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:46:46.066430 kernel: ahci 0000:00:1f.2: version 3.0 May 8 00:46:46.091174 kernel: GPT:9289727 != 19775487 May 8 00:46:46.091195 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 8 00:46:46.091210 kernel: GPT:Alternate GPT header not at the end of the disk. 
May 8 00:46:46.091222 kernel: GPT:9289727 != 19775487 May 8 00:46:46.091234 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:46:46.091244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:46.091256 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 8 00:46:46.091398 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 8 00:46:46.091564 kernel: scsi host0: ahci May 8 00:46:46.091700 kernel: scsi host1: ahci May 8 00:46:46.091912 kernel: scsi host2: ahci May 8 00:46:46.092049 kernel: scsi host3: ahci May 8 00:46:46.092172 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) May 8 00:46:46.092190 kernel: scsi host4: ahci May 8 00:46:46.092307 kernel: scsi host5: ahci May 8 00:46:46.092395 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 May 8 00:46:46.092405 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 May 8 00:46:46.092414 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 May 8 00:46:46.092423 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 May 8 00:46:46.092432 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 May 8 00:46:46.092440 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 May 8 00:46:46.090386 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 8 00:46:46.135107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 8 00:46:46.141679 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:46:46.144951 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 8 00:46:46.145411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 8 00:46:46.149403 systemd[1]: Starting disk-uuid.service... May 8 00:46:46.405505 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 8 00:46:46.405590 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 8 00:46:46.405600 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 8 00:46:46.408503 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 8 00:46:46.408592 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 8 00:46:46.408603 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 8 00:46:46.409504 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 8 00:46:46.411340 kernel: ata3.00: applying bridge limits May 8 00:46:46.411407 kernel: ata3.00: configured for UDMA/100 May 8 00:46:46.416125 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 8 00:46:46.492542 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 8 00:46:46.510511 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 8 00:46:46.510533 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 8 00:46:46.696159 disk-uuid[527]: Primary Header is updated. May 8 00:46:46.696159 disk-uuid[527]: Secondary Entries is updated. May 8 00:46:46.696159 disk-uuid[527]: Secondary Header is updated. May 8 00:46:46.729912 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:46.733506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:46.737527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:47.737518 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:46:47.737946 disk-uuid[538]: The operation has completed successfully. May 8 00:46:47.865402 systemd[1]: disk-uuid.service: Deactivated successfully. 
May 8 00:46:47.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:47.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:47.865535 systemd[1]: Finished disk-uuid.service. May 8 00:46:47.867233 systemd[1]: Starting verity-setup.service... May 8 00:46:47.889498 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 8 00:46:47.916155 systemd[1]: Found device dev-mapper-usr.device. May 8 00:46:47.920188 systemd[1]: Mounting sysusr-usr.mount... May 8 00:46:47.922628 systemd[1]: Finished verity-setup.service. May 8 00:46:47.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.003986 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 8 00:46:48.004007 systemd[1]: Mounted sysusr-usr.mount. May 8 00:46:48.006108 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 8 00:46:48.008885 systemd[1]: Starting ignition-setup.service... May 8 00:46:48.011728 systemd[1]: Starting parse-ip-for-networkd.service... May 8 00:46:48.020825 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:48.020895 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:48.020911 kernel: BTRFS info (device vda6): has skinny extents May 8 00:46:48.033769 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:46:48.048213 systemd[1]: Finished ignition-setup.service. May 8 00:46:48.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.051548 systemd[1]: Starting ignition-fetch-offline.service... May 8 00:46:48.186731 systemd[1]: Finished parse-ip-for-networkd.service. May 8 00:46:48.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.188000 audit: BPF prog-id=9 op=LOAD May 8 00:46:48.189556 systemd[1]: Starting systemd-networkd.service... 
May 8 00:46:48.210165 ignition[644]: Ignition 2.14.0 May 8 00:46:48.210184 ignition[644]: Stage: fetch-offline May 8 00:46:48.210271 ignition[644]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:48.210280 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:48.210436 ignition[644]: parsed url from cmdline: "" May 8 00:46:48.210439 ignition[644]: no config URL provided May 8 00:46:48.210444 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:46:48.210450 ignition[644]: no config at "/usr/lib/ignition/user.ign" May 8 00:46:48.210491 ignition[644]: op(1): [started] loading QEMU firmware config module May 8 00:46:48.210507 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:46:48.237628 ignition[644]: op(1): [finished] loading QEMU firmware config module May 8 00:46:48.239056 ignition[644]: parsing config with SHA512: 727e560622e7dd01abc70eb8061eea5f772d0aa4575ca05af128b89eda0ef0d2b8af1a389c9123ca60249944e8a6bffb263a333114a58a9cc4f7bfdf66583a9c May 8 00:46:48.296705 unknown[644]: fetched base config from "system" May 8 00:46:48.296718 unknown[644]: fetched user config from "qemu" May 8 00:46:48.297310 systemd-networkd[714]: lo: Link UP May 8 00:46:48.297314 systemd-networkd[714]: lo: Gained carrier May 8 00:46:48.297818 systemd-networkd[714]: Enumeration completed May 8 00:46:48.298106 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:46:48.298257 systemd[1]: Started systemd-networkd.service. May 8 00:46:48.315656 systemd-networkd[714]: eth0: Link UP May 8 00:46:48.315660 systemd-networkd[714]: eth0: Gained carrier May 8 00:46:48.321909 ignition[644]: fetch-offline: fetch-offline passed May 8 00:46:48.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.322056 systemd[1]: Reached target network.target. May 8 00:46:48.323763 systemd[1]: Starting iscsiuio.service... May 8 00:46:48.325492 ignition[644]: Ignition finished successfully May 8 00:46:48.327236 systemd[1]: Finished ignition-fetch-offline.service. May 8 00:46:48.327912 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:46:48.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.329079 systemd[1]: Starting ignition-kargs.service... May 8 00:46:48.452863 ignition[720]: Ignition 2.14.0 May 8 00:46:48.452875 ignition[720]: Stage: kargs May 8 00:46:48.452971 ignition[720]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:48.452980 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:48.501572 ignition[720]: kargs: kargs passed May 8 00:46:48.501632 ignition[720]: Ignition finished successfully May 8 00:46:48.503675 systemd[1]: Started iscsiuio.service. May 8 00:46:48.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.505413 systemd[1]: Finished ignition-kargs.service. 
May 8 00:46:48.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.507951 systemd[1]: Starting ignition-disks.service... May 8 00:46:48.510125 systemd[1]: Starting iscsid.service... May 8 00:46:48.512607 systemd-networkd[714]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:46:48.515499 iscsid[729]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 8 00:46:48.515499 iscsid[729]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 8 00:46:48.515499 iscsid[729]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 8 00:46:48.515499 iscsid[729]: If using hardware iscsi like qla4xxx this message can be ignored. May 8 00:46:48.515499 iscsid[729]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 8 00:46:48.515499 iscsid[729]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 8 00:46:48.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.516946 ignition[728]: Ignition 2.14.0 May 8 00:46:48.518237 systemd[1]: Started iscsid.service. May 8 00:46:48.516953 ignition[728]: Stage: disks May 8 00:46:48.523349 systemd[1]: Finished ignition-disks.service. May 8 00:46:48.517075 ignition[728]: no configs at "/usr/lib/ignition/base.d" May 8 00:46:48.524960 systemd[1]: Reached target initrd-root-device.target. May 8 00:46:48.517086 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:48.526889 systemd[1]: Reached target local-fs-pre.target. May 8 00:46:48.517879 ignition[728]: disks: disks passed May 8 00:46:48.554199 systemd[1]: Reached target local-fs.target. May 8 00:46:48.517916 ignition[728]: Ignition finished successfully May 8 00:46:48.556317 systemd[1]: Reached target sysinit.target. May 8 00:46:48.557194 systemd[1]: Reached target basic.target. May 8 00:46:48.558911 systemd[1]: Starting dracut-initqueue.service... May 8 00:46:48.634618 systemd[1]: Finished dracut-initqueue.service. May 8 00:46:48.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.647381 systemd[1]: Reached target remote-fs-pre.target. May 8 00:46:48.648996 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:46:48.649974 systemd[1]: Reached target remote-fs.target. May 8 00:46:48.651907 systemd[1]: Starting dracut-pre-mount.service... May 8 00:46:48.659106 systemd[1]: Finished dracut-pre-mount.service.
May 8 00:46:48.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:48.661103 systemd[1]: Starting systemd-fsck-root.service... May 8 00:46:48.706889 systemd-fsck[750]: ROOT: clean, 623/553520 files, 56023/553472 blocks May 8 00:46:49.003332 systemd[1]: Finished systemd-fsck-root.service. May 8 00:46:49.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:49.006170 systemd[1]: Mounting sysroot.mount... May 8 00:46:49.039502 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 8 00:46:49.039704 systemd[1]: Mounted sysroot.mount. May 8 00:46:49.040274 systemd[1]: Reached target initrd-root-fs.target. May 8 00:46:49.042412 systemd[1]: Mounting sysroot-usr.mount... May 8 00:46:49.043612 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 8 00:46:49.043645 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:46:49.043668 systemd[1]: Reached target ignition-diskful.target. May 8 00:46:49.045693 systemd[1]: Mounted sysroot-usr.mount. May 8 00:46:49.048562 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:46:49.050524 systemd[1]: Starting initrd-setup-root.service... May 8 00:46:49.055016 initrd-setup-root[761]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:46:49.058748 initrd-setup-root[769]: cut: /sysroot/etc/group: No such file or directory May 8 00:46:49.060611 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (756) May 8 00:46:49.061973 initrd-setup-root[777]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:46:49.064867 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:49.064892 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:49.064902 kernel: BTRFS info (device vda6): has skinny extents May 8 00:46:49.066258 initrd-setup-root[801]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:46:49.076768 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 00:46:49.104807 systemd[1]: Finished initrd-setup-root.service. May 8 00:46:49.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:49.107461 systemd[1]: Starting ignition-mount.service... May 8 00:46:49.109820 systemd[1]: Starting sysroot-boot.service... May 8 00:46:49.113841 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 8 00:46:49.113937 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 8 00:46:49.180408 systemd[1]: Finished sysroot-boot.service. May 8 00:46:49.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:49.182898 kernel: kauditd_printk_skb: 23 callbacks suppressed May 8 00:46:49.182927 kernel: audit: type=1130 audit(1746665209.182:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:49.186162 ignition[821]: INFO : Ignition 2.14.0 May 8 00:46:49.186162 ignition[821]: INFO : Stage: mount May 8 00:46:49.187813 ignition[821]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:46:49.187813 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:49.187813 ignition[821]: INFO : mount: mount passed May 8 00:46:49.187813 ignition[821]: INFO : Ignition finished successfully May 8 00:46:49.192435 systemd[1]: Finished ignition-mount.service. May 8 00:46:49.196828 kernel: audit: type=1130 audit(1746665209.192:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:49.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:49.193654 systemd[1]: Starting ignition-files.service... May 8 00:46:49.200121 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:46:49.209510 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (832) May 8 00:46:49.209568 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 8 00:46:49.230833 kernel: BTRFS info (device vda6): using free space tree May 8 00:46:49.230879 kernel: BTRFS info (device vda6): has skinny extents May 8 00:46:49.236311 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 8 00:46:49.250937 ignition[851]: INFO : Ignition 2.14.0 May 8 00:46:49.250937 ignition[851]: INFO : Stage: files May 8 00:46:49.257377 ignition[851]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:46:49.257377 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:49.260282 ignition[851]: DEBUG : files: compiled without relabeling support, skipping May 8 00:46:49.262232 ignition[851]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:46:49.262232 ignition[851]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:46:49.265407 ignition[851]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:46:49.267029 ignition[851]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:46:49.267029 ignition[851]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:46:49.266578 unknown[851]: wrote ssh authorized keys file for user: core May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:46:49.271610 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 May 8 00:46:49.604755 systemd-networkd[714]: eth0: Gained IPv6LL May 8 00:46:49.847590 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 8 00:46:51.286497 ignition[851]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" May 8 00:46:51.286497 ignition[851]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 8 00:46:51.414110 ignition[851]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:46:51.417105 ignition[851]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:46:51.417105 ignition[851]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 8 00:46:51.417105 ignition[851]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" 
May 8 00:46:51.417105 ignition[851]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:46:51.452925 ignition[851]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:46:51.455192 ignition[851]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:46:51.455192 ignition[851]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:46:51.455192 ignition[851]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:46:51.455192 ignition[851]: INFO : files: files passed May 8 00:46:51.455192 ignition[851]: INFO : Ignition finished successfully May 8 00:46:51.465179 systemd[1]: Finished ignition-files.service. May 8 00:46:51.470906 kernel: audit: type=1130 audit(1746665211.465:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.467008 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 8 00:46:51.471432 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 8 00:46:51.472574 systemd[1]: Starting ignition-quench.service... May 8 00:46:51.484647 kernel: audit: type=1130 audit(1746665211.476:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.484687 kernel: audit: type=1131 audit(1746665211.476:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.475751 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:46:51.475868 systemd[1]: Finished ignition-quench.service. May 8 00:46:51.488941 initrd-setup-root-after-ignition[876]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 8 00:46:51.491815 initrd-setup-root-after-ignition[878]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:46:51.492562 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 8 00:46:51.524294 kernel: audit: type=1130 audit(1746665211.493:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:51.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.494393 systemd[1]: Reached target ignition-complete.target. May 8 00:46:51.526427 systemd[1]: Starting initrd-parse-etc.service... May 8 00:46:51.544730 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:46:51.544831 systemd[1]: Finished initrd-parse-etc.service. May 8 00:46:51.556234 kernel: audit: type=1130 audit(1746665211.547:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.556270 kernel: audit: type=1131 audit(1746665211.547:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.548374 systemd[1]: Reached target initrd-fs.target. May 8 00:46:51.558261 systemd[1]: Reached target initrd.target. May 8 00:46:51.560237 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 8 00:46:51.563038 systemd[1]: Starting dracut-pre-pivot.service... May 8 00:46:51.575079 systemd[1]: Finished dracut-pre-pivot.service. May 8 00:46:51.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.597336 systemd[1]: Starting initrd-cleanup.service... May 8 00:46:51.601164 kernel: audit: type=1130 audit(1746665211.595:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.608344 systemd[1]: Stopped target nss-lookup.target. May 8 00:46:51.610510 systemd[1]: Stopped target remote-cryptsetup.target. May 8 00:46:51.612886 systemd[1]: Stopped target timers.target. May 8 00:46:51.614922 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:46:51.616149 systemd[1]: Stopped dracut-pre-pivot.service. May 8 00:46:51.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.618204 systemd[1]: Stopped target initrd.target. May 8 00:46:51.622680 kernel: audit: type=1131 audit(1746665211.617:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.622734 systemd[1]: Stopped target basic.target. May 8 00:46:51.624236 systemd[1]: Stopped target ignition-complete.target. May 8 00:46:51.626242 systemd[1]: Stopped target ignition-diskful.target. May 8 00:46:51.628066 systemd[1]: Stopped target initrd-root-device.target. 
May 8 00:46:51.629914 systemd[1]: Stopped target remote-fs.target. May 8 00:46:51.631541 systemd[1]: Stopped target remote-fs-pre.target. May 8 00:46:51.651807 systemd[1]: Stopped target sysinit.target. May 8 00:46:51.653351 systemd[1]: Stopped target local-fs.target. May 8 00:46:51.654924 systemd[1]: Stopped target local-fs-pre.target. May 8 00:46:51.656589 systemd[1]: Stopped target swap.target. May 8 00:46:51.658065 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:46:51.659112 systemd[1]: Stopped dracut-pre-mount.service. May 8 00:46:51.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.661177 systemd[1]: Stopped target cryptsetup.target. May 8 00:46:51.662927 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:46:51.663988 systemd[1]: Stopped dracut-initqueue.service. May 8 00:46:51.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.665833 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:46:51.667076 systemd[1]: Stopped ignition-fetch-offline.service. May 8 00:46:51.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.693301 systemd[1]: Stopped target paths.target. May 8 00:46:51.694826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:46:51.699534 systemd[1]: Stopped systemd-ask-password-console.path. May 8 00:46:51.701463 systemd[1]: Stopped target slices.target. May 8 00:46:51.703187 systemd[1]: Stopped target sockets.target. May 8 00:46:51.704857 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:46:51.705810 systemd[1]: Closed iscsid.socket. May 8 00:46:51.707234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:46:51.708495 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 00:46:51.710704 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:46:51.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.710814 systemd[1]: Stopped ignition-files.service. May 8 00:46:51.714359 systemd[1]: Stopping ignition-mount.service... May 8 00:46:51.716213 systemd[1]: Stopping iscsiuio.service... May 8 00:46:51.717638 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:46:51.718781 systemd[1]: Stopped kmod-static-nodes.service. May 8 00:46:51.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.721686 systemd[1]: Stopping sysroot-boot.service... May 8 00:46:51.723199 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 8 00:46:51.724366 systemd[1]: Stopped systemd-udev-trigger.service. May 8 00:46:51.726176 ignition[891]: INFO : Ignition 2.14.0 May 8 00:46:51.726176 ignition[891]: INFO : Stage: umount May 8 00:46:51.726176 ignition[891]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:46:51.726176 ignition[891]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:46:51.726176 ignition[891]: INFO : umount: umount passed May 8 00:46:51.726176 ignition[891]: INFO : Ignition finished successfully May 8 00:46:51.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.726318 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:46:51.727963 systemd[1]: Stopped dracut-pre-trigger.service. May 8 00:46:51.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.741052 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:46:51.742703 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 00:46:51.743718 systemd[1]: Stopped iscsiuio.service. May 8 00:46:51.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.745806 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:46:51.746945 systemd[1]: Stopped ignition-mount.service. May 8 00:46:51.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.749037 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:46:51.750101 systemd[1]: Stopped sysroot-boot.service. May 8 00:46:51.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.752339 systemd[1]: Stopped target network.target. May 8 00:46:51.754238 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:46:51.754282 systemd[1]: Closed iscsiuio.socket. May 8 00:46:51.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.755983 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:46:51.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.756024 systemd[1]: Stopped ignition-disks.service. May 8 00:46:51.757658 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:46:51.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.757695 systemd[1]: Stopped ignition-kargs.service. May 8 00:46:51.759589 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:46:51.759639 systemd[1]: Stopped ignition-setup.service. 
May 8 00:46:51.800665 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:46:51.800886 systemd[1]: Stopped initrd-setup-root.service. May 8 00:46:51.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.807318 systemd[1]: Stopping systemd-networkd.service... May 8 00:46:51.809608 systemd[1]: Stopping systemd-resolved.service... May 8 00:46:51.812040 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:46:51.812564 systemd-networkd[714]: eth0: DHCPv6 lease lost May 8 00:46:51.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.813624 systemd[1]: Finished initrd-cleanup.service. May 8 00:46:51.817414 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:46:51.819907 systemd[1]: Stopped systemd-networkd.service. May 8 00:46:51.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.822929 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:46:51.824316 systemd[1]: Stopped systemd-resolved.service. May 8 00:46:51.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.827000 audit: BPF prog-id=9 op=UNLOAD May 8 00:46:51.828233 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:46:51.828289 systemd[1]: Closed systemd-networkd.socket. May 8 00:46:51.832648 systemd[1]: Stopping network-cleanup.service... May 8 00:46:51.833000 audit: BPF prog-id=6 op=UNLOAD May 8 00:46:51.834883 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:46:51.836153 systemd[1]: Stopped parse-ip-for-networkd.service. May 8 00:46:51.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.838714 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:46:51.839912 systemd[1]: Stopped systemd-sysctl.service. May 8 00:46:51.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.842183 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:46:51.842262 systemd[1]: Stopped systemd-modules-load.service. May 8 00:46:51.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.845952 systemd[1]: Stopping systemd-udevd.service... May 8 00:46:51.849989 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 8 00:46:51.852557 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:46:51.852676 systemd[1]: Stopped network-cleanup.service. May 8 00:46:51.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.857420 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:46:51.857610 systemd[1]: Stopped systemd-udevd.service. May 8 00:46:51.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.860739 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:46:51.860864 systemd[1]: Closed systemd-udevd-control.socket. May 8 00:46:51.861905 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:46:51.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.861934 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 00:46:51.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.864169 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:46:51.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.864247 systemd[1]: Stopped dracut-pre-udev.service. May 8 00:46:51.866281 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:46:51.866318 systemd[1]: Stopped dracut-cmdline.service. May 8 00:46:51.868557 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:46:51.868593 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 00:46:51.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:51.870579 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 00:46:51.871753 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:46:51.871883 systemd[1]: Stopped systemd-vconsole-setup.service. May 8 00:46:51.877925 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:46:51.878025 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 00:46:51.879855 systemd[1]: Reached target initrd-switch-root.target. May 8 00:46:51.881782 systemd[1]: Starting initrd-switch-root.service... May 8 00:46:51.896970 systemd[1]: Switching root. May 8 00:46:51.918204 iscsid[729]: iscsid shutting down. 
May 8 00:46:51.919085 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). May 8 00:46:51.919119 systemd-journald[198]: Journal stopped May 8 00:46:57.005541 kernel: SELinux: Class mctp_socket not defined in policy. May 8 00:46:57.005605 kernel: SELinux: Class anon_inode not defined in policy. May 8 00:46:57.005620 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 00:46:57.005636 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:46:57.005649 kernel: SELinux: policy capability open_perms=1 May 8 00:46:57.005662 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:46:57.005675 kernel: SELinux: policy capability always_check_network=0 May 8 00:46:57.005688 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:46:57.005706 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:46:57.005719 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:46:57.005731 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:46:57.005745 systemd[1]: Successfully loaded SELinux policy in 51.911ms. May 8 00:46:57.005771 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.036ms. May 8 00:46:57.005786 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:46:57.005800 systemd[1]: Detected virtualization kvm. May 8 00:46:57.005814 systemd[1]: Detected architecture x86-64. May 8 00:46:57.005828 systemd[1]: Detected first boot. May 8 00:46:57.005842 systemd[1]: Initializing machine ID from VM UUID. May 8 00:46:57.005862 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 8 00:46:57.005876 systemd[1]: Populated /etc with preset unit settings. May 8 00:46:57.005896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:46:57.005916 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:46:57.005931 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:46:57.005946 kernel: kauditd_printk_skb: 48 callbacks suppressed May 8 00:46:57.005958 kernel: audit: type=1334 audit(1746665216.824:85): prog-id=12 op=LOAD May 8 00:46:57.005971 kernel: audit: type=1334 audit(1746665216.824:86): prog-id=3 op=UNLOAD May 8 00:46:57.005984 kernel: audit: type=1334 audit(1746665216.827:87): prog-id=13 op=LOAD May 8 00:46:57.005999 kernel: audit: type=1334 audit(1746665216.828:88): prog-id=14 op=LOAD May 8 00:46:57.006012 kernel: audit: type=1334 audit(1746665216.828:89): prog-id=4 op=UNLOAD May 8 00:46:57.006024 kernel: audit: type=1334 audit(1746665216.828:90): prog-id=5 op=UNLOAD May 8 00:46:57.006037 kernel: audit: type=1334 audit(1746665216.831:91): prog-id=15 op=LOAD May 8 00:46:57.006050 kernel: audit: type=1334 audit(1746665216.831:92): prog-id=12 op=UNLOAD May 8 00:46:57.006063 kernel: audit: type=1334 audit(1746665216.834:93): prog-id=16 op=LOAD May 8 00:46:57.006076 kernel: audit: type=1334 audit(1746665216.836:94): prog-id=17 op=LOAD May 8 00:46:57.006089 systemd[1]: iscsid.service: Deactivated successfully. May 8 00:46:57.006103 systemd[1]: Stopped iscsid.service. May 8 00:46:57.006119 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:46:57.006133 systemd[1]: Stopped initrd-switch-root.service. May 8 00:46:57.006146 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:46:57.006165 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 00:46:57.006179 systemd[1]: Created slice system-addon\x2drun.slice. May 8 00:46:57.006194 systemd[1]: Created slice system-getty.slice. May 8 00:46:57.006207 systemd[1]: Created slice system-modprobe.slice. May 8 00:46:57.006224 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 00:46:57.006238 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 8 00:46:57.006252 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 00:46:57.006266 systemd[1]: Created slice user.slice. May 8 00:46:57.006280 systemd[1]: Started systemd-ask-password-console.path. May 8 00:46:57.006294 systemd[1]: Started systemd-ask-password-wall.path. May 8 00:46:57.006308 systemd[1]: Set up automount boot.automount. May 8 00:46:57.006322 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:46:57.006338 systemd[1]: Stopped target initrd-switch-root.target. May 8 00:46:57.006352 systemd[1]: Stopped target initrd-fs.target. May 8 00:46:57.006370 systemd[1]: Stopped target initrd-root-fs.target. May 8 00:46:57.006384 systemd[1]: Reached target integritysetup.target. May 8 00:46:57.006399 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:46:57.006413 systemd[1]: Reached target remote-fs.target. May 8 00:46:57.006427 systemd[1]: Reached target slices.target. May 8 00:46:57.006443 systemd[1]: Reached target swap.target. May 8 00:46:57.006467 systemd[1]: Reached target torcx.target. May 8 00:46:57.006499 systemd[1]: Reached target veritysetup.target. May 8 00:46:57.006513 systemd[1]: Listening on systemd-coredump.socket. May 8 00:46:57.006528 systemd[1]: Listening on systemd-initctl.socket. May 8 00:46:57.006542 systemd[1]: Listening on systemd-networkd.socket. May 8 00:46:57.006556 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:46:57.006570 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:46:57.006584 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:46:57.006599 systemd[1]: Mounting dev-hugepages.mount... May 8 00:46:57.006616 systemd[1]: Mounting dev-mqueue.mount... 
May 8 00:46:57.006629 systemd[1]: Mounting media.mount... May 8 00:46:57.006644 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:57.006658 systemd[1]: Mounting sys-kernel-debug.mount... May 8 00:46:57.006671 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:46:57.006685 systemd[1]: Mounting tmp.mount... May 8 00:46:57.006699 systemd[1]: Starting flatcar-tmpfiles.service... May 8 00:46:57.006713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:46:57.006728 systemd[1]: Starting kmod-static-nodes.service... May 8 00:46:57.006744 systemd[1]: Starting modprobe@configfs.service... May 8 00:46:57.006758 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:46:57.006772 systemd[1]: Starting modprobe@drm.service... May 8 00:46:57.006786 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:46:57.006806 systemd[1]: Starting modprobe@fuse.service... May 8 00:46:57.006819 systemd[1]: Starting modprobe@loop.service... May 8 00:46:57.006834 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:46:57.006848 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:46:57.006862 systemd[1]: Stopped systemd-fsck-root.service. May 8 00:46:57.006879 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:46:57.006893 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:46:57.006907 systemd[1]: Stopped systemd-journald.service. May 8 00:46:57.006920 kernel: fuse: init (API version 7.34) May 8 00:46:57.006934 systemd[1]: Starting systemd-journald.service... May 8 00:46:57.006948 kernel: loop: module loaded May 8 00:46:57.006961 systemd[1]: Starting systemd-modules-load.service... May 8 00:46:57.006975 systemd[1]: Starting systemd-network-generator.service... May 8 00:46:57.006990 systemd[1]: Starting systemd-remount-fs.service... May 8 00:46:57.007006 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:46:57.007020 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:46:57.007034 systemd[1]: Stopped verity-setup.service. May 8 00:46:57.007049 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:57.007066 systemd-journald[1007]: Journal started May 8 00:46:57.007114 systemd-journald[1007]: Runtime Journal (/run/log/journal/45d75c614bee4dbe926918445469d286) is 6.0M, max 48.5M, 42.5M free. 
May 8 00:46:51.999000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:46:52.725000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:46:52.725000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:46:52.725000 audit: BPF prog-id=10 op=LOAD May 8 00:46:52.725000 audit: BPF prog-id=10 op=UNLOAD May 8 00:46:52.725000 audit: BPF prog-id=11 op=LOAD May 8 00:46:52.725000 audit: BPF prog-id=11 op=UNLOAD May 8 00:46:52.760000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 8 00:46:52.760000 audit[924]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558a4 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:46:52.760000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:46:52.762000 audit[924]: AVC avc: denied { associate } for pid=924 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 8 00:46:52.762000 audit[924]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155979 a2=1ed a3=0 items=2 ppid=907 pid=924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:46:52.762000 audit: CWD cwd="/" May 8 00:46:52.762000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:52.762000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:52.762000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:46:56.824000 audit: BPF prog-id=12 op=LOAD May 8 00:46:56.824000 audit: BPF prog-id=3 op=UNLOAD May 8 00:46:56.827000 audit: BPF prog-id=13 op=LOAD May 8 00:46:56.828000 audit: BPF prog-id=14 op=LOAD May 8 00:46:56.828000 audit: BPF prog-id=4 op=UNLOAD May 8 00:46:56.828000 audit: BPF prog-id=5 op=UNLOAD May 8 00:46:56.831000 audit: BPF prog-id=15 op=LOAD May 8 00:46:56.831000 audit: BPF prog-id=12 op=UNLOAD May 8 00:46:56.834000 audit: BPF prog-id=16 
op=LOAD May 8 00:46:56.836000 audit: BPF prog-id=17 op=LOAD May 8 00:46:56.836000 audit: BPF prog-id=13 op=UNLOAD May 8 00:46:56.836000 audit: BPF prog-id=14 op=UNLOAD May 8 00:46:56.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.850000 audit: BPF prog-id=15 op=UNLOAD May 8 00:46:56.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:56.981000 audit: BPF prog-id=18 op=LOAD May 8 00:46:56.981000 audit: BPF prog-id=19 op=LOAD May 8 00:46:56.981000 audit: BPF prog-id=20 op=LOAD May 8 00:46:56.981000 audit: BPF prog-id=16 op=UNLOAD May 8 00:46:56.981000 audit: BPF prog-id=17 op=UNLOAD May 8 00:46:57.003000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:46:57.003000 audit[1007]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd86fefde0 a2=4000 a3=7ffd86fefe7c items=0 ppid=1 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:46:57.003000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:46:57.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:52.759586 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:46:56.822254 systemd[1]: Queued start job for default target multi-user.target. May 8 00:46:52.759815 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:46:56.822271 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 00:46:52.759858 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:46:56.839039 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:46:52.759889 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 8 00:46:52.759898 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="skipped missing lower profile" missing profile=oem May 8 00:46:52.759926 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 8 00:46:52.759939 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 8 00:46:52.760173 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 8 00:46:52.760212 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:46:52.760226 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:46:52.760976 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 8 00:46:52.761015 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 8 00:46:52.761038 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 8 00:46:52.761055 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 8 00:46:52.761070 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=info msg="store 
skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 8 00:46:52.761082 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 8 00:46:56.452978 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:46:56.453326 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:46:57.010574 systemd[1]: Started systemd-journald.service. May 8 00:46:56.453529 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:46:56.453808 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:46:56.453883 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 8 00:46:56.453971 /usr/lib/systemd/system-generators/torcx-generator[924]: time="2025-05-08T00:46:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 8 00:46:57.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.011836 systemd[1]: Mounted dev-hugepages.mount. May 8 00:46:57.012834 systemd[1]: Mounted dev-mqueue.mount. May 8 00:46:57.013815 systemd[1]: Mounted media.mount. May 8 00:46:57.015401 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:46:57.016721 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:46:57.017695 systemd[1]: Mounted tmp.mount. May 8 00:46:57.018783 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:46:57.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.020055 systemd[1]: Finished kmod-static-nodes.service. May 8 00:46:57.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:57.021236 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:46:57.021442 systemd[1]: Finished modprobe@configfs.service. May 8 00:46:57.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.022674 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:46:57.022795 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:46:57.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.023967 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:46:57.024158 systemd[1]: Finished modprobe@drm.service. May 8 00:46:57.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.025271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:46:57.025536 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:46:57.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.026954 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:46:57.027184 systemd[1]: Finished modprobe@fuse.service. May 8 00:46:57.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.028851 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:46:57.029117 systemd[1]: Finished modprobe@loop.service. May 8 00:46:57.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:57.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.030467 systemd[1]: Finished systemd-modules-load.service. May 8 00:46:57.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.032124 systemd[1]: Finished systemd-network-generator.service. May 8 00:46:57.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.033556 systemd[1]: Finished systemd-remount-fs.service. May 8 00:46:57.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.035092 systemd[1]: Reached target network-pre.target. May 8 00:46:57.037636 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:46:57.039939 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:46:57.040879 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:46:57.044221 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:46:57.046307 systemd[1]: Starting systemd-journal-flush.service... May 8 00:46:57.047512 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:46:57.049077 systemd[1]: Starting systemd-random-seed.service... May 8 00:46:57.050125 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:46:57.051667 systemd[1]: Starting systemd-sysctl.service... May 8 00:46:57.053416 systemd-journald[1007]: Time spent on flushing to /var/log/journal/45d75c614bee4dbe926918445469d286 is 20.437ms for 1100 entries. May 8 00:46:57.053416 systemd-journald[1007]: System Journal (/var/log/journal/45d75c614bee4dbe926918445469d286) is 8.0M, max 195.6M, 187.6M free. May 8 00:46:57.092216 systemd-journald[1007]: Received client request to flush runtime journal. May 8 00:46:57.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:57.054891 systemd[1]: Starting systemd-sysusers.service... 
May 8 00:46:57.059117 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 00:46:57.094097 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:46:57.060239 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:46:57.063696 systemd[1]: Finished systemd-random-seed.service. May 8 00:46:57.064871 systemd[1]: Reached target first-boot-complete.target. May 8 00:46:57.076868 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:46:57.078404 systemd[1]: Finished systemd-sysctl.service. May 8 00:46:57.079695 systemd[1]: Finished systemd-sysusers.service. May 8 00:46:57.081962 systemd[1]: Starting systemd-udev-settle.service... May 8 00:46:57.093695 systemd[1]: Finished systemd-journal-flush.service. May 8 00:46:57.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.061800 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:46:58.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.072000 audit: BPF prog-id=21 op=LOAD May 8 00:46:58.072000 audit: BPF prog-id=22 op=LOAD May 8 00:46:58.072000 audit: BPF prog-id=7 op=UNLOAD May 8 00:46:58.072000 audit: BPF prog-id=8 op=UNLOAD May 8 00:46:58.074198 systemd[1]: Starting systemd-udevd.service... May 8 00:46:58.098208 systemd-udevd[1031]: Using default interface naming scheme 'v252'. May 8 00:46:58.114144 systemd[1]: Started systemd-udevd.service. May 8 00:46:58.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.149000 audit: BPF prog-id=23 op=LOAD May 8 00:46:58.150569 systemd[1]: Starting systemd-networkd.service... May 8 00:46:58.155846 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 8 00:46:58.157000 audit: BPF prog-id=24 op=LOAD May 8 00:46:58.157000 audit: BPF prog-id=25 op=LOAD May 8 00:46:58.157000 audit: BPF prog-id=26 op=LOAD May 8 00:46:58.158695 systemd[1]: Starting systemd-userdbd.service... May 8 00:46:58.181529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:46:58.199507 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 8 00:46:58.201615 systemd[1]: Started systemd-userdbd.service. May 8 00:46:58.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:58.212496 kernel: ACPI: button: Power Button [PWRF] May 8 00:46:58.233000 audit[1036]: AVC avc: denied { confidentiality } for pid=1036 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 8 00:46:58.233000 audit[1036]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556c612b0190 a1=338ac a2=7f73b9717bc5 a3=5 items=110 ppid=1031 pid=1036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:46:58.233000 audit: CWD cwd="/" May 8 00:46:58.233000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=1 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=2 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=3 name=(null) inode=12809 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=4 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=5 name=(null) inode=12810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=6 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=7 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=8 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=9 name=(null) inode=12812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=10 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=11 name=(null) inode=12813 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=12 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=13 
name=(null) inode=12814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=14 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=15 name=(null) inode=12815 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=16 name=(null) inode=12811 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=17 name=(null) inode=12816 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=18 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=19 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=20 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=21 name=(null) inode=12818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=22 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=23 name=(null) inode=12819 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=24 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=25 name=(null) inode=12820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=26 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=27 name=(null) inode=12821 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=28 name=(null) inode=12817 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=29 name=(null) inode=12822 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=30 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=31 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=32 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=33 name=(null) inode=12824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=34 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=35 name=(null) inode=12825 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=36 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=37 name=(null) inode=12826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=38 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=39 name=(null) inode=12827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=40 name=(null) inode=12823 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=41 name=(null) inode=12828 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=42 name=(null) inode=12808 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=43 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=44 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=45 name=(null) inode=12830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=46 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=47 name=(null) inode=12831 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=48 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=49 name=(null) inode=12832 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=50 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=51 name=(null) inode=12833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=52 name=(null) inode=12829 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=53 name=(null) inode=12834 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=55 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=56 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=57 name=(null) inode=12836 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=58 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=59 name=(null) inode=12837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=60 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=61 name=(null) inode=12838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=62 name=(null) inode=12838 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=63 name=(null) inode=12839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=64 name=(null) inode=12838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=65 name=(null) inode=12840 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=66 name=(null) inode=12838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=67 name=(null) inode=12841 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=68 name=(null) inode=12838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=69 name=(null) inode=12842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=70 name=(null) inode=12838 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=71 name=(null) inode=12843 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=72 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=73 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=74 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=75 name=(null) inode=12845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=76 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=77 name=(null) inode=12846 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=78 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=79 name=(null) inode=12847 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=80 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=81 name=(null) inode=12848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=82 name=(null) inode=12844 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=83 name=(null) inode=12849 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=84 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=85 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=86 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=87 name=(null) inode=12851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=88 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=89 name=(null) inode=12852 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=90 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=91 name=(null) inode=12853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=92 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=93 name=(null) inode=12854 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=94 name=(null) inode=12850 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH 
item=95 name=(null) inode=12855 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=96 name=(null) inode=12835 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=97 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=98 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=99 name=(null) inode=12857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=100 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=101 name=(null) inode=12858 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=102 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=103 name=(null) inode=12859 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=104 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=105 name=(null) inode=12860 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=106 name=(null) inode=12856 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=107 name=(null) inode=12861 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PATH item=109 name=(null) inode=12864 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:46:58.233000 audit: PROCTITLE proctitle="(udev-worker)" May 8 00:46:58.302807 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 8 00:46:58.304328 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 8 00:46:58.304533 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 8 
00:46:58.303865 systemd-networkd[1050]: lo: Link UP May 8 00:46:58.303870 systemd-networkd[1050]: lo: Gained carrier May 8 00:46:58.304309 systemd-networkd[1050]: Enumeration completed May 8 00:46:58.304408 systemd[1]: Started systemd-networkd.service. May 8 00:46:58.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.305736 systemd-networkd[1050]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:46:58.307057 systemd-networkd[1050]: eth0: Link UP May 8 00:46:58.307065 systemd-networkd[1050]: eth0: Gained carrier May 8 00:46:58.345503 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 8 00:46:58.361499 kernel: mousedev: PS/2 mouse device common for all mice May 8 00:46:58.372519 kernel: kvm: Nested Virtualization enabled May 8 00:46:58.372801 kernel: SVM: kvm: Nested Paging enabled May 8 00:46:58.372836 kernel: SVM: Virtual VMLOAD VMSAVE supported May 8 00:46:58.372868 kernel: SVM: Virtual GIF supported May 8 00:46:58.377975 systemd-networkd[1050]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:46:58.393511 kernel: EDAC MC: Ver: 3.0.0 May 8 00:46:58.428097 systemd[1]: Finished systemd-udev-settle.service. May 8 00:46:58.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.434730 systemd[1]: Starting lvm2-activation-early.service... May 8 00:46:58.442285 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:46:58.473635 systemd[1]: Finished lvm2-activation-early.service. May 8 00:46:58.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.474890 systemd[1]: Reached target cryptsetup.target. May 8 00:46:58.477596 systemd[1]: Starting lvm2-activation.service... May 8 00:46:58.481943 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:46:58.506916 systemd[1]: Finished lvm2-activation.service. May 8 00:46:58.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.508306 systemd[1]: Reached target local-fs-pre.target. May 8 00:46:58.509563 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:46:58.509597 systemd[1]: Reached target local-fs.target. May 8 00:46:58.510691 systemd[1]: Reached target machines.target. May 8 00:46:58.513396 systemd[1]: Starting ldconfig.service... May 8 00:46:58.514784 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:46:58.514824 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:58.515812 systemd[1]: Starting systemd-boot-update.service... 
May 8 00:46:58.518409 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:46:58.539665 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:46:58.542188 systemd[1]: Starting systemd-sysext.service... May 8 00:46:58.543656 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) May 8 00:46:58.545256 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:46:58.548249 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:46:58.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.559407 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:46:58.564661 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:46:58.564874 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:46:58.583506 kernel: loop0: detected capacity change from 0 to 218376 May 8 00:46:58.603982 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) May 8 00:46:58.603982 systemd-fsck[1077]: /dev/vda1: 790 files, 120710/258078 clusters May 8 00:46:58.605700 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:46:58.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:58.609674 systemd[1]: Mounting boot.mount... May 8 00:46:59.179507 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:46:59.206340 systemd[1]: Mounted boot.mount. May 8 00:46:59.208099 kernel: loop1: detected capacity change from 0 to 218376 May 8 00:46:59.214628 (sd-sysext)[1082]: Using extensions 'kubernetes'. May 8 00:46:59.215091 (sd-sysext)[1082]: Merged extensions into '/usr'. May 8 00:46:59.243275 systemd[1]: Finished systemd-boot-update.service. May 8 00:46:59.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.245036 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.246634 systemd[1]: Mounting usr-share-oem.mount... May 8 00:46:59.251046 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:46:59.252754 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:46:59.255124 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:46:59.257673 systemd[1]: Starting modprobe@loop.service... May 8 00:46:59.258807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:46:59.258990 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.259129 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.262570 systemd[1]: Mounted usr-share-oem.mount. May 8 00:46:59.264170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 8 00:46:59.264343 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:46:59.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.266240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:46:59.266440 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:46:59.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.268038 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:46:59.268182 systemd[1]: Finished modprobe@loop.service. May 8 00:46:59.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.270021 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:46:59.270114 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:46:59.271254 systemd[1]: Finished systemd-sysext.service. May 8 00:46:59.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.274027 systemd[1]: Starting ensure-sysext.service... May 8 00:46:59.276294 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:46:59.340353 systemd[1]: Reloading. May 8 00:46:59.344196 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:46:59.347497 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:46:59.353646 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:46:59.372360 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
May 8 00:46:59.401894 systemd-networkd[1050]: eth0: Gained IPv6LL May 8 00:46:59.430326 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-05-08T00:46:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:46:59.430355 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-05-08T00:46:59Z" level=info msg="torcx already run" May 8 00:46:59.618626 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:46:59.618652 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:46:59.638299 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:46:59.692000 audit: BPF prog-id=27 op=LOAD May 8 00:46:59.692000 audit: BPF prog-id=18 op=UNLOAD May 8 00:46:59.692000 audit: BPF prog-id=28 op=LOAD May 8 00:46:59.692000 audit: BPF prog-id=29 op=LOAD May 8 00:46:59.692000 audit: BPF prog-id=19 op=UNLOAD May 8 00:46:59.692000 audit: BPF prog-id=20 op=UNLOAD May 8 00:46:59.693000 audit: BPF prog-id=30 op=LOAD May 8 00:46:59.693000 audit: BPF prog-id=24 op=UNLOAD May 8 00:46:59.693000 audit: BPF prog-id=31 op=LOAD May 8 00:46:59.693000 audit: BPF prog-id=32 op=LOAD May 8 00:46:59.693000 audit: BPF prog-id=25 op=UNLOAD May 8 00:46:59.693000 audit: BPF prog-id=26 op=UNLOAD May 8 00:46:59.694000 audit: BPF prog-id=33 op=LOAD May 8 00:46:59.694000 audit: BPF prog-id=34 op=LOAD May 8 00:46:59.694000 audit: BPF prog-id=21 op=UNLOAD May 8 00:46:59.694000 audit: BPF prog-id=22 op=UNLOAD May 8 00:46:59.696000 audit: BPF prog-id=35 op=LOAD May 8 00:46:59.696000 audit: BPF prog-id=23 op=UNLOAD May 8 00:46:59.698754 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:46:59.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.705539 systemd[1]: Starting audit-rules.service... May 8 00:46:59.709162 systemd[1]: Starting clean-ca-certificates.service... May 8 00:46:59.711675 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:46:59.713000 audit: BPF prog-id=36 op=LOAD May 8 00:46:59.714611 systemd[1]: Starting systemd-resolved.service... May 8 00:46:59.716000 audit: BPF prog-id=37 op=LOAD May 8 00:46:59.717378 systemd[1]: Starting systemd-timesyncd.service... May 8 00:46:59.719206 systemd[1]: Starting systemd-update-utmp.service... May 8 00:46:59.720711 systemd[1]: Finished clean-ca-certificates.service. May 8 00:46:59.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.724509 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 8 00:46:59.726537 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.726720 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:46:59.728257 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:46:59.727000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:46:59.730594 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:46:59.733036 systemd[1]: Starting modprobe@loop.service... May 8 00:46:59.734130 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:46:59.734405 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.734616 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:46:59.734748 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.735793 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:46:59.735925 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:46:59.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.737540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:46:59.737662 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:46:59.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.738914 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:46:59.739042 systemd[1]: Finished modprobe@loop.service. May 8 00:46:59.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.740408 systemd[1]: Finished systemd-journal-catalog-update.service. 
May 8 00:46:59.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.743640 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:46:59.743780 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:46:59.745840 systemd[1]: Finished systemd-update-utmp.service. May 8 00:46:59.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.749358 systemd[1]: Finished ldconfig.service. May 8 00:46:59.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.751645 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.751901 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:46:59.753643 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:46:59.756426 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:46:59.758763 systemd[1]: Starting modprobe@loop.service... May 8 00:46:59.759648 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:46:59.759795 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.761606 systemd[1]: Starting systemd-update-done.service... May 8 00:46:59.762749 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:46:59.762922 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.764454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:46:59.764694 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:46:59.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.766287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:46:59.766503 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:46:59.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:46:59.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.768009 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:46:59.768170 systemd[1]: Finished modprobe@loop.service. May 8 00:46:59.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.772275 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.772692 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:46:59.774621 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:46:59.778835 systemd[1]: Starting modprobe@drm.service... May 8 00:46:59.781233 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:46:59.784010 systemd[1]: Starting modprobe@loop.service... May 8 00:46:59.784978 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:46:59.785105 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.786742 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 00:46:59.787856 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:46:59.788088 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 8 00:46:59.789820 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:46:59.790023 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:46:59.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.791436 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:46:59.791593 systemd[1]: Finished modprobe@drm.service. May 8 00:46:59.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.795371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:46:59.795505 systemd[1]: Finished modprobe@efi_pstore.service. 
May 8 00:46:59.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.796780 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:46:59.796892 systemd[1]: Finished modprobe@loop.service. May 8 00:46:59.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.798215 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:46:59.798316 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:46:59.799393 systemd[1]: Finished ensure-sysext.service. May 8 00:46:59.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.811433 systemd[1]: Started systemd-timesyncd.service. May 8 00:46:59.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.812729 systemd[1]: Finished systemd-update-done.service. May 8 00:46:59.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:46:59.814551 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:46:59.814000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:46:59.814000 audit[1184]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf0a0dd60 a2=420 a3=0 items=0 ppid=1152 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:46:59.814000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:46:59.815422 augenrules[1184]: No rules May 8 00:46:59.816142 systemd[1]: Finished audit-rules.service. May 8 00:46:59.817306 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:46:59.817346 systemd[1]: Reached target time-set.target. May 8 00:46:59.817371 systemd-timesyncd[1159]: Initial clock synchronization to Thu 2025-05-08 00:46:59.829962 UTC. May 8 00:46:59.833844 systemd-resolved[1158]: Positive Trust Anchors: May 8 00:46:59.833868 systemd-resolved[1158]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:46:59.833904 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:46:59.866953 systemd-resolved[1158]: Defaulting to hostname 'linux'. May 8 00:46:59.868865 systemd[1]: Started systemd-resolved.service. May 8 00:46:59.869961 systemd[1]: Reached target network.target. May 8 00:46:59.870894 systemd[1]: Reached target network-online.target. May 8 00:46:59.871915 systemd[1]: Reached target nss-lookup.target. May 8 00:46:59.872906 systemd[1]: Reached target sysinit.target. May 8 00:46:59.874177 systemd[1]: Started motdgen.path. May 8 00:46:59.875341 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:46:59.876921 systemd[1]: Started logrotate.timer. May 8 00:46:59.879956 systemd[1]: Started mdadm.timer. May 8 00:46:59.880766 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:46:59.881719 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:46:59.881751 systemd[1]: Reached target paths.target. May 8 00:46:59.882654 systemd[1]: Reached target timers.target. May 8 00:46:59.883922 systemd[1]: Listening on dbus.socket. May 8 00:46:59.885874 systemd[1]: Starting docker.socket... May 8 00:46:59.888974 systemd[1]: Listening on sshd.socket. May 8 00:46:59.889833 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.891185 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:46:59.891686 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:46:59.892805 systemd[1]: Listening on docker.socket. May 8 00:46:59.893710 systemd[1]: Reached target sockets.target. May 8 00:46:59.894570 systemd[1]: Reached target basic.target. May 8 00:46:59.895368 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:46:59.895402 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:46:59.896329 systemd[1]: Starting containerd.service... May 8 00:46:59.898069 systemd[1]: Starting dbus.service... May 8 00:46:59.900068 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:46:59.902111 systemd[1]: Starting extend-filesystems.service... May 8 00:46:59.903151 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:46:59.904549 systemd[1]: Starting kubelet.service... May 8 00:46:59.906889 jq[1194]: false May 8 00:46:59.906683 systemd[1]: Starting motdgen.service... May 8 00:46:59.908977 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:46:59.911057 systemd[1]: Starting sshd-keygen.service... 
May 8 00:46:59.937947 dbus-daemon[1193]: [system] SELinux support is enabled May 8 00:46:59.939820 extend-filesystems[1195]: Found loop1 May 8 00:46:59.939820 extend-filesystems[1195]: Found sr0 May 8 00:46:59.939820 extend-filesystems[1195]: Found vda May 8 00:46:59.939820 extend-filesystems[1195]: Found vda1 May 8 00:46:59.939820 extend-filesystems[1195]: Found vda2 May 8 00:46:59.941589 systemd[1]: Starting systemd-logind.service... May 8 00:46:59.945958 extend-filesystems[1195]: Found vda3 May 8 00:46:59.945958 extend-filesystems[1195]: Found usr May 8 00:46:59.945958 extend-filesystems[1195]: Found vda4 May 8 00:46:59.945958 extend-filesystems[1195]: Found vda6 May 8 00:46:59.945958 extend-filesystems[1195]: Found vda7 May 8 00:46:59.945958 extend-filesystems[1195]: Found vda9 May 8 00:46:59.945958 extend-filesystems[1195]: Checking size of /dev/vda9 May 8 00:46:59.943595 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:46:59.943655 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:46:59.948080 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:46:59.950324 systemd[1]: Starting update-engine.service... May 8 00:46:59.955105 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:46:59.960366 systemd[1]: Started dbus.service. May 8 00:46:59.962348 jq[1215]: true May 8 00:46:59.968424 extend-filesystems[1195]: Resized partition /dev/vda9 May 8 00:46:59.983288 extend-filesystems[1218]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:46:59.987006 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:46:59.987207 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:46:59.988503 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:46:59.988725 systemd[1]: Finished motdgen.service. May 8 00:47:00.011935 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:47:00.012148 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:47:00.018509 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:47:00.027725 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:47:00.028899 systemd[1]: Reached target system-config.target. May 8 00:47:00.029992 jq[1222]: true May 8 00:47:00.030551 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:47:00.030574 systemd[1]: Reached target user-config.target. May 8 00:47:00.165723 update_engine[1214]: I0508 00:47:00.165462 1214 main.cc:92] Flatcar Update Engine starting May 8 00:47:00.168102 systemd[1]: Started update-engine.service. May 8 00:47:00.338369 update_engine[1214]: I0508 00:47:00.172855 1214 update_check_scheduler.cc:74] Next update check in 2m43s May 8 00:47:00.170957 systemd[1]: Started locksmithd.service. 
May 8 00:47:00.334065 systemd-logind[1212]: Watching system buttons on /dev/input/event1 (Power Button) May 8 00:47:00.334089 systemd-logind[1212]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 8 00:47:00.334315 systemd-logind[1212]: New seat seat0. May 8 00:47:00.337343 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:47:00.339371 systemd[1]: Started systemd-logind.service. May 8 00:47:00.347121 env[1223]: time="2025-05-08T00:47:00.347023470Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:47:00.367566 env[1223]: time="2025-05-08T00:47:00.367464328Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:47:00.367885 env[1223]: time="2025-05-08T00:47:00.367840644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.369606 env[1223]: time="2025-05-08T00:47:00.369560699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:00.369671 env[1223]: time="2025-05-08T00:47:00.369601479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.369880 env[1223]: time="2025-05-08T00:47:00.369850504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:00.369880 env[1223]: time="2025-05-08T00:47:00.369870759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.369963 env[1223]: time="2025-05-08T00:47:00.369882700Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:47:00.369963 env[1223]: time="2025-05-08T00:47:00.369891223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.370047 env[1223]: time="2025-05-08T00:47:00.370028479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.370361 env[1223]: time="2025-05-08T00:47:00.370329393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:47:00.370578 env[1223]: time="2025-05-08T00:47:00.370541461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:47:00.370578 env[1223]: time="2025-05-08T00:47:00.370567961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 8 00:47:00.370694 env[1223]: time="2025-05-08T00:47:00.370650813Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 00:47:00.370694 env[1223]: time="2025-05-08T00:47:00.370681043Z" level=info msg="metadata content store policy set" policy=shared May 8 00:47:00.441531 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:47:00.497808 extend-filesystems[1218]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:47:00.497808 extend-filesystems[1218]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:47:00.497808 extend-filesystems[1218]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:47:00.506448 extend-filesystems[1195]: Resized filesystem in /dev/vda9 May 8 00:47:00.503517 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:47:00.503708 systemd[1]: Finished extend-filesystems.service. May 8 00:47:00.543044 env[1223]: time="2025-05-08T00:47:00.542964595Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:47:00.543044 env[1223]: time="2025-05-08T00:47:00.543037008Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:47:00.543044 env[1223]: time="2025-05-08T00:47:00.543054034Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543128051Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543165672Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543185285Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543196735Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543210131Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543222073Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543236732Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543247992Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:47:00.543308 env[1223]: time="2025-05-08T00:47:00.543259523Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:47:00.543620 env[1223]: time="2025-05-08T00:47:00.543418167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:47:00.543620 env[1223]: time="2025-05-08T00:47:00.543534066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 May 8 00:47:00.544065 env[1223]: time="2025-05-08T00:47:00.543985042Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:47:00.544065 env[1223]: time="2025-05-08T00:47:00.544086081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544275 env[1223]: time="2025-05-08T00:47:00.544107539Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:47:00.544275 env[1223]: time="2025-05-08T00:47:00.544211778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544275 env[1223]: time="2025-05-08T00:47:00.544232482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544275 env[1223]: time="2025-05-08T00:47:00.544248055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544275 env[1223]: time="2025-05-08T00:47:00.544264198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544378 env[1223]: time="2025-05-08T00:47:00.544280320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544378 env[1223]: time="2025-05-08T00:47:00.544307794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544378 env[1223]: time="2025-05-08T00:47:00.544324277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544378 env[1223]: time="2025-05-08T00:47:00.544339499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:47:00.544378 env[1223]: time="2025-05-08T00:47:00.544358108Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:47:00.544538 bash[1238]: Updated "/home/core/.ssh/authorized_keys" May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.544907095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.544941858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.544959144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.544973744Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.544994088Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.545011585Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.545049355Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:47:00.547575 env[1223]: time="2025-05-08T00:47:00.545106457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:47:00.545676 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:47:00.548008 systemd[1]: Started containerd.service. May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.546267661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.546356498Z" level=info msg="Connect containerd service" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.546435249Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.547053139Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.547816981Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.547865741Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.547922653Z" level=info msg="containerd successfully booted in 0.202021s" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549626576Z" level=info msg="Start subscribing containerd event" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549696522Z" level=info msg="Start recovering state" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549823753Z" level=info msg="Start event monitor" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549860410Z" level=info msg="Start snapshots syncer" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549880604Z" level=info msg="Start cni network conf syncer for default" May 8 00:47:00.551786 env[1223]: time="2025-05-08T00:47:00.549891574Z" level=info msg="Start streaming server" May 8 00:47:00.862674 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:47:00.888944 systemd[1]: Finished sshd-keygen.service. May 8 00:47:00.898359 systemd[1]: Starting issuegen.service... May 8 00:47:00.907643 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:47:00.908026 systemd[1]: Finished issuegen.service. May 8 00:47:00.911553 systemd[1]: Starting systemd-user-sessions.service... May 8 00:47:00.920286 systemd[1]: Finished systemd-user-sessions.service. May 8 00:47:00.923330 systemd[1]: Started getty@tty1.service. May 8 00:47:00.925662 systemd[1]: Started serial-getty@ttyS0.service. May 8 00:47:00.927238 systemd[1]: Reached target getty.target. May 8 00:47:01.993924 systemd[1]: Started kubelet.service. May 8 00:47:01.995891 systemd[1]: Reached target multi-user.target. May 8 00:47:01.998392 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:47:02.006182 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:47:02.006392 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:47:02.007737 systemd[1]: Startup finished in 954ms (kernel) + 6.960s (initrd) + 10.063s (userspace) = 17.978s. May 8 00:47:02.462025 kubelet[1271]: E0508 00:47:02.461811 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:47:02.464123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:47:02.464268 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:47:02.464575 systemd[1]: kubelet.service: Consumed 1.765s CPU time. May 8 00:47:09.216643 systemd[1]: Created slice system-sshd.slice. May 8 00:47:09.217738 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:33596.service. May 8 00:47:09.262225 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:09.264111 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.272765 systemd[1]: Created slice user-500.slice. May 8 00:47:09.273966 systemd[1]: Starting user-runtime-dir@500.service... May 8 00:47:09.275573 systemd-logind[1212]: New session 1 of user core. May 8 00:47:09.283436 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:47:09.285025 systemd[1]: Starting user@500.service... 
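The kubelet failure above comes down to a single missing file: the service starts with a config file expected at /var/lib/kubelet/config.yaml before anything (typically kubeadm or a provisioning step, which is an assumption here) has written it, so the process exits and systemd keeps restarting it. A minimal Python sketch of the same pre-flight check, purely to illustrate what the error means (the path is taken from the log; the script is not part of Flatcar or the kubelet):

import os
import sys

CONFIG_PATH = "/var/lib/kubelet/config.yaml"   # path reported in the error above

if not os.path.isfile(CONFIG_PATH):
    # Mirrors the condition the kubelet reports: no config file has been written yet.
    sys.exit(f"failed to load kubelet config file, path: {CONFIG_PATH}")
print(f"kubelet config present: {CONFIG_PATH}")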
May 8 00:47:09.288023 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.368103 systemd[1284]: Queued start job for default target default.target. May 8 00:47:09.368600 systemd[1284]: Reached target paths.target. May 8 00:47:09.368632 systemd[1284]: Reached target sockets.target. May 8 00:47:09.368654 systemd[1284]: Reached target timers.target. May 8 00:47:09.368670 systemd[1284]: Reached target basic.target. May 8 00:47:09.368723 systemd[1284]: Reached target default.target. May 8 00:47:09.368756 systemd[1284]: Startup finished in 74ms. May 8 00:47:09.369008 systemd[1]: Started user@500.service. May 8 00:47:09.370480 systemd[1]: Started session-1.scope. May 8 00:47:09.421672 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:33604.service. May 8 00:47:09.459176 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 33604 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:09.460570 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.464622 systemd-logind[1212]: New session 2 of user core. May 8 00:47:09.466211 systemd[1]: Started session-2.scope. May 8 00:47:09.521443 sshd[1293]: pam_unix(sshd:session): session closed for user core May 8 00:47:09.524627 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:33614.service. May 8 00:47:09.525072 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:33604.service: Deactivated successfully. May 8 00:47:09.525782 systemd-logind[1212]: Session 2 logged out. Waiting for processes to exit. May 8 00:47:09.525838 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:47:09.526605 systemd-logind[1212]: Removed session 2. May 8 00:47:09.559958 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 33614 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:09.561244 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.564697 systemd-logind[1212]: New session 3 of user core. May 8 00:47:09.565560 systemd[1]: Started session-3.scope. May 8 00:47:09.614812 sshd[1298]: pam_unix(sshd:session): session closed for user core May 8 00:47:09.617654 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:33614.service: Deactivated successfully. May 8 00:47:09.618137 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:47:09.618724 systemd-logind[1212]: Session 3 logged out. Waiting for processes to exit. May 8 00:47:09.619724 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:33624.service. May 8 00:47:09.620312 systemd-logind[1212]: Removed session 3. May 8 00:47:09.655954 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 33624 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:09.657445 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.661188 systemd-logind[1212]: New session 4 of user core. May 8 00:47:09.661985 systemd[1]: Started session-4.scope. May 8 00:47:09.715644 sshd[1305]: pam_unix(sshd:session): session closed for user core May 8 00:47:09.718154 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:33624.service: Deactivated successfully. May 8 00:47:09.718683 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:47:09.719141 systemd-logind[1212]: Session 4 logged out. Waiting for processes to exit. May 8 00:47:09.720165 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:33628.service. May 8 00:47:09.720876 systemd-logind[1212]: Removed session 4. 
May 8 00:47:09.755995 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:7kNtsjKndXT6BkWKn4/gWKPiKnt49pZMw1UDp15jA9U May 8 00:47:09.757272 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:47:09.760785 systemd-logind[1212]: New session 5 of user core. May 8 00:47:09.761636 systemd[1]: Started session-5.scope. May 8 00:47:09.818006 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:47:09.818308 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:47:09.830231 systemd[1]: Starting coreos-metadata.service... May 8 00:47:09.835670 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:47:09.835804 systemd[1]: Finished coreos-metadata.service. May 8 00:47:10.456060 systemd[1]: Stopped kubelet.service. May 8 00:47:10.456205 systemd[1]: kubelet.service: Consumed 1.765s CPU time. May 8 00:47:10.458924 systemd[1]: Starting kubelet.service... May 8 00:47:10.490191 systemd[1]: Reloading. May 8 00:47:10.614774 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2025-05-08T00:47:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:47:10.614806 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2025-05-08T00:47:10Z" level=info msg="torcx already run" May 8 00:47:10.901586 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:47:10.901603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:47:10.925519 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:47:11.010355 systemd[1]: Started kubelet.service. May 8 00:47:11.013493 systemd[1]: Stopping kubelet.service... May 8 00:47:11.013813 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:47:11.013984 systemd[1]: Stopped kubelet.service. May 8 00:47:11.015999 systemd[1]: Starting kubelet.service... May 8 00:47:11.102864 systemd[1]: Started kubelet.service. May 8 00:47:11.273272 kubelet[1422]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:47:11.273272 kubelet[1422]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 8 00:47:11.273272 kubelet[1422]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 8 00:47:11.273272 kubelet[1422]: I0508 00:47:11.273226 1422 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:47:12.525693 kubelet[1422]: I0508 00:47:12.525627 1422 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 8 00:47:12.525693 kubelet[1422]: I0508 00:47:12.525670 1422 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:47:12.526058 kubelet[1422]: I0508 00:47:12.525925 1422 server.go:954] "Client rotation is on, will bootstrap in background" May 8 00:47:12.552843 kubelet[1422]: I0508 00:47:12.552759 1422 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:47:12.565576 kubelet[1422]: E0508 00:47:12.565516 1422 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:47:12.565576 kubelet[1422]: I0508 00:47:12.565553 1422 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:47:12.569388 kubelet[1422]: I0508 00:47:12.569338 1422 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:47:12.570540 kubelet[1422]: I0508 00:47:12.570491 1422 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:47:12.570709 kubelet[1422]: I0508 00:47:12.570534 1422 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:47:12.570709 kubelet[1422]: I0508 00:47:12.570707 1422 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:47:12.570851 kubelet[1422]: I0508 00:47:12.570717 1422 container_manager_linux.go:304] "Creating device plugin manager" May 8 
00:47:12.570851 kubelet[1422]: I0508 00:47:12.570846 1422 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:12.573223 kubelet[1422]: I0508 00:47:12.573203 1422 kubelet.go:446] "Attempting to sync node with API server" May 8 00:47:12.573223 kubelet[1422]: I0508 00:47:12.573224 1422 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:47:12.573305 kubelet[1422]: I0508 00:47:12.573247 1422 kubelet.go:352] "Adding apiserver pod source" May 8 00:47:12.573305 kubelet[1422]: I0508 00:47:12.573257 1422 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:47:12.573436 kubelet[1422]: E0508 00:47:12.573406 1422 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:12.582622 kubelet[1422]: E0508 00:47:12.582532 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:12.585192 kubelet[1422]: I0508 00:47:12.585158 1422 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:47:12.585652 kubelet[1422]: I0508 00:47:12.585623 1422 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:47:12.586195 kubelet[1422]: W0508 00:47:12.586167 1422 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:47:12.588482 kubelet[1422]: W0508 00:47:12.588414 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.83" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 00:47:12.588622 kubelet[1422]: E0508 00:47:12.588498 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.83\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:47:12.588622 kubelet[1422]: W0508 00:47:12.588415 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 00:47:12.588622 kubelet[1422]: E0508 00:47:12.588529 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:47:12.590120 kubelet[1422]: I0508 00:47:12.590070 1422 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 8 00:47:12.590196 kubelet[1422]: I0508 00:47:12.590168 1422 server.go:1287] "Started kubelet" May 8 00:47:12.592931 kubelet[1422]: I0508 00:47:12.592868 1422 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:47:12.593880 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
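The NodeConfig dump above lists the hard eviction thresholds the kubelet will enforce (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). Below is a small Python sketch of how such signal/operator/value triples are evaluated; the threshold values are taken from the log, while the node statistics are invented purely for the example:

GI, MI = 1024**3, 1024**2

# signal -> (kind, value), copied from the HardEvictionThresholds in the log above
thresholds = {
    "memory.available":   ("quantity", 100 * MI),
    "nodefs.available":   ("percentage", 0.10),
    "nodefs.inodesFree":  ("percentage", 0.05),
    "imagefs.available":  ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

# signal -> (available, capacity); hypothetical numbers for illustration only
stats = {
    "memory.available":   (350 * MI, 4 * GI),
    "nodefs.available":   (600 * MI, 7 * GI),     # below 10% of capacity
    "nodefs.inodesFree":  (400_000, 500_000),
    "imagefs.available":  (2 * GI, 7 * GI),
    "imagefs.inodesFree": (450_000, 500_000),
}

for signal, (kind, value) in thresholds.items():
    available, capacity = stats[signal]
    limit = value if kind == "quantity" else value * capacity
    if available < limit:
        print(f"{signal}: available {available} < threshold {limit:.0f} -> eviction signal")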
May 8 00:47:12.594129 kubelet[1422]: I0508 00:47:12.594101 1422 server.go:490] "Adding debug handlers to kubelet server" May 8 00:47:12.594304 kubelet[1422]: I0508 00:47:12.594263 1422 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:47:12.594559 kubelet[1422]: I0508 00:47:12.594497 1422 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:47:12.594885 kubelet[1422]: I0508 00:47:12.594860 1422 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:47:12.595925 kubelet[1422]: I0508 00:47:12.595902 1422 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:47:12.597760 kubelet[1422]: E0508 00:47:12.597732 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:12.597821 kubelet[1422]: I0508 00:47:12.597769 1422 volume_manager.go:297] "Starting Kubelet Volume Manager" May 8 00:47:12.597957 kubelet[1422]: E0508 00:47:12.596621 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.83.183d66c65a0ee250 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.83,UID:10.0.0.83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.83,},FirstTimestamp:2025-05-08 00:47:12.59010312 +0000 UTC m=+1.480434431,LastTimestamp:2025-05-08 00:47:12.59010312 +0000 UTC m=+1.480434431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.83,}" May 8 00:47:12.598131 kubelet[1422]: I0508 00:47:12.598060 1422 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 8 00:47:12.598234 kubelet[1422]: I0508 00:47:12.598120 1422 reconciler.go:26] "Reconciler: start to sync state" May 8 00:47:12.598902 kubelet[1422]: E0508 00:47:12.598865 1422 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:47:12.603420 kubelet[1422]: I0508 00:47:12.602169 1422 factory.go:221] Registration of the containerd container factory successfully May 8 00:47:12.603420 kubelet[1422]: I0508 00:47:12.602197 1422 factory.go:221] Registration of the systemd container factory successfully May 8 00:47:12.603420 kubelet[1422]: I0508 00:47:12.602329 1422 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:47:12.605121 kubelet[1422]: E0508 00:47:12.605024 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.83.183d66c65a94486c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.83,UID:10.0.0.83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.83,},FirstTimestamp:2025-05-08 00:47:12.598845548 +0000 UTC m=+1.489176890,LastTimestamp:2025-05-08 00:47:12.598845548 +0000 UTC m=+1.489176890,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.83,}" May 8 00:47:12.605379 kubelet[1422]: W0508 00:47:12.605357 1422 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 8 00:47:12.605503 kubelet[1422]: E0508 00:47:12.605463 1422 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 8 00:47:12.605691 kubelet[1422]: E0508 00:47:12.605672 1422 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.83\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 8 00:47:12.617625 kubelet[1422]: I0508 00:47:12.617601 1422 cpu_manager.go:221] "Starting CPU manager" policy="none" May 8 00:47:12.617783 kubelet[1422]: I0508 00:47:12.617766 1422 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 8 00:47:12.617863 kubelet[1422]: I0508 00:47:12.617849 1422 state_mem.go:36] "Initialized new in-memory state store" May 8 00:47:12.618445 kubelet[1422]: E0508 00:47:12.618372 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.83.183d66c65ba85bbe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.83,UID:10.0.0.83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.83 status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.83,},FirstTimestamp:2025-05-08 00:47:12.61693843 +0000 UTC m=+1.507269741,LastTimestamp:2025-05-08 00:47:12.61693843 +0000 UTC m=+1.507269741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.83,}" May 8 00:47:12.623660 kubelet[1422]: E0508 00:47:12.623571 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.83.183d66c65ba871f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.83,UID:10.0.0.83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.83 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.83,},FirstTimestamp:2025-05-08 00:47:12.616944113 +0000 UTC m=+1.507275424,LastTimestamp:2025-05-08 00:47:12.616944113 +0000 UTC m=+1.507275424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.83,}" May 8 00:47:12.627665 kubelet[1422]: E0508 00:47:12.627591 1422 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.83.183d66c65ba88334 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.83,UID:10.0.0.83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.83 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.83,},FirstTimestamp:2025-05-08 00:47:12.616948532 +0000 UTC m=+1.507279843,LastTimestamp:2025-05-08 00:47:12.616948532 +0000 UTC m=+1.507279843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.83,}" May 8 00:47:12.634257 kubelet[1422]: I0508 00:47:12.634237 1422 policy_none.go:49] "None policy: Start" May 8 00:47:12.634363 kubelet[1422]: I0508 00:47:12.634346 1422 memory_manager.go:186] "Starting memorymanager" policy="None" May 8 00:47:12.634501 kubelet[1422]: I0508 00:47:12.634485 1422 state_mem.go:35] "Initializing new in-memory state store" May 8 00:47:12.647437 systemd[1]: Created slice kubepods.slice. May 8 00:47:12.652953 systemd[1]: Created slice kubepods-burstable.slice. May 8 00:47:12.656428 systemd[1]: Created slice kubepods-besteffort.slice. May 8 00:47:12.663436 kubelet[1422]: I0508 00:47:12.663387 1422 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:47:12.663626 kubelet[1422]: I0508 00:47:12.663601 1422 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:47:12.663669 kubelet[1422]: I0508 00:47:12.663620 1422 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:47:12.664795 kubelet[1422]: I0508 00:47:12.664437 1422 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:47:12.665724 kubelet[1422]: E0508 00:47:12.665692 1422 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" May 8 00:47:12.665786 kubelet[1422]: E0508 00:47:12.665740 1422 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.83\" not found" May 8 00:47:12.703156 kubelet[1422]: I0508 00:47:12.703065 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:47:12.704040 kubelet[1422]: I0508 00:47:12.704016 1422 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:47:12.704092 kubelet[1422]: I0508 00:47:12.704048 1422 status_manager.go:227] "Starting to sync pod status with apiserver" May 8 00:47:12.704092 kubelet[1422]: I0508 00:47:12.704077 1422 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 8 00:47:12.704092 kubelet[1422]: I0508 00:47:12.704086 1422 kubelet.go:2388] "Starting kubelet main sync loop" May 8 00:47:12.704368 kubelet[1422]: E0508 00:47:12.704337 1422 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 8 00:47:12.765101 kubelet[1422]: I0508 00:47:12.765058 1422 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.83" May 8 00:47:12.770587 kubelet[1422]: I0508 00:47:12.770507 1422 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.83" May 8 00:47:12.770587 kubelet[1422]: E0508 00:47:12.770544 1422 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.83\": node \"10.0.0.83\" not found" May 8 00:47:12.775418 kubelet[1422]: E0508 00:47:12.775384 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:12.876612 kubelet[1422]: E0508 00:47:12.876409 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:12.977041 kubelet[1422]: E0508 00:47:12.976957 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.078129 kubelet[1422]: E0508 00:47:13.078061 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.178317 kubelet[1422]: E0508 00:47:13.178227 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.279206 kubelet[1422]: E0508 00:47:13.279123 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.380099 kubelet[1422]: E0508 00:47:13.380015 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.481013 kubelet[1422]: E0508 00:47:13.480852 1422 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.527919 kubelet[1422]: I0508 00:47:13.527819 1422 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 00:47:13.528338 kubelet[1422]: W0508 00:47:13.528061 1422 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:47:13.581581 kubelet[1422]: E0508 00:47:13.581509 1422 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.83\" not found" May 8 00:47:13.583744 kubelet[1422]: E0508 00:47:13.583707 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:13.634891 sudo[1315]: pam_unix(sudo:session): session closed for user root May 8 00:47:13.636577 sshd[1311]: pam_unix(sshd:session): session closed for user core May 8 00:47:13.639541 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:33628.service: Deactivated successfully. May 8 00:47:13.640544 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:47:13.641239 systemd-logind[1212]: Session 5 logged out. Waiting for processes to exit. May 8 00:47:13.642343 systemd-logind[1212]: Removed session 5. May 8 00:47:13.683219 kubelet[1422]: I0508 00:47:13.683183 1422 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 00:47:13.683527 env[1223]: time="2025-05-08T00:47:13.683455403Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:47:13.683794 kubelet[1422]: I0508 00:47:13.683777 1422 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 00:47:14.584735 kubelet[1422]: I0508 00:47:14.584665 1422 apiserver.go:52] "Watching apiserver" May 8 00:47:14.584735 kubelet[1422]: E0508 00:47:14.584726 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:14.593975 systemd[1]: Created slice kubepods-besteffort-podc2de5f8b_c7eb_40e8_a6e4_0aea6319ac61.slice. May 8 00:47:14.599153 kubelet[1422]: I0508 00:47:14.599106 1422 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611248 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-bpf-maps\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611318 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hostproc\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611340 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cni-path\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611355 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-etc-cni-netd\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611386 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-kernel\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611411 kubelet[1422]: I0508 00:47:14.611405 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61-lib-modules\") pod \"kube-proxy-skqcl\" (UID: \"c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61\") " pod="kube-system/kube-proxy-skqcl" May 8 00:47:14.611714 kubelet[1422]: I0508 00:47:14.611421 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp586\" (UniqueName: \"kubernetes.io/projected/c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61-kube-api-access-qp586\") pod \"kube-proxy-skqcl\" (UID: \"c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61\") " pod="kube-system/kube-proxy-skqcl" May 8 00:47:14.611714 kubelet[1422]: I0508 00:47:14.611439 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8vdc\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-kube-api-access-w8vdc\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611714 kubelet[1422]: I0508 00:47:14.611456 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61-kube-proxy\") pod \"kube-proxy-skqcl\" (UID: \"c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61\") " pod="kube-system/kube-proxy-skqcl" May 8 00:47:14.611714 kubelet[1422]: I0508 00:47:14.611529 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-cgroup\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611714 kubelet[1422]: I0508 00:47:14.611550 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-lib-modules\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611566 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-xtables-lock\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611582 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fab8730-02dd-4dff-a1dd-63d4a80165ca-clustermesh-secrets\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611596 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hubble-tls\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " 
pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611611 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-run\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611627 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-config-path\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.611889 kubelet[1422]: I0508 00:47:14.611643 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-net\") pod \"cilium-hhb6l\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " pod="kube-system/cilium-hhb6l" May 8 00:47:14.612037 kubelet[1422]: I0508 00:47:14.611658 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61-xtables-lock\") pod \"kube-proxy-skqcl\" (UID: \"c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61\") " pod="kube-system/kube-proxy-skqcl" May 8 00:47:14.616264 systemd[1]: Created slice kubepods-burstable-pod0fab8730_02dd_4dff_a1dd_63d4a80165ca.slice. May 8 00:47:14.712634 kubelet[1422]: I0508 00:47:14.712575 1422 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 8 00:47:14.914803 kubelet[1422]: E0508 00:47:14.914762 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:14.915464 env[1223]: time="2025-05-08T00:47:14.915415284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skqcl,Uid:c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61,Namespace:kube-system,Attempt:0,}" May 8 00:47:14.923583 kubelet[1422]: E0508 00:47:14.923556 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:14.931590 env[1223]: time="2025-05-08T00:47:14.931540133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhb6l,Uid:0fab8730-02dd-4dff-a1dd-63d4a80165ca,Namespace:kube-system,Attempt:0,}" May 8 00:47:15.585877 kubelet[1422]: E0508 00:47:15.585804 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:16.562025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094670513.mount: Deactivated successfully. 
May 8 00:47:16.586172 kubelet[1422]: E0508 00:47:16.586095 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:16.674272 env[1223]: time="2025-05-08T00:47:16.674177832Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.677836 env[1223]: time="2025-05-08T00:47:16.677760539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.682730 env[1223]: time="2025-05-08T00:47:16.682639334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.684281 env[1223]: time="2025-05-08T00:47:16.684225439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.686058 env[1223]: time="2025-05-08T00:47:16.686006846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.690220 env[1223]: time="2025-05-08T00:47:16.690139771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.691799 env[1223]: time="2025-05-08T00:47:16.691727589Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.693725 env[1223]: time="2025-05-08T00:47:16.693677941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:16.726626 env[1223]: time="2025-05-08T00:47:16.726540895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:16.726626 env[1223]: time="2025-05-08T00:47:16.726593379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:16.726626 env[1223]: time="2025-05-08T00:47:16.726606868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:16.726890 env[1223]: time="2025-05-08T00:47:16.726839190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8839cbf429117abcc216d50f855657d510d5c9063087c58dba919a09738c2a44 pid=1483 runtime=io.containerd.runc.v2 May 8 00:47:16.728260 env[1223]: time="2025-05-08T00:47:16.728193394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:16.728354 env[1223]: time="2025-05-08T00:47:16.728274790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:16.728354 env[1223]: time="2025-05-08T00:47:16.728306799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:16.730802 env[1223]: time="2025-05-08T00:47:16.730697841Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad pid=1491 runtime=io.containerd.runc.v2 May 8 00:47:16.741439 systemd[1]: Started cri-containerd-8839cbf429117abcc216d50f855657d510d5c9063087c58dba919a09738c2a44.scope. May 8 00:47:16.747676 systemd[1]: Started cri-containerd-0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad.scope. May 8 00:47:16.771816 env[1223]: time="2025-05-08T00:47:16.771759771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-skqcl,Uid:c2de5f8b-c7eb-40e8-a6e4-0aea6319ac61,Namespace:kube-system,Attempt:0,} returns sandbox id \"8839cbf429117abcc216d50f855657d510d5c9063087c58dba919a09738c2a44\"" May 8 00:47:16.773018 kubelet[1422]: E0508 00:47:16.772989 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:16.774685 env[1223]: time="2025-05-08T00:47:16.774643448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 8 00:47:16.776951 env[1223]: time="2025-05-08T00:47:16.776919974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhb6l,Uid:0fab8730-02dd-4dff-a1dd-63d4a80165ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\"" May 8 00:47:16.777701 kubelet[1422]: E0508 00:47:16.777521 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:17.586640 kubelet[1422]: E0508 00:47:17.586428 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:18.587691 kubelet[1422]: E0508 00:47:18.587611 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:19.588741 kubelet[1422]: E0508 00:47:19.588609 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:20.191867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412751499.mount: Deactivated successfully. 
May 8 00:47:20.589504 kubelet[1422]: E0508 00:47:20.588952 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:21.487628 env[1223]: time="2025-05-08T00:47:21.487518666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:21.489583 env[1223]: time="2025-05-08T00:47:21.489507264Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:21.490959 env[1223]: time="2025-05-08T00:47:21.490908280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:21.492217 env[1223]: time="2025-05-08T00:47:21.492158141Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:21.492588 env[1223]: time="2025-05-08T00:47:21.492532721Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 8 00:47:21.493962 env[1223]: time="2025-05-08T00:47:21.493937865Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:47:21.495451 env[1223]: time="2025-05-08T00:47:21.495422024Z" level=info msg="CreateContainer within sandbox \"8839cbf429117abcc216d50f855657d510d5c9063087c58dba919a09738c2a44\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:47:21.511131 env[1223]: time="2025-05-08T00:47:21.511079590Z" level=info msg="CreateContainer within sandbox \"8839cbf429117abcc216d50f855657d510d5c9063087c58dba919a09738c2a44\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09ea34b617e8357dd59b3b32f3914216a47347f4ee7fca977c55af9caf218891\"" May 8 00:47:21.511729 env[1223]: time="2025-05-08T00:47:21.511703378Z" level=info msg="StartContainer for \"09ea34b617e8357dd59b3b32f3914216a47347f4ee7fca977c55af9caf218891\"" May 8 00:47:21.550950 systemd[1]: Started cri-containerd-09ea34b617e8357dd59b3b32f3914216a47347f4ee7fca977c55af9caf218891.scope. 
May 8 00:47:21.589776 kubelet[1422]: E0508 00:47:21.589689 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:21.643437 env[1223]: time="2025-05-08T00:47:21.643375099Z" level=info msg="StartContainer for \"09ea34b617e8357dd59b3b32f3914216a47347f4ee7fca977c55af9caf218891\" returns successfully" May 8 00:47:21.727934 kubelet[1422]: E0508 00:47:21.727877 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:21.786913 kubelet[1422]: I0508 00:47:21.786704 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-skqcl" podStartSLOduration=5.066977708 podStartE2EDuration="9.786683348s" podCreationTimestamp="2025-05-08 00:47:12 +0000 UTC" firstStartedPulling="2025-05-08 00:47:16.774041117 +0000 UTC m=+5.664372428" lastFinishedPulling="2025-05-08 00:47:21.493746757 +0000 UTC m=+10.384078068" observedRunningTime="2025-05-08 00:47:21.78284954 +0000 UTC m=+10.673180851" watchObservedRunningTime="2025-05-08 00:47:21.786683348 +0000 UTC m=+10.677014680" May 8 00:47:22.590122 kubelet[1422]: E0508 00:47:22.590042 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:22.729180 kubelet[1422]: E0508 00:47:22.729143 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:23.590792 kubelet[1422]: E0508 00:47:23.590700 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:24.602170 kubelet[1422]: E0508 00:47:24.601942 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:25.603141 kubelet[1422]: E0508 00:47:25.603056 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:26.604111 kubelet[1422]: E0508 00:47:26.604045 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:27.604506 kubelet[1422]: E0508 00:47:27.604401 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:28.605488 kubelet[1422]: E0508 00:47:28.605386 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:29.606249 kubelet[1422]: E0508 00:47:29.606175 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:30.607232 kubelet[1422]: E0508 00:47:30.607161 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:31.607712 kubelet[1422]: E0508 00:47:31.607653 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:32.174869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245589879.mount: Deactivated successfully. 
May 8 00:47:32.574228 kubelet[1422]: E0508 00:47:32.574073 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:32.608314 kubelet[1422]: E0508 00:47:32.608282 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:33.608411 kubelet[1422]: E0508 00:47:33.608373 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:34.609183 kubelet[1422]: E0508 00:47:34.609045 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:35.609924 kubelet[1422]: E0508 00:47:35.609817 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:36.610865 kubelet[1422]: E0508 00:47:36.610809 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:37.610971 kubelet[1422]: E0508 00:47:37.610915 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:38.379811 env[1223]: time="2025-05-08T00:47:38.379731618Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:38.381936 env[1223]: time="2025-05-08T00:47:38.381887069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:38.383744 env[1223]: time="2025-05-08T00:47:38.383691127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:38.384266 env[1223]: time="2025-05-08T00:47:38.384229043Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 8 00:47:38.386785 env[1223]: time="2025-05-08T00:47:38.386745776Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:47:38.404610 env[1223]: time="2025-05-08T00:47:38.404552517Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\"" May 8 00:47:38.405285 env[1223]: time="2025-05-08T00:47:38.405226709Z" level=info msg="StartContainer for \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\"" May 8 00:47:38.424922 systemd[1]: Started cri-containerd-6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a.scope. 
May 8 00:47:38.455957 env[1223]: time="2025-05-08T00:47:38.455894322Z" level=info msg="StartContainer for \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\" returns successfully" May 8 00:47:38.466164 systemd[1]: cri-containerd-6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a.scope: Deactivated successfully. May 8 00:47:38.611684 kubelet[1422]: E0508 00:47:38.611632 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:38.762760 kubelet[1422]: E0508 00:47:38.762725 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:39.272755 env[1223]: time="2025-05-08T00:47:39.272641452Z" level=info msg="shim disconnected" id=6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a May 8 00:47:39.272755 env[1223]: time="2025-05-08T00:47:39.272720706Z" level=warning msg="cleaning up after shim disconnected" id=6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a namespace=k8s.io May 8 00:47:39.272755 env[1223]: time="2025-05-08T00:47:39.272736116Z" level=info msg="cleaning up dead shim" May 8 00:47:39.284376 env[1223]: time="2025-05-08T00:47:39.284301092Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1772 runtime=io.containerd.runc.v2\n" May 8 00:47:39.396791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a-rootfs.mount: Deactivated successfully. May 8 00:47:39.612237 kubelet[1422]: E0508 00:47:39.611996 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:39.768557 kubelet[1422]: E0508 00:47:39.768509 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:39.770437 env[1223]: time="2025-05-08T00:47:39.770391248Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:47:40.278928 env[1223]: time="2025-05-08T00:47:40.278841251Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\"" May 8 00:47:40.279611 env[1223]: time="2025-05-08T00:47:40.279526948Z" level=info msg="StartContainer for \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\"" May 8 00:47:40.297439 systemd[1]: Started cri-containerd-3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372.scope. May 8 00:47:40.382721 env[1223]: time="2025-05-08T00:47:40.382648492Z" level=info msg="StartContainer for \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\" returns successfully" May 8 00:47:40.384512 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:47:40.384816 systemd[1]: Stopped systemd-sysctl.service. May 8 00:47:40.385065 systemd[1]: Stopping systemd-sysctl.service... May 8 00:47:40.387359 systemd[1]: Starting systemd-sysctl.service... 
May 8 00:47:40.387859 systemd[1]: cri-containerd-3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372.scope: Deactivated successfully. May 8 00:47:40.397231 systemd[1]: Finished systemd-sysctl.service. May 8 00:47:40.406800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372-rootfs.mount: Deactivated successfully. May 8 00:47:40.415608 env[1223]: time="2025-05-08T00:47:40.415523498Z" level=info msg="shim disconnected" id=3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372 May 8 00:47:40.415608 env[1223]: time="2025-05-08T00:47:40.415587331Z" level=warning msg="cleaning up after shim disconnected" id=3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372 namespace=k8s.io May 8 00:47:40.415608 env[1223]: time="2025-05-08T00:47:40.415604143Z" level=info msg="cleaning up dead shim" May 8 00:47:40.422083 env[1223]: time="2025-05-08T00:47:40.422008152Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1834 runtime=io.containerd.runc.v2\n" May 8 00:47:40.613337 kubelet[1422]: E0508 00:47:40.613171 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:40.772636 kubelet[1422]: E0508 00:47:40.772592 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:40.774752 env[1223]: time="2025-05-08T00:47:40.774679385Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:47:40.794176 env[1223]: time="2025-05-08T00:47:40.794084353Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\"" May 8 00:47:40.794862 env[1223]: time="2025-05-08T00:47:40.794803966Z" level=info msg="StartContainer for \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\"" May 8 00:47:40.813166 systemd[1]: Started cri-containerd-dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851.scope. May 8 00:47:40.842934 env[1223]: time="2025-05-08T00:47:40.842727206Z" level=info msg="StartContainer for \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\" returns successfully" May 8 00:47:40.843831 systemd[1]: cri-containerd-dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851.scope: Deactivated successfully. 
May 8 00:47:40.899006 env[1223]: time="2025-05-08T00:47:40.898928582Z" level=info msg="shim disconnected" id=dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851 May 8 00:47:40.899220 env[1223]: time="2025-05-08T00:47:40.899018315Z" level=warning msg="cleaning up after shim disconnected" id=dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851 namespace=k8s.io May 8 00:47:40.899220 env[1223]: time="2025-05-08T00:47:40.899033996Z" level=info msg="cleaning up dead shim" May 8 00:47:40.906015 env[1223]: time="2025-05-08T00:47:40.905928694Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1889 runtime=io.containerd.runc.v2\n" May 8 00:47:41.396413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851-rootfs.mount: Deactivated successfully. May 8 00:47:41.614370 kubelet[1422]: E0508 00:47:41.614293 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:41.777144 kubelet[1422]: E0508 00:47:41.777001 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:41.779103 env[1223]: time="2025-05-08T00:47:41.779053499Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:47:41.809962 env[1223]: time="2025-05-08T00:47:41.809860945Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\"" May 8 00:47:41.810565 env[1223]: time="2025-05-08T00:47:41.810527514Z" level=info msg="StartContainer for \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\"" May 8 00:47:41.828182 systemd[1]: Started cri-containerd-f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3.scope. May 8 00:47:41.854757 systemd[1]: cri-containerd-f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3.scope: Deactivated successfully. May 8 00:47:41.859296 env[1223]: time="2025-05-08T00:47:41.859256740Z" level=info msg="StartContainer for \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\" returns successfully" May 8 00:47:41.886330 env[1223]: time="2025-05-08T00:47:41.886261400Z" level=info msg="shim disconnected" id=f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3 May 8 00:47:41.886330 env[1223]: time="2025-05-08T00:47:41.886307699Z" level=warning msg="cleaning up after shim disconnected" id=f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3 namespace=k8s.io May 8 00:47:41.886330 env[1223]: time="2025-05-08T00:47:41.886318019Z" level=info msg="cleaning up dead shim" May 8 00:47:41.892780 env[1223]: time="2025-05-08T00:47:41.892713987Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:47:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1947 runtime=io.containerd.runc.v2\n" May 8 00:47:42.396582 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3-rootfs.mount: Deactivated successfully. 
May 8 00:47:42.614706 kubelet[1422]: E0508 00:47:42.614645 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:42.780693 kubelet[1422]: E0508 00:47:42.780555 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:42.782236 env[1223]: time="2025-05-08T00:47:42.782178762Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:47:42.919943 env[1223]: time="2025-05-08T00:47:42.919880979Z" level=info msg="CreateContainer within sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\"" May 8 00:47:42.920592 env[1223]: time="2025-05-08T00:47:42.920557934Z" level=info msg="StartContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\"" May 8 00:47:42.940304 systemd[1]: Started cri-containerd-58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2.scope. May 8 00:47:42.968153 env[1223]: time="2025-05-08T00:47:42.968092375Z" level=info msg="StartContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" returns successfully" May 8 00:47:43.078402 kubelet[1422]: I0508 00:47:43.077771 1422 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 8 00:47:43.370502 kernel: Initializing XFRM netlink socket May 8 00:47:43.397886 systemd[1]: run-containerd-runc-k8s.io-58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2-runc.EMDGVe.mount: Deactivated successfully. May 8 00:47:43.615569 kubelet[1422]: E0508 00:47:43.615467 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:43.784775 kubelet[1422]: E0508 00:47:43.784650 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:44.616306 kubelet[1422]: E0508 00:47:44.616205 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:44.787160 kubelet[1422]: E0508 00:47:44.787111 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:45.031784 update_engine[1214]: I0508 00:47:45.031693 1214 update_attempter.cc:509] Updating boot flags... 
May 8 00:47:45.045049 systemd-networkd[1050]: cilium_host: Link UP May 8 00:47:45.045440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 8 00:47:45.045508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 8 00:47:45.045259 systemd-networkd[1050]: cilium_net: Link UP May 8 00:47:45.045457 systemd-networkd[1050]: cilium_net: Gained carrier May 8 00:47:45.045704 systemd-networkd[1050]: cilium_host: Gained carrier May 8 00:47:45.151173 systemd-networkd[1050]: cilium_vxlan: Link UP May 8 00:47:45.151185 systemd-networkd[1050]: cilium_vxlan: Gained carrier May 8 00:47:45.244698 systemd-networkd[1050]: cilium_host: Gained IPv6LL May 8 00:47:45.328708 kubelet[1422]: I0508 00:47:45.328504 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hhb6l" podStartSLOduration=11.72107381 podStartE2EDuration="33.32844881s" podCreationTimestamp="2025-05-08 00:47:12 +0000 UTC" firstStartedPulling="2025-05-08 00:47:16.778022134 +0000 UTC m=+5.668353445" lastFinishedPulling="2025-05-08 00:47:38.385397134 +0000 UTC m=+27.275728445" observedRunningTime="2025-05-08 00:47:43.852560511 +0000 UTC m=+32.742891843" watchObservedRunningTime="2025-05-08 00:47:45.32844881 +0000 UTC m=+34.218780111" May 8 00:47:45.338611 systemd[1]: Created slice kubepods-besteffort-podb83ff2ae_974b_4a09_8962_ba4bd7582c10.slice. May 8 00:47:45.375545 kernel: NET: Registered PF_ALG protocol family May 8 00:47:45.462618 kubelet[1422]: I0508 00:47:45.462537 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgdm5\" (UniqueName: \"kubernetes.io/projected/b83ff2ae-974b-4a09-8962-ba4bd7582c10-kube-api-access-vgdm5\") pod \"nginx-deployment-7fcdb87857-g8tl7\" (UID: \"b83ff2ae-974b-4a09-8962-ba4bd7582c10\") " pod="default/nginx-deployment-7fcdb87857-g8tl7" May 8 00:47:45.617103 kubelet[1422]: E0508 00:47:45.616942 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:45.641896 env[1223]: time="2025-05-08T00:47:45.641842560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-g8tl7,Uid:b83ff2ae-974b-4a09-8962-ba4bd7582c10,Namespace:default,Attempt:0,}" May 8 00:47:45.789362 kubelet[1422]: E0508 00:47:45.789308 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:45.925717 systemd-networkd[1050]: cilium_net: Gained IPv6LL May 8 00:47:45.978246 systemd-networkd[1050]: lxc_health: Link UP May 8 00:47:45.989771 systemd-networkd[1050]: lxc_health: Gained carrier May 8 00:47:45.990500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:47:46.185168 systemd-networkd[1050]: lxc92999efbd4ea: Link UP May 8 00:47:46.201524 kernel: eth0: renamed from tmp34554 May 8 00:47:46.211966 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:47:46.212119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc92999efbd4ea: link becomes ready May 8 00:47:46.212022 systemd-networkd[1050]: lxc92999efbd4ea: Gained carrier May 8 00:47:46.503581 systemd-networkd[1050]: cilium_vxlan: Gained IPv6LL May 8 00:47:46.617682 kubelet[1422]: E0508 00:47:46.617619 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:46.925046 kubelet[1422]: E0508 00:47:46.925004 1422 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:47.396866 systemd-networkd[1050]: lxc_health: Gained IPv6LL May 8 00:47:47.618176 kubelet[1422]: E0508 00:47:47.618093 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:47.793601 kubelet[1422]: E0508 00:47:47.793447 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:48.164770 systemd-networkd[1050]: lxc92999efbd4ea: Gained IPv6LL May 8 00:47:48.619449 kubelet[1422]: E0508 00:47:48.619281 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:48.795788 kubelet[1422]: E0508 00:47:48.795729 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:47:49.619902 kubelet[1422]: E0508 00:47:49.619831 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:50.260049 env[1223]: time="2025-05-08T00:47:50.259958371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:47:50.260049 env[1223]: time="2025-05-08T00:47:50.259998057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:47:50.260049 env[1223]: time="2025-05-08T00:47:50.260011152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:47:50.260547 env[1223]: time="2025-05-08T00:47:50.260487842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34554444090631f95f729dc91fa4d4c26a00e4d131df91461de019dd76b154c3 pid=2501 runtime=io.containerd.runc.v2 May 8 00:47:50.290000 systemd[1]: Started cri-containerd-34554444090631f95f729dc91fa4d4c26a00e4d131df91461de019dd76b154c3.scope. 
May 8 00:47:50.319983 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:47:50.344671 env[1223]: time="2025-05-08T00:47:50.344619570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-g8tl7,Uid:b83ff2ae-974b-4a09-8962-ba4bd7582c10,Namespace:default,Attempt:0,} returns sandbox id \"34554444090631f95f729dc91fa4d4c26a00e4d131df91461de019dd76b154c3\"" May 8 00:47:50.346114 env[1223]: time="2025-05-08T00:47:50.346076338Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:47:50.620435 kubelet[1422]: E0508 00:47:50.620218 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:51.621111 kubelet[1422]: E0508 00:47:51.620836 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:52.574257 kubelet[1422]: E0508 00:47:52.574179 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:52.621214 kubelet[1422]: E0508 00:47:52.621130 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:53.621640 kubelet[1422]: E0508 00:47:53.621563 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:53.622784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829636772.mount: Deactivated successfully. May 8 00:47:54.621938 kubelet[1422]: E0508 00:47:54.621873 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:55.622513 kubelet[1422]: E0508 00:47:55.622391 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:56.622866 kubelet[1422]: E0508 00:47:56.622777 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:57.623960 kubelet[1422]: E0508 00:47:57.623859 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:58.319056 env[1223]: time="2025-05-08T00:47:58.318987976Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:58.322499 env[1223]: time="2025-05-08T00:47:58.322451008Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:58.324769 env[1223]: time="2025-05-08T00:47:58.324685962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:58.327048 env[1223]: time="2025-05-08T00:47:58.327001078Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:47:58.327879 env[1223]: time="2025-05-08T00:47:58.327842062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 00:47:58.330625 env[1223]: time="2025-05-08T00:47:58.330570741Z" level=info msg="CreateContainer within sandbox \"34554444090631f95f729dc91fa4d4c26a00e4d131df91461de019dd76b154c3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 00:47:58.346684 env[1223]: time="2025-05-08T00:47:58.346611412Z" level=info msg="CreateContainer within sandbox \"34554444090631f95f729dc91fa4d4c26a00e4d131df91461de019dd76b154c3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4b7e54113a05ac7ed1a46910d36b31b63fa956b249520e18734e30b794f98739\"" May 8 00:47:58.347315 env[1223]: time="2025-05-08T00:47:58.347263737Z" level=info msg="StartContainer for \"4b7e54113a05ac7ed1a46910d36b31b63fa956b249520e18734e30b794f98739\"" May 8 00:47:58.399743 systemd[1]: Started cri-containerd-4b7e54113a05ac7ed1a46910d36b31b63fa956b249520e18734e30b794f98739.scope. May 8 00:47:58.435978 env[1223]: time="2025-05-08T00:47:58.435918682Z" level=info msg="StartContainer for \"4b7e54113a05ac7ed1a46910d36b31b63fa956b249520e18734e30b794f98739\" returns successfully" May 8 00:47:58.624743 kubelet[1422]: E0508 00:47:58.624560 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:47:58.825101 kubelet[1422]: I0508 00:47:58.825020 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-g8tl7" podStartSLOduration=5.84162028 podStartE2EDuration="13.825000098s" podCreationTimestamp="2025-05-08 00:47:45 +0000 UTC" firstStartedPulling="2025-05-08 00:47:50.345531599 +0000 UTC m=+39.235862910" lastFinishedPulling="2025-05-08 00:47:58.328911417 +0000 UTC m=+47.219242728" observedRunningTime="2025-05-08 00:47:58.824959992 +0000 UTC m=+47.715291303" watchObservedRunningTime="2025-05-08 00:47:58.825000098 +0000 UTC m=+47.715331399" May 8 00:47:59.625410 kubelet[1422]: E0508 00:47:59.625325 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:00.625804 kubelet[1422]: E0508 00:48:00.625685 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:01.626868 kubelet[1422]: E0508 00:48:01.626792 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:02.627640 kubelet[1422]: E0508 00:48:02.627561 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:03.628735 kubelet[1422]: E0508 00:48:03.628644 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:03.785021 systemd[1]: Created slice kubepods-besteffort-podaa8db464_09fd_4316_8bd6_944ef00b2b4c.slice. 
May 8 00:48:03.918783 kubelet[1422]: I0508 00:48:03.918728 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/aa8db464-09fd-4316-8bd6-944ef00b2b4c-data\") pod \"nfs-server-provisioner-0\" (UID: \"aa8db464-09fd-4316-8bd6-944ef00b2b4c\") " pod="default/nfs-server-provisioner-0" May 8 00:48:03.918783 kubelet[1422]: I0508 00:48:03.918770 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbb8v\" (UniqueName: \"kubernetes.io/projected/aa8db464-09fd-4316-8bd6-944ef00b2b4c-kube-api-access-rbb8v\") pod \"nfs-server-provisioner-0\" (UID: \"aa8db464-09fd-4316-8bd6-944ef00b2b4c\") " pod="default/nfs-server-provisioner-0" May 8 00:48:04.088855 env[1223]: time="2025-05-08T00:48:04.088770575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aa8db464-09fd-4316-8bd6-944ef00b2b4c,Namespace:default,Attempt:0,}" May 8 00:48:04.629410 kubelet[1422]: E0508 00:48:04.629334 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:05.341022 systemd-networkd[1050]: lxc085f5a91dcde: Link UP May 8 00:48:05.348489 kernel: eth0: renamed from tmp75c4d May 8 00:48:05.373791 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:48:05.373936 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc085f5a91dcde: link becomes ready May 8 00:48:05.374063 systemd-networkd[1050]: lxc085f5a91dcde: Gained carrier May 8 00:48:05.602596 env[1223]: time="2025-05-08T00:48:05.602417597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:05.602596 env[1223]: time="2025-05-08T00:48:05.602466592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:05.602596 env[1223]: time="2025-05-08T00:48:05.602495388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:05.603424 env[1223]: time="2025-05-08T00:48:05.603291797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75c4d0d1df6376c0c356645107d046049680174783ee56167c132f3d8f4f1865 pid=2632 runtime=io.containerd.runc.v2 May 8 00:48:05.617176 systemd[1]: Started cri-containerd-75c4d0d1df6376c0c356645107d046049680174783ee56167c132f3d8f4f1865.scope. 
May 8 00:48:05.630324 kubelet[1422]: E0508 00:48:05.630263 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:05.637182 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:05.661225 env[1223]: time="2025-05-08T00:48:05.661163288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aa8db464-09fd-4316-8bd6-944ef00b2b4c,Namespace:default,Attempt:0,} returns sandbox id \"75c4d0d1df6376c0c356645107d046049680174783ee56167c132f3d8f4f1865\"" May 8 00:48:05.662707 env[1223]: time="2025-05-08T00:48:05.662674599Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 00:48:06.630990 kubelet[1422]: E0508 00:48:06.630917 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:07.236868 systemd-networkd[1050]: lxc085f5a91dcde: Gained IPv6LL May 8 00:48:07.631496 kubelet[1422]: E0508 00:48:07.631311 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:08.631555 kubelet[1422]: E0508 00:48:08.631462 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:09.631840 kubelet[1422]: E0508 00:48:09.631720 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:10.632755 kubelet[1422]: E0508 00:48:10.632666 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:11.633663 kubelet[1422]: E0508 00:48:11.633595 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:12.116941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264413461.mount: Deactivated successfully. 
May 8 00:48:12.574160 kubelet[1422]: E0508 00:48:12.574071 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:12.634260 kubelet[1422]: E0508 00:48:12.634177 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:13.635230 kubelet[1422]: E0508 00:48:13.635148 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:14.635558 kubelet[1422]: E0508 00:48:14.635452 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:15.636405 kubelet[1422]: E0508 00:48:15.636314 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:16.190230 env[1223]: time="2025-05-08T00:48:16.190085038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:16.287201 env[1223]: time="2025-05-08T00:48:16.287104034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:16.345816 env[1223]: time="2025-05-08T00:48:16.345743888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:16.376360 env[1223]: time="2025-05-08T00:48:16.376210190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:16.377540 env[1223]: time="2025-05-08T00:48:16.377156645Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 8 00:48:16.383579 env[1223]: time="2025-05-08T00:48:16.383501103Z" level=info msg="CreateContainer within sandbox \"75c4d0d1df6376c0c356645107d046049680174783ee56167c132f3d8f4f1865\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 00:48:16.411240 env[1223]: time="2025-05-08T00:48:16.411146417Z" level=info msg="CreateContainer within sandbox \"75c4d0d1df6376c0c356645107d046049680174783ee56167c132f3d8f4f1865\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"863a8382aaba75cbbe12fdbb4a03e6afac7c37877bdbdb3b4f03fa1284be4427\"" May 8 00:48:16.411919 env[1223]: time="2025-05-08T00:48:16.411881805Z" level=info msg="StartContainer for \"863a8382aaba75cbbe12fdbb4a03e6afac7c37877bdbdb3b4f03fa1284be4427\"" May 8 00:48:16.436319 systemd[1]: Started cri-containerd-863a8382aaba75cbbe12fdbb4a03e6afac7c37877bdbdb3b4f03fa1284be4427.scope. 
May 8 00:48:16.469589 env[1223]: time="2025-05-08T00:48:16.469446577Z" level=info msg="StartContainer for \"863a8382aaba75cbbe12fdbb4a03e6afac7c37877bdbdb3b4f03fa1284be4427\" returns successfully" May 8 00:48:16.636814 kubelet[1422]: E0508 00:48:16.636724 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:16.869427 kubelet[1422]: I0508 00:48:16.869231 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=3.150703867 podStartE2EDuration="13.869210244s" podCreationTimestamp="2025-05-08 00:48:03 +0000 UTC" firstStartedPulling="2025-05-08 00:48:05.662435573 +0000 UTC m=+54.552766884" lastFinishedPulling="2025-05-08 00:48:16.38094195 +0000 UTC m=+65.271273261" observedRunningTime="2025-05-08 00:48:16.868698387 +0000 UTC m=+65.759029718" watchObservedRunningTime="2025-05-08 00:48:16.869210244 +0000 UTC m=+65.759541555" May 8 00:48:17.637658 kubelet[1422]: E0508 00:48:17.637576 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:18.638846 kubelet[1422]: E0508 00:48:18.638758 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:19.639645 kubelet[1422]: E0508 00:48:19.639576 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:20.640305 kubelet[1422]: E0508 00:48:20.640220 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:21.640603 kubelet[1422]: E0508 00:48:21.640532 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:22.641145 kubelet[1422]: E0508 00:48:22.641076 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:23.641317 kubelet[1422]: E0508 00:48:23.641247 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:24.641803 kubelet[1422]: E0508 00:48:24.641731 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:25.642546 kubelet[1422]: E0508 00:48:25.642487 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:26.643214 kubelet[1422]: E0508 00:48:26.643130 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:26.660901 systemd[1]: Created slice kubepods-besteffort-podf217f116_ee34_4cc9_8360_9e71979b7822.slice. 
May 8 00:48:26.758059 kubelet[1422]: I0508 00:48:26.757963 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n4mr\" (UniqueName: \"kubernetes.io/projected/f217f116-ee34-4cc9-8360-9e71979b7822-kube-api-access-2n4mr\") pod \"test-pod-1\" (UID: \"f217f116-ee34-4cc9-8360-9e71979b7822\") " pod="default/test-pod-1" May 8 00:48:26.758059 kubelet[1422]: I0508 00:48:26.758030 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fab79188-8977-4151-8859-6c0fd7e8cd39\" (UniqueName: \"kubernetes.io/nfs/f217f116-ee34-4cc9-8360-9e71979b7822-pvc-fab79188-8977-4151-8859-6c0fd7e8cd39\") pod \"test-pod-1\" (UID: \"f217f116-ee34-4cc9-8360-9e71979b7822\") " pod="default/test-pod-1" May 8 00:48:26.892517 kernel: FS-Cache: Loaded May 8 00:48:26.961571 kernel: RPC: Registered named UNIX socket transport module. May 8 00:48:26.961733 kernel: RPC: Registered udp transport module. May 8 00:48:26.961755 kernel: RPC: Registered tcp transport module. May 8 00:48:26.963621 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 8 00:48:27.030530 kernel: FS-Cache: Netfs 'nfs' registered for caching May 8 00:48:27.225041 kernel: NFS: Registering the id_resolver key type May 8 00:48:27.225221 kernel: Key type id_resolver registered May 8 00:48:27.225249 kernel: Key type id_legacy registered May 8 00:48:27.256689 nfsidmap[2752]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:48:27.260686 nfsidmap[2755]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:48:27.564375 env[1223]: time="2025-05-08T00:48:27.564190613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f217f116-ee34-4cc9-8360-9e71979b7822,Namespace:default,Attempt:0,}" May 8 00:48:27.619972 systemd-networkd[1050]: lxc3776e41f0b81: Link UP May 8 00:48:27.627540 kernel: eth0: renamed from tmp19831 May 8 00:48:27.634964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:48:27.636549 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3776e41f0b81: link becomes ready May 8 00:48:27.636638 systemd-networkd[1050]: lxc3776e41f0b81: Gained carrier May 8 00:48:27.643542 kubelet[1422]: E0508 00:48:27.643432 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:27.864693 env[1223]: time="2025-05-08T00:48:27.864514667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:27.864693 env[1223]: time="2025-05-08T00:48:27.864560475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:27.864693 env[1223]: time="2025-05-08T00:48:27.864575844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:27.864905 env[1223]: time="2025-05-08T00:48:27.864794353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14 pid=2788 runtime=io.containerd.runc.v2 May 8 00:48:27.880010 systemd[1]: run-containerd-runc-k8s.io-19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14-runc.A7G8Hg.mount: Deactivated successfully. May 8 00:48:27.881933 systemd[1]: Started cri-containerd-19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14.scope. May 8 00:48:27.894355 systemd-resolved[1158]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:48:27.918008 env[1223]: time="2025-05-08T00:48:27.917332378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f217f116-ee34-4cc9-8360-9e71979b7822,Namespace:default,Attempt:0,} returns sandbox id \"19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14\"" May 8 00:48:27.918463 env[1223]: time="2025-05-08T00:48:27.918435149Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:48:28.645270 kubelet[1422]: E0508 00:48:28.645197 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:28.804751 systemd-networkd[1050]: lxc3776e41f0b81: Gained IPv6LL May 8 00:48:29.450262 env[1223]: time="2025-05-08T00:48:29.450158288Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:29.543240 env[1223]: time="2025-05-08T00:48:29.543155307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:29.593835 env[1223]: time="2025-05-08T00:48:29.593768191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:29.624971 env[1223]: time="2025-05-08T00:48:29.624874648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:29.625690 env[1223]: time="2025-05-08T00:48:29.625608501Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 8 00:48:29.628271 env[1223]: time="2025-05-08T00:48:29.628218603Z" level=info msg="CreateContainer within sandbox \"19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 00:48:29.646418 kubelet[1422]: E0508 00:48:29.646344 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:29.660686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2981086407.mount: Deactivated successfully. 
May 8 00:48:29.673361 env[1223]: time="2025-05-08T00:48:29.673271784Z" level=info msg="CreateContainer within sandbox \"19831313aad11eaba8749d22f5a1d106b7930c825bae9258aac5dcb0f3d99a14\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7c19f204eb7e8237d2128dfc5041910c6e7629f31492d6ec3671ff2d1bdb4d2e\"" May 8 00:48:29.674113 env[1223]: time="2025-05-08T00:48:29.674057506Z" level=info msg="StartContainer for \"7c19f204eb7e8237d2128dfc5041910c6e7629f31492d6ec3671ff2d1bdb4d2e\"" May 8 00:48:29.689465 systemd[1]: Started cri-containerd-7c19f204eb7e8237d2128dfc5041910c6e7629f31492d6ec3671ff2d1bdb4d2e.scope. May 8 00:48:29.717883 env[1223]: time="2025-05-08T00:48:29.717209571Z" level=info msg="StartContainer for \"7c19f204eb7e8237d2128dfc5041910c6e7629f31492d6ec3671ff2d1bdb4d2e\" returns successfully" May 8 00:48:29.892786 kubelet[1422]: I0508 00:48:29.892706 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.184231865 podStartE2EDuration="25.892685054s" podCreationTimestamp="2025-05-08 00:48:04 +0000 UTC" firstStartedPulling="2025-05-08 00:48:27.91818441 +0000 UTC m=+76.808515721" lastFinishedPulling="2025-05-08 00:48:29.626637599 +0000 UTC m=+78.516968910" observedRunningTime="2025-05-08 00:48:29.892203683 +0000 UTC m=+78.782534994" watchObservedRunningTime="2025-05-08 00:48:29.892685054 +0000 UTC m=+78.783016365" May 8 00:48:30.647136 kubelet[1422]: E0508 00:48:30.646995 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:31.647192 kubelet[1422]: E0508 00:48:31.647132 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:32.573455 kubelet[1422]: E0508 00:48:32.573387 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:32.647848 kubelet[1422]: E0508 00:48:32.647782 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:32.739925 env[1223]: time="2025-05-08T00:48:32.739825826Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:48:32.747705 env[1223]: time="2025-05-08T00:48:32.747654960Z" level=info msg="StopContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" with timeout 2 (s)" May 8 00:48:32.747891 env[1223]: time="2025-05-08T00:48:32.747843019Z" level=info msg="Stop container \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" with signal terminated" May 8 00:48:32.756372 systemd-networkd[1050]: lxc_health: Link DOWN May 8 00:48:32.756383 systemd-networkd[1050]: lxc_health: Lost carrier May 8 00:48:32.795996 systemd[1]: cri-containerd-58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2.scope: Deactivated successfully. May 8 00:48:32.796314 systemd[1]: cri-containerd-58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2.scope: Consumed 7.454s CPU time. May 8 00:48:32.813542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2-rootfs.mount: Deactivated successfully. 
May 8 00:48:32.826534 env[1223]: time="2025-05-08T00:48:32.826353094Z" level=info msg="shim disconnected" id=58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2 May 8 00:48:32.826534 env[1223]: time="2025-05-08T00:48:32.826406516Z" level=warning msg="cleaning up after shim disconnected" id=58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2 namespace=k8s.io May 8 00:48:32.826534 env[1223]: time="2025-05-08T00:48:32.826415164Z" level=info msg="cleaning up dead shim" May 8 00:48:32.834020 env[1223]: time="2025-05-08T00:48:32.833951999Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2922 runtime=io.containerd.runc.v2\n" May 8 00:48:32.839029 env[1223]: time="2025-05-08T00:48:32.838966096Z" level=info msg="StopContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" returns successfully" May 8 00:48:32.839994 env[1223]: time="2025-05-08T00:48:32.839954423Z" level=info msg="StopPodSandbox for \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\"" May 8 00:48:32.840055 env[1223]: time="2025-05-08T00:48:32.840035438Z" level=info msg="Container to stop \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.840055 env[1223]: time="2025-05-08T00:48:32.840049875Z" level=info msg="Container to stop \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.840107 env[1223]: time="2025-05-08T00:48:32.840061147Z" level=info msg="Container to stop \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.840107 env[1223]: time="2025-05-08T00:48:32.840074923Z" level=info msg="Container to stop \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.840107 env[1223]: time="2025-05-08T00:48:32.840084562Z" level=info msg="Container to stop \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:48:32.842089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad-shm.mount: Deactivated successfully. May 8 00:48:32.845444 systemd[1]: cri-containerd-0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad.scope: Deactivated successfully. May 8 00:48:32.862771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad-rootfs.mount: Deactivated successfully. 
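The "Container to stop ... current state CONTAINER_EXITED" checks above are StopPodSandbox confirming that every container in the sandbox has already exited before the sandbox itself is stopped and its shim torn down; once the sandbox is removed, the task and container records are deleted and the rootfs/shm mounts go away. A bare-bones sketch of that cleanup with the containerd client, using the sandbox ID from the log and simplified error handling, might be:

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func removeSandbox(ctx context.Context, client *containerd.Client, id string) error {
	container, err := client.LoadContainer(ctx, id)
	if err != nil {
		if errdefs.IsNotFound(err) {
			return nil // already gone, nothing to do
		}
		return err
	}

	// Delete the task first; this is when the shim exits and the
	// corresponding rootfs mount units are torn down.
	if task, err := container.Task(ctx, nil); err == nil {
		if _, err := task.Delete(ctx, containerd.WithProcessKill); err != nil {
			return err
		}
	}

	// Then drop the container record and its snapshot.
	return container.Delete(ctx, containerd.WithSnapshotCleanup)
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	if err := removeSandbox(ctx, client,
		"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad"); err != nil {
		log.Fatal(err)
	}
}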
May 8 00:48:32.866409 env[1223]: time="2025-05-08T00:48:32.866345361Z" level=info msg="shim disconnected" id=0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad May 8 00:48:32.866409 env[1223]: time="2025-05-08T00:48:32.866401698Z" level=warning msg="cleaning up after shim disconnected" id=0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad namespace=k8s.io May 8 00:48:32.866409 env[1223]: time="2025-05-08T00:48:32.866411467Z" level=info msg="cleaning up dead shim" May 8 00:48:32.873312 env[1223]: time="2025-05-08T00:48:32.873252002Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2953 runtime=io.containerd.runc.v2\n" May 8 00:48:32.873662 env[1223]: time="2025-05-08T00:48:32.873622320Z" level=info msg="TearDown network for sandbox \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" successfully" May 8 00:48:32.873662 env[1223]: time="2025-05-08T00:48:32.873653810Z" level=info msg="StopPodSandbox for \"0ca2a50eeb03edd0617b5b838c247e705d03e121999762abc14092364e1353ad\" returns successfully" May 8 00:48:32.889844 kubelet[1422]: I0508 00:48:32.889804 1422 scope.go:117] "RemoveContainer" containerID="58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2" May 8 00:48:32.891073 env[1223]: time="2025-05-08T00:48:32.891042172Z" level=info msg="RemoveContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\"" May 8 00:48:32.897557 env[1223]: time="2025-05-08T00:48:32.897497050Z" level=info msg="RemoveContainer for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" returns successfully" May 8 00:48:32.897871 kubelet[1422]: I0508 00:48:32.897849 1422 scope.go:117] "RemoveContainer" containerID="f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3" May 8 00:48:32.899828 env[1223]: time="2025-05-08T00:48:32.899799068Z" level=info msg="RemoveContainer for \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\"" May 8 00:48:32.903151 env[1223]: time="2025-05-08T00:48:32.903112848Z" level=info msg="RemoveContainer for \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\" returns successfully" May 8 00:48:32.903364 kubelet[1422]: I0508 00:48:32.903248 1422 scope.go:117] "RemoveContainer" containerID="dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851" May 8 00:48:32.904386 env[1223]: time="2025-05-08T00:48:32.904333478Z" level=info msg="RemoveContainer for \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\"" May 8 00:48:32.907429 env[1223]: time="2025-05-08T00:48:32.907398513Z" level=info msg="RemoveContainer for \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\" returns successfully" May 8 00:48:32.907625 kubelet[1422]: I0508 00:48:32.907590 1422 scope.go:117] "RemoveContainer" containerID="3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372" May 8 00:48:32.908661 env[1223]: time="2025-05-08T00:48:32.908628091Z" level=info msg="RemoveContainer for \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\"" May 8 00:48:32.911840 env[1223]: time="2025-05-08T00:48:32.911771355Z" level=info msg="RemoveContainer for \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\" returns successfully" May 8 00:48:32.912038 kubelet[1422]: I0508 00:48:32.912005 1422 scope.go:117] "RemoveContainer" containerID="6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a" May 8 00:48:32.913040 env[1223]: 
time="2025-05-08T00:48:32.913009520Z" level=info msg="RemoveContainer for \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\"" May 8 00:48:32.915891 env[1223]: time="2025-05-08T00:48:32.915859895Z" level=info msg="RemoveContainer for \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\" returns successfully" May 8 00:48:32.916044 kubelet[1422]: I0508 00:48:32.916006 1422 scope.go:117] "RemoveContainer" containerID="58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2" May 8 00:48:32.916402 env[1223]: time="2025-05-08T00:48:32.916287751Z" level=error msg="ContainerStatus for \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\": not found" May 8 00:48:32.916544 kubelet[1422]: E0508 00:48:32.916522 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\": not found" containerID="58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2" May 8 00:48:32.916610 kubelet[1422]: I0508 00:48:32.916547 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2"} err="failed to get container status \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"58d20d9789592a808ffe85957f5492a199aa0676ebec913e8dc9c54a606467f2\": not found" May 8 00:48:32.916610 kubelet[1422]: I0508 00:48:32.916580 1422 scope.go:117] "RemoveContainer" containerID="f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3" May 8 00:48:32.916772 env[1223]: time="2025-05-08T00:48:32.916723583Z" level=error msg="ContainerStatus for \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\": not found" May 8 00:48:32.916868 kubelet[1422]: E0508 00:48:32.916849 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\": not found" containerID="f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3" May 8 00:48:32.916907 kubelet[1422]: I0508 00:48:32.916867 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3"} err="failed to get container status \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2c954443ad0a307c83d709437b6e31c38556cfc8788831938e892de970b63c3\": not found" May 8 00:48:32.916907 kubelet[1422]: I0508 00:48:32.916878 1422 scope.go:117] "RemoveContainer" containerID="dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851" May 8 00:48:32.917135 env[1223]: time="2025-05-08T00:48:32.917066118Z" level=error msg="ContainerStatus for \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\" failed" error="rpc error: code = NotFound desc = an error occurred when try to 
find container \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\": not found" May 8 00:48:32.917294 kubelet[1422]: E0508 00:48:32.917261 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\": not found" containerID="dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851" May 8 00:48:32.917294 kubelet[1422]: I0508 00:48:32.917291 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851"} err="failed to get container status \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd6d3fe954fd495accb7c4a4665b530dbb55171e638541401597e01fdb5a8851\": not found" May 8 00:48:32.917403 kubelet[1422]: I0508 00:48:32.917305 1422 scope.go:117] "RemoveContainer" containerID="3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372" May 8 00:48:32.917528 env[1223]: time="2025-05-08T00:48:32.917455000Z" level=error msg="ContainerStatus for \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\": not found" May 8 00:48:32.917624 kubelet[1422]: E0508 00:48:32.917604 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\": not found" containerID="3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372" May 8 00:48:32.917624 kubelet[1422]: I0508 00:48:32.917621 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372"} err="failed to get container status \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c3873034448c45e2afb86f505d4144b585a4191623fad7c73a077faccbc0372\": not found" May 8 00:48:32.917770 kubelet[1422]: I0508 00:48:32.917631 1422 scope.go:117] "RemoveContainer" containerID="6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a" May 8 00:48:32.917817 env[1223]: time="2025-05-08T00:48:32.917773970Z" level=error msg="ContainerStatus for \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\": not found" May 8 00:48:32.917992 kubelet[1422]: E0508 00:48:32.917957 1422 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\": not found" containerID="6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a" May 8 00:48:32.918059 kubelet[1422]: I0508 00:48:32.917995 1422 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a"} err="failed to get container status 
\"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f411bdbc2a1e17ca2bff729f009c85b5bd5e28fe9ec8456080f3f914ebec51a\": not found" May 8 00:48:32.999569 kubelet[1422]: I0508 00:48:32.999522 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-net\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999569 kubelet[1422]: I0508 00:48:32.999554 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-etc-cni-netd\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999569 kubelet[1422]: I0508 00:48:32.999569 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-run\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999584 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-cgroup\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999604 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-xtables-lock\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999623 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-config-path\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999638 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-bpf-maps\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999643 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:32.999764 kubelet[1422]: I0508 00:48:32.999651 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hostproc\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999704 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-kernel\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999735 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fab8730-02dd-4dff-a1dd-63d4a80165ca-clustermesh-secrets\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999761 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hubble-tls\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999782 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cni-path\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999676 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000005 kubelet[1422]: I0508 00:48:32.999687 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000234 kubelet[1422]: I0508 00:48:32.999698 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000234 kubelet[1422]: I0508 00:48:32.999708 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000234 kubelet[1422]: I0508 00:48:32.999703 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000234 kubelet[1422]: I0508 00:48:32.999762 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000234 kubelet[1422]: I0508 00:48:32.999787 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999805 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8vdc\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-kube-api-access-w8vdc\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999902 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-lib-modules\") pod \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\" (UID: \"0fab8730-02dd-4dff-a1dd-63d4a80165ca\") " May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999964 1422 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-bpf-maps\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999974 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hostproc\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999982 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-kernel\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999990 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-run\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:32.999997 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-host-proc-sys-net\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000420 kubelet[1422]: I0508 00:48:33.000004 1422 reconciler_common.go:299] "Volume detached for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-etc-cni-netd\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000723 kubelet[1422]: I0508 00:48:33.000011 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-cgroup\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000723 kubelet[1422]: I0508 00:48:33.000018 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-xtables-lock\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.000723 kubelet[1422]: I0508 00:48:33.000037 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.000723 kubelet[1422]: I0508 00:48:33.000665 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:33.001728 kubelet[1422]: I0508 00:48:33.001688 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:48:33.003145 kubelet[1422]: I0508 00:48:33.003075 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:33.003956 kubelet[1422]: I0508 00:48:33.003902 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-kube-api-access-w8vdc" (OuterVolumeSpecName: "kube-api-access-w8vdc") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "kube-api-access-w8vdc". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:33.004561 systemd[1]: var-lib-kubelet-pods-0fab8730\x2d02dd\x2d4dff\x2da1dd\x2d63d4a80165ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:48:33.004815 kubelet[1422]: I0508 00:48:33.004772 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fab8730-02dd-4dff-a1dd-63d4a80165ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0fab8730-02dd-4dff-a1dd-63d4a80165ca" (UID: "0fab8730-02dd-4dff-a1dd-63d4a80165ca"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100595 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cilium-config-path\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100624 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8vdc\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-kube-api-access-w8vdc\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100639 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-lib-modules\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100648 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fab8730-02dd-4dff-a1dd-63d4a80165ca-clustermesh-secrets\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100655 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fab8730-02dd-4dff-a1dd-63d4a80165ca-hubble-tls\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.100671 kubelet[1422]: I0508 00:48:33.100662 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fab8730-02dd-4dff-a1dd-63d4a80165ca-cni-path\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:33.194577 systemd[1]: Removed slice kubepods-burstable-pod0fab8730_02dd_4dff_a1dd_63d4a80165ca.slice. May 8 00:48:33.194663 systemd[1]: kubepods-burstable-pod0fab8730_02dd_4dff_a1dd_63d4a80165ca.slice: Consumed 7.573s CPU time. May 8 00:48:33.648423 kubelet[1422]: E0508 00:48:33.648349 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:33.723079 systemd[1]: var-lib-kubelet-pods-0fab8730\x2d02dd\x2d4dff\x2da1dd\x2d63d4a80165ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw8vdc.mount: Deactivated successfully. May 8 00:48:33.723184 systemd[1]: var-lib-kubelet-pods-0fab8730\x2d02dd\x2d4dff\x2da1dd\x2d63d4a80165ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:48:34.649561 kubelet[1422]: E0508 00:48:34.649490 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:34.707386 kubelet[1422]: I0508 00:48:34.707320 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fab8730-02dd-4dff-a1dd-63d4a80165ca" path="/var/lib/kubelet/pods/0fab8730-02dd-4dff-a1dd-63d4a80165ca/volumes" May 8 00:48:35.462283 kubelet[1422]: I0508 00:48:35.462224 1422 memory_manager.go:355] "RemoveStaleState removing state" podUID="0fab8730-02dd-4dff-a1dd-63d4a80165ca" containerName="cilium-agent" May 8 00:48:35.467614 systemd[1]: Created slice kubepods-besteffort-pod8f46979a_f5b2_4557_ba25_53f01d6c1c99.slice. May 8 00:48:35.481039 systemd[1]: Created slice kubepods-burstable-pod8e6e43eb_aab1_4d0a_b88f_d7008f6d6f78.slice. 
May 8 00:48:35.612798 kubelet[1422]: I0508 00:48:35.612729 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-cgroup\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.612798 kubelet[1422]: I0508 00:48:35.612795 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-clustermesh-secrets\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613051 kubelet[1422]: I0508 00:48:35.612819 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-net\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613051 kubelet[1422]: I0508 00:48:35.612841 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-ipsec-secrets\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613051 kubelet[1422]: I0508 00:48:35.612870 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f46979a-f5b2-4557-ba25-53f01d6c1c99-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6rfkt\" (UID: \"8f46979a-f5b2-4557-ba25-53f01d6c1c99\") " pod="kube-system/cilium-operator-6c4d7847fc-6rfkt" May 8 00:48:35.613051 kubelet[1422]: I0508 00:48:35.612892 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-lib-modules\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613051 kubelet[1422]: I0508 00:48:35.612927 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-config-path\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.612967 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hubble-tls\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.612991 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-run\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.613018 1422 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-bpf-maps\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.613037 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hostproc\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.613056 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-xtables-lock\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613233 kubelet[1422]: I0508 00:48:35.613075 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqfrq\" (UniqueName: \"kubernetes.io/projected/8f46979a-f5b2-4557-ba25-53f01d6c1c99-kube-api-access-bqfrq\") pod \"cilium-operator-6c4d7847fc-6rfkt\" (UID: \"8f46979a-f5b2-4557-ba25-53f01d6c1c99\") " pod="kube-system/cilium-operator-6c4d7847fc-6rfkt" May 8 00:48:35.613415 kubelet[1422]: I0508 00:48:35.613134 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cni-path\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613415 kubelet[1422]: I0508 00:48:35.613185 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-etc-cni-netd\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613415 kubelet[1422]: I0508 00:48:35.613205 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-kernel\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.613415 kubelet[1422]: I0508 00:48:35.613231 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwk6\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-kube-api-access-chwk6\") pod \"cilium-htt47\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " pod="kube-system/cilium-htt47" May 8 00:48:35.648697 kubelet[1422]: E0508 00:48:35.648620 1422 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-chwk6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-htt47" podUID="8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" May 8 00:48:35.650002 kubelet[1422]: E0508 00:48:35.649938 1422 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:35.771178 kubelet[1422]: E0508 00:48:35.771033 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:35.771743 env[1223]: time="2025-05-08T00:48:35.771632406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6rfkt,Uid:8f46979a-f5b2-4557-ba25-53f01d6c1c99,Namespace:kube-system,Attempt:0,}" May 8 00:48:35.785963 env[1223]: time="2025-05-08T00:48:35.785867291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:35.785963 env[1223]: time="2025-05-08T00:48:35.785917697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:35.785963 env[1223]: time="2025-05-08T00:48:35.785941683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:35.786214 env[1223]: time="2025-05-08T00:48:35.786132757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b4a514de74c9412553b8624fef917a350a2f53137e0721dee603b6d38e05763 pid=2983 runtime=io.containerd.runc.v2 May 8 00:48:35.801148 systemd[1]: Started cri-containerd-8b4a514de74c9412553b8624fef917a350a2f53137e0721dee603b6d38e05763.scope. May 8 00:48:35.836876 env[1223]: time="2025-05-08T00:48:35.836819579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6rfkt,Uid:8f46979a-f5b2-4557-ba25-53f01d6c1c99,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b4a514de74c9412553b8624fef917a350a2f53137e0721dee603b6d38e05763\"" May 8 00:48:35.837706 kubelet[1422]: E0508 00:48:35.837676 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:35.838735 env[1223]: time="2025-05-08T00:48:35.838699134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:48:36.016414 kubelet[1422]: I0508 00:48:36.016357 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-cgroup\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016414 kubelet[1422]: I0508 00:48:36.016428 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-clustermesh-secrets\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016450 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016458 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-config-path\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016545 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-bpf-maps\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016572 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hostproc\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016596 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-etc-cni-netd\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016679 kubelet[1422]: I0508 00:48:36.016613 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-net\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 kubelet[1422]: I0508 00:48:36.016635 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-ipsec-secrets\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 kubelet[1422]: I0508 00:48:36.016653 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-lib-modules\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 kubelet[1422]: I0508 00:48:36.016676 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-run\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 kubelet[1422]: I0508 00:48:36.016699 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cni-path\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 kubelet[1422]: I0508 00:48:36.016717 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-xtables-lock\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.016863 
kubelet[1422]: I0508 00:48:36.016738 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chwk6\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-kube-api-access-chwk6\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.016759 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-kernel\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.016782 1422 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hubble-tls\") pod \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\" (UID: \"8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78\") " May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.016822 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-cgroup\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.016945 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.016995 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017027 kubelet[1422]: I0508 00:48:36.017010 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hostproc" (OuterVolumeSpecName: "hostproc") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017229 kubelet[1422]: I0508 00:48:36.017025 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017229 kubelet[1422]: I0508 00:48:36.017045 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017535 kubelet[1422]: I0508 00:48:36.017515 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017603 kubelet[1422]: I0508 00:48:36.017538 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017603 kubelet[1422]: I0508 00:48:36.017574 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cni-path" (OuterVolumeSpecName: "cni-path") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.017668 kubelet[1422]: I0508 00:48:36.017606 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 8 00:48:36.019121 kubelet[1422]: I0508 00:48:36.019075 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 8 00:48:36.020275 kubelet[1422]: I0508 00:48:36.020239 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:36.020382 kubelet[1422]: I0508 00:48:36.020339 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:48:36.020612 kubelet[1422]: I0508 00:48:36.020432 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-kube-api-access-chwk6" (OuterVolumeSpecName: "kube-api-access-chwk6") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "kube-api-access-chwk6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 8 00:48:36.021675 kubelet[1422]: I0508 00:48:36.021599 1422 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" (UID: "8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118006 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-net\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118072 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-ipsec-secrets\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118108 1422 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-lib-modules\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118120 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-run\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118135 1422 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cni-path\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118122 kubelet[1422]: I0508 00:48:36.118146 1422 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-xtables-lock\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118157 1422 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-chwk6\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-kube-api-access-chwk6\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118168 1422 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-host-proc-sys-kernel\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118178 1422 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hubble-tls\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118188 1422 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-clustermesh-secrets\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118198 1422 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-cilium-config-path\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118208 1422 
reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-bpf-maps\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118223 1422 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-hostproc\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.118511 kubelet[1422]: I0508 00:48:36.118234 1422 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78-etc-cni-netd\") on node \"10.0.0.83\" DevicePath \"\"" May 8 00:48:36.650373 kubelet[1422]: E0508 00:48:36.650304 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:36.713118 systemd[1]: Removed slice kubepods-burstable-pod8e6e43eb_aab1_4d0a_b88f_d7008f6d6f78.slice. May 8 00:48:36.718804 systemd[1]: var-lib-kubelet-pods-8e6e43eb\x2daab1\x2d4d0a\x2db88f\x2dd7008f6d6f78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dchwk6.mount: Deactivated successfully. May 8 00:48:36.718900 systemd[1]: var-lib-kubelet-pods-8e6e43eb\x2daab1\x2d4d0a\x2db88f\x2dd7008f6d6f78-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:48:36.718964 systemd[1]: var-lib-kubelet-pods-8e6e43eb\x2daab1\x2d4d0a\x2db88f\x2dd7008f6d6f78-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 00:48:36.719021 systemd[1]: var-lib-kubelet-pods-8e6e43eb\x2daab1\x2d4d0a\x2db88f\x2dd7008f6d6f78-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:48:36.937381 systemd[1]: Created slice kubepods-burstable-poda7a77488_f158_4610_977d_0b4a88f7fa2a.slice. 
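The kubepods-burstable-pod<uid>.slice and kubepods-besteffort-pod<uid>.slice units created and removed around the pod lifecycle above follow kubelet's systemd cgroup-driver naming: the QoS class in lower case plus the pod UID with "-" replaced by "_", so the UID never needs \x2d escaping inside the unit name. A small helper reproducing that convention (UIDs taken from the log; this mirrors the observed naming, not kubelet's actual code):

package main

import (
	"fmt"
	"strings"
)

// podSliceName returns the systemd slice unit kubelet uses for a pod in the
// given QoS class when the cgroup driver is "systemd".
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
}

func main() {
	fmt.Println(podSliceName("Burstable", "a7a77488-f158-4610-977d-0b4a88f7fa2a"))
	// kubepods-burstable-poda7a77488_f158_4610_977d_0b4a88f7fa2a.slice
	fmt.Println(podSliceName("BestEffort", "8f46979a-f5b2-4557-ba25-53f01d6c1c99"))
	// kubepods-besteffort-pod8f46979a_f5b2_4557_ba25_53f01d6c1c99.slice
}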
May 8 00:48:37.024243 kubelet[1422]: I0508 00:48:37.024173 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7a77488-f158-4610-977d-0b4a88f7fa2a-hubble-tls\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024243 kubelet[1422]: I0508 00:48:37.024226 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-cilium-run\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024243 kubelet[1422]: I0508 00:48:37.024244 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-cni-path\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024243 kubelet[1422]: I0508 00:48:37.024257 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-etc-cni-netd\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024506 kubelet[1422]: I0508 00:48:37.024272 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-lib-modules\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024506 kubelet[1422]: I0508 00:48:37.024287 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-xtables-lock\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024506 kubelet[1422]: I0508 00:48:37.024333 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7a77488-f158-4610-977d-0b4a88f7fa2a-clustermesh-secrets\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024506 kubelet[1422]: I0508 00:48:37.024362 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7a77488-f158-4610-977d-0b4a88f7fa2a-cilium-config-path\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024506 kubelet[1422]: I0508 00:48:37.024389 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a7a77488-f158-4610-977d-0b4a88f7fa2a-cilium-ipsec-secrets\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024402 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-host-proc-sys-kernel\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024416 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-bpf-maps\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024436 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-host-proc-sys-net\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024454 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26m5\" (UniqueName: \"kubernetes.io/projected/a7a77488-f158-4610-977d-0b4a88f7fa2a-kube-api-access-x26m5\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024490 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-hostproc\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.024628 kubelet[1422]: I0508 00:48:37.024504 1422 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7a77488-f158-4610-977d-0b4a88f7fa2a-cilium-cgroup\") pod \"cilium-vnfck\" (UID: \"a7a77488-f158-4610-977d-0b4a88f7fa2a\") " pod="kube-system/cilium-vnfck" May 8 00:48:37.246709 kubelet[1422]: E0508 00:48:37.246556 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:37.247198 env[1223]: time="2025-05-08T00:48:37.247099212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnfck,Uid:a7a77488-f158-4610-977d-0b4a88f7fa2a,Namespace:kube-system,Attempt:0,}" May 8 00:48:37.270573 env[1223]: time="2025-05-08T00:48:37.270425115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:48:37.270573 env[1223]: time="2025-05-08T00:48:37.270505789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:48:37.270573 env[1223]: time="2025-05-08T00:48:37.270523914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:48:37.270803 env[1223]: time="2025-05-08T00:48:37.270707303Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7 pid=3032 runtime=io.containerd.runc.v2 May 8 00:48:37.284895 systemd[1]: Started cri-containerd-6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7.scope. 
May 8 00:48:37.305323 env[1223]: time="2025-05-08T00:48:37.305263714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vnfck,Uid:a7a77488-f158-4610-977d-0b4a88f7fa2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\"" May 8 00:48:37.306122 kubelet[1422]: E0508 00:48:37.306098 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:37.308139 env[1223]: time="2025-05-08T00:48:37.308094299Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:48:37.322496 env[1223]: time="2025-05-08T00:48:37.322424264Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1\"" May 8 00:48:37.323113 env[1223]: time="2025-05-08T00:48:37.323037864Z" level=info msg="StartContainer for \"24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1\"" May 8 00:48:37.340359 systemd[1]: Started cri-containerd-24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1.scope. May 8 00:48:37.367882 env[1223]: time="2025-05-08T00:48:37.367810507Z" level=info msg="StartContainer for \"24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1\" returns successfully" May 8 00:48:37.378371 systemd[1]: cri-containerd-24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1.scope: Deactivated successfully. May 8 00:48:37.418135 env[1223]: time="2025-05-08T00:48:37.418062015Z" level=info msg="shim disconnected" id=24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1 May 8 00:48:37.418135 env[1223]: time="2025-05-08T00:48:37.418134753Z" level=warning msg="cleaning up after shim disconnected" id=24255d87fb588c5aa371cfc989289aa40cb8c4023e8c6ca5eea07f29ba0843b1 namespace=k8s.io May 8 00:48:37.418135 env[1223]: time="2025-05-08T00:48:37.418147808Z" level=info msg="cleaning up dead shim" May 8 00:48:37.427237 env[1223]: time="2025-05-08T00:48:37.427165327Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\n" May 8 00:48:37.651082 kubelet[1422]: E0508 00:48:37.651016 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:37.695904 kubelet[1422]: E0508 00:48:37.695823 1422 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:48:37.903406 kubelet[1422]: E0508 00:48:37.903007 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:37.904857 env[1223]: time="2025-05-08T00:48:37.904802688Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:48:37.921146 env[1223]: time="2025-05-08T00:48:37.921083021Z" level=info msg="CreateContainer within sandbox 
\"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c\"" May 8 00:48:37.921780 env[1223]: time="2025-05-08T00:48:37.921716579Z" level=info msg="StartContainer for \"656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c\"" May 8 00:48:37.945746 systemd[1]: Started cri-containerd-656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c.scope. May 8 00:48:37.978690 systemd[1]: cri-containerd-656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c.scope: Deactivated successfully. May 8 00:48:37.979389 env[1223]: time="2025-05-08T00:48:37.978697303Z" level=info msg="StartContainer for \"656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c\" returns successfully" May 8 00:48:38.055602 env[1223]: time="2025-05-08T00:48:38.055507306Z" level=info msg="shim disconnected" id=656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c May 8 00:48:38.055602 env[1223]: time="2025-05-08T00:48:38.055594032Z" level=warning msg="cleaning up after shim disconnected" id=656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c namespace=k8s.io May 8 00:48:38.055602 env[1223]: time="2025-05-08T00:48:38.055609180Z" level=info msg="cleaning up dead shim" May 8 00:48:38.066196 env[1223]: time="2025-05-08T00:48:38.066120610Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3176 runtime=io.containerd.runc.v2\n" May 8 00:48:38.651530 kubelet[1422]: E0508 00:48:38.651423 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:38.706516 kubelet[1422]: E0508 00:48:38.705585 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:38.708168 kubelet[1422]: I0508 00:48:38.708100 1422 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78" path="/var/lib/kubelet/pods/8e6e43eb-aab1-4d0a-b88f-d7008f6d6f78/volumes" May 8 00:48:38.718822 systemd[1]: run-containerd-runc-k8s.io-656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c-runc.xexMcD.mount: Deactivated successfully. May 8 00:48:38.718997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-656ace90c2943571c314a502cf63a2a705174c8b811c35d216f96e05a998bd8c-rootfs.mount: Deactivated successfully. 
May 8 00:48:38.843151 env[1223]: time="2025-05-08T00:48:38.843057017Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:38.845555 env[1223]: time="2025-05-08T00:48:38.845499291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:38.847625 env[1223]: time="2025-05-08T00:48:38.847558695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:48:38.848167 env[1223]: time="2025-05-08T00:48:38.848117731Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 8 00:48:38.850541 env[1223]: time="2025-05-08T00:48:38.850489059Z" level=info msg="CreateContainer within sandbox \"8b4a514de74c9412553b8624fef917a350a2f53137e0721dee603b6d38e05763\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:48:38.867495 env[1223]: time="2025-05-08T00:48:38.867399501Z" level=info msg="CreateContainer within sandbox \"8b4a514de74c9412553b8624fef917a350a2f53137e0721dee603b6d38e05763\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"33341bc8970b05983ed62d1660b678bab3920228bf8842f0bc54e4025b7d5b59\"" May 8 00:48:38.868241 env[1223]: time="2025-05-08T00:48:38.868173155Z" level=info msg="StartContainer for \"33341bc8970b05983ed62d1660b678bab3920228bf8842f0bc54e4025b7d5b59\"" May 8 00:48:38.890849 systemd[1]: Started cri-containerd-33341bc8970b05983ed62d1660b678bab3920228bf8842f0bc54e4025b7d5b59.scope. May 8 00:48:38.908892 kubelet[1422]: E0508 00:48:38.908242 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:38.911361 env[1223]: time="2025-05-08T00:48:38.911305262Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:48:38.923510 env[1223]: time="2025-05-08T00:48:38.921335704Z" level=info msg="StartContainer for \"33341bc8970b05983ed62d1660b678bab3920228bf8842f0bc54e4025b7d5b59\" returns successfully" May 8 00:48:38.937066 env[1223]: time="2025-05-08T00:48:38.937005812Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2\"" May 8 00:48:38.938025 env[1223]: time="2025-05-08T00:48:38.937986792Z" level=info msg="StartContainer for \"7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2\"" May 8 00:48:38.957358 systemd[1]: Started cri-containerd-7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2.scope. 
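
The ImageCreate/ImageUpdate events and the PullImage result above are the CRI ImageService side of the same flow. A minimal sketch of the equivalent pull by tag@digest follows, reusing the assumed containerd socket from the earlier sketch; the image reference is copied from the log line.

    // cri_pull_sketch.go
    //
    // Minimal sketch of the CRI ImageService call behind the PullImage line above.
    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial CRI endpoint: %v", err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{
                // Pulling by tag@digest pins the content; the runtime resolves it
                // to the image ID reported in the log (sha256:ed355de9...).
                Image: "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e",
            },
        })
        if err != nil {
            log.Fatalf("PullImage: %v", err)
        }
        log.Printf("pulled, image ref: %s", resp.ImageRef)
    }
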
May 8 00:48:39.279597 env[1223]: time="2025-05-08T00:48:39.279366526Z" level=info msg="StartContainer for \"7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2\" returns successfully" May 8 00:48:39.292409 systemd[1]: cri-containerd-7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2.scope: Deactivated successfully. May 8 00:48:39.651801 kubelet[1422]: E0508 00:48:39.651746 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:39.913384 kubelet[1422]: E0508 00:48:39.913236 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:39.916692 kubelet[1422]: E0508 00:48:39.916667 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:40.247439 env[1223]: time="2025-05-08T00:48:40.247242853Z" level=info msg="shim disconnected" id=7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2 May 8 00:48:40.247439 env[1223]: time="2025-05-08T00:48:40.247311003Z" level=warning msg="cleaning up after shim disconnected" id=7d01036e19ae769ec6877402697572461e3357e30e0bc8b3e4eb77f3416daea2 namespace=k8s.io May 8 00:48:40.247439 env[1223]: time="2025-05-08T00:48:40.247323637Z" level=info msg="cleaning up dead shim" May 8 00:48:40.256321 env[1223]: time="2025-05-08T00:48:40.256273078Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3273 runtime=io.containerd.runc.v2\n" May 8 00:48:40.422460 kubelet[1422]: I0508 00:48:40.422356 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6rfkt" podStartSLOduration=2.411509669 podStartE2EDuration="5.422328925s" podCreationTimestamp="2025-05-08 00:48:35 +0000 UTC" firstStartedPulling="2025-05-08 00:48:35.838312517 +0000 UTC m=+84.728643828" lastFinishedPulling="2025-05-08 00:48:38.849131773 +0000 UTC m=+87.739463084" observedRunningTime="2025-05-08 00:48:40.244297425 +0000 UTC m=+89.134628736" watchObservedRunningTime="2025-05-08 00:48:40.422328925 +0000 UTC m=+89.312660236" May 8 00:48:40.652449 kubelet[1422]: E0508 00:48:40.652361 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:40.922690 kubelet[1422]: E0508 00:48:40.922557 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:40.923408 kubelet[1422]: E0508 00:48:40.923381 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:40.925625 env[1223]: time="2025-05-08T00:48:40.925561473Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:48:41.163649 env[1223]: time="2025-05-08T00:48:41.163572742Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id 
\"347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0\"" May 8 00:48:41.164292 env[1223]: time="2025-05-08T00:48:41.164219894Z" level=info msg="StartContainer for \"347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0\"" May 8 00:48:41.184240 systemd[1]: Started cri-containerd-347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0.scope. May 8 00:48:41.209039 systemd[1]: cri-containerd-347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0.scope: Deactivated successfully. May 8 00:48:41.212499 env[1223]: time="2025-05-08T00:48:41.212438618Z" level=info msg="StartContainer for \"347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0\" returns successfully" May 8 00:48:41.232030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0-rootfs.mount: Deactivated successfully. May 8 00:48:41.237397 env[1223]: time="2025-05-08T00:48:41.237328906Z" level=info msg="shim disconnected" id=347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0 May 8 00:48:41.237397 env[1223]: time="2025-05-08T00:48:41.237393910Z" level=warning msg="cleaning up after shim disconnected" id=347f84aa6ee9c5aaf606526ab8a1c9c26dc9b884f6d34a6347daad7d06284ec0 namespace=k8s.io May 8 00:48:41.237397 env[1223]: time="2025-05-08T00:48:41.237405391Z" level=info msg="cleaning up dead shim" May 8 00:48:41.245349 env[1223]: time="2025-05-08T00:48:41.245282155Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:48:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" May 8 00:48:41.652624 kubelet[1422]: E0508 00:48:41.652551 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:41.928839 kubelet[1422]: E0508 00:48:41.928691 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:41.931184 env[1223]: time="2025-05-08T00:48:41.931097010Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:48:41.955387 env[1223]: time="2025-05-08T00:48:41.955295983Z" level=info msg="CreateContainer within sandbox \"6343eb12bc79a0505ca80a679a63156f9d04af590cd78032e33c51380b11a2f7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6aa89c466f5440780f22cdd0cd89fc6833e58c64e71029be779aec5b2c7f368f\"" May 8 00:48:41.956213 env[1223]: time="2025-05-08T00:48:41.956157422Z" level=info msg="StartContainer for \"6aa89c466f5440780f22cdd0cd89fc6833e58c64e71029be779aec5b2c7f368f\"" May 8 00:48:41.973199 systemd[1]: Started cri-containerd-6aa89c466f5440780f22cdd0cd89fc6833e58c64e71029be779aec5b2c7f368f.scope. 
May 8 00:48:42.010596 env[1223]: time="2025-05-08T00:48:42.010514394Z" level=info msg="StartContainer for \"6aa89c466f5440780f22cdd0cd89fc6833e58c64e71029be779aec5b2c7f368f\" returns successfully" May 8 00:48:42.343524 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 8 00:48:42.653407 kubelet[1422]: E0508 00:48:42.653329 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:42.934266 kubelet[1422]: E0508 00:48:42.934126 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:43.101614 kubelet[1422]: I0508 00:48:43.101517 1422 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vnfck" podStartSLOduration=7.101463205 podStartE2EDuration="7.101463205s" podCreationTimestamp="2025-05-08 00:48:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:48:43.10106135 +0000 UTC m=+91.991392682" watchObservedRunningTime="2025-05-08 00:48:43.101463205 +0000 UTC m=+91.991794516" May 8 00:48:43.654053 kubelet[1422]: E0508 00:48:43.653984 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:43.935937 kubelet[1422]: E0508 00:48:43.935749 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:44.654286 kubelet[1422]: E0508 00:48:44.654171 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:45.408029 systemd-networkd[1050]: lxc_health: Link UP May 8 00:48:45.416862 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:48:45.416587 systemd-networkd[1050]: lxc_health: Gained carrier May 8 00:48:45.656447 kubelet[1422]: E0508 00:48:45.656379 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:46.657628 kubelet[1422]: E0508 00:48:46.657536 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:47.057182 systemd-networkd[1050]: lxc_health: Gained IPv6LL May 8 00:48:47.248280 kubelet[1422]: E0508 00:48:47.248216 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:47.658596 kubelet[1422]: E0508 00:48:47.658539 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:47.943033 kubelet[1422]: E0508 00:48:47.942883 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:48.659367 kubelet[1422]: E0508 00:48:48.659277 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:48.944821 kubelet[1422]: E0508 00:48:48.944700 1422 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:48:49.660044 kubelet[1422]: E0508 00:48:49.659964 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:50.661208 kubelet[1422]: E0508 00:48:50.661115 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:51.662178 kubelet[1422]: E0508 00:48:51.662109 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:52.574334 kubelet[1422]: E0508 00:48:52.574242 1422 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:52.662632 kubelet[1422]: E0508 00:48:52.662545 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:48:53.663311 kubelet[1422]: E0508 00:48:53.663229 1422 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
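
For the pod_startup_latency_tracker entries in this journal, the reported durations are consistent with podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp and podStartSLOduration = E2E minus the image-pull window (lastFinishedPulling - firstStartedPulling); for cilium-vnfck, which pulled nothing, the two values are equal. The sketch below simply redoes that arithmetic for the cilium-operator pod as an illustration of the relationship, not as kubelet code.

    // pod_startup_slo_sketch.go
    //
    // Recomputes the two startup durations reported for
    // kube-system/cilium-operator-6c4d7847fc-6rfkt from the timestamps in the log.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(v string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-08 00:48:35 +0000 UTC")
        firstPull := mustParse("2025-05-08 00:48:35.838312517 +0000 UTC")
        lastPull := mustParse("2025-05-08 00:48:38.849131773 +0000 UTC")
        observed := mustParse("2025-05-08 00:48:40.422328925 +0000 UTC")

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
        // Prints 5.422328925s and 2.411509669s, matching the journal entry.
    }
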