May 10 01:43:02.902304 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025
May 10 01:43:02.902343 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 01:43:02.902361 kernel: BIOS-provided physical RAM map:
May 10 01:43:02.902371 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 10 01:43:02.902380 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 10 01:43:02.902398 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 10 01:43:02.902411 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
May 10 01:43:02.902421 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
May 10 01:43:02.902430 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 10 01:43:02.902439 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 10 01:43:02.902453 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 10 01:43:02.902463 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 10 01:43:02.902472 kernel: NX (Execute Disable) protection: active
May 10 01:43:02.902482 kernel: SMBIOS 2.8 present.
May 10 01:43:02.902493 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
May 10 01:43:02.902504 kernel: Hypervisor detected: KVM
May 10 01:43:02.902518 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 10 01:43:02.902528 kernel: kvm-clock: cpu 0, msr 70196001, primary cpu clock
May 10 01:43:02.902538 kernel: kvm-clock: using sched offset of 4812361481 cycles
May 10 01:43:02.902549 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 10 01:43:02.902559 kernel: tsc: Detected 2799.998 MHz processor
May 10 01:43:02.902569 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 10 01:43:02.902580 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 10 01:43:02.902590 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
May 10 01:43:02.902600 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 10 01:43:02.902614 kernel: Using GB pages for direct mapping
May 10 01:43:02.902624 kernel: ACPI: Early table checksum verification disabled
May 10 01:43:02.902634 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
May 10 01:43:02.902644 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902654 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902664 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902674 kernel: ACPI: FACS 0x000000007FFDFD40 000040
May 10 01:43:02.902685 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902695 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902708 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902719 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 10 01:43:02.902729 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
May 10 01:43:02.902739 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
May 10 01:43:02.902749 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
May 10 01:43:02.902760 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
May 10 01:43:02.902775 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
May 10 01:43:02.902789 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
May 10 01:43:02.902800 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
May 10 01:43:02.902811 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 10 01:43:02.902822 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 10 01:43:02.902832 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
May 10 01:43:02.902843 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
May 10 01:43:02.902854 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
May 10 01:43:02.902868 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
May 10 01:43:02.902879 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
May 10 01:43:02.902889 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
May 10 01:43:02.902900 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
May 10 01:43:02.902911 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
May 10 01:43:02.902921 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
May 10 01:43:02.902932 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
May 10 01:43:02.902943 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
May 10 01:43:02.902953 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
May 10 01:43:02.902964 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
May 10 01:43:02.902978 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
May 10 01:43:02.902989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 10 01:43:02.903000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 10 01:43:02.903010 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
May 10 01:43:02.903033 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
May 10 01:43:02.903044 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
May 10 01:43:02.903055 kernel: Zone ranges:
May 10 01:43:02.903076 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 10 01:43:02.903088 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
May 10 01:43:02.903104 kernel: Normal empty
May 10 01:43:02.903115 kernel: Movable zone start for each node
May 10 01:43:02.903125 kernel: Early memory node ranges
May 10 01:43:02.903136 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 10 01:43:02.903147 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
May 10 01:43:02.903158 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
May 10 01:43:02.903168 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 10 01:43:02.903179 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 10 01:43:02.903189 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
May 10 01:43:02.903204 kernel: ACPI: PM-Timer IO Port: 0x608
May 10 01:43:02.903215 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 10 01:43:02.903226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 10 01:43:02.903236 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 10 01:43:02.903247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 10 01:43:02.903258 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 10 01:43:02.903268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 10 01:43:02.903279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 10 01:43:02.903290 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 10 01:43:02.903304 kernel: TSC deadline timer available
May 10 01:43:02.903315 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
May 10 01:43:02.903326 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 10 01:43:02.903337 kernel: Booting paravirtualized kernel on KVM
May 10 01:43:02.903347 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 10 01:43:02.903358 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
May 10 01:43:02.903382 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
May 10 01:43:02.903392 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
May 10 01:43:02.903403 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
May 10 01:43:02.903416 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
May 10 01:43:02.903427 kernel: kvm-guest: PV spinlocks enabled
May 10 01:43:02.903437 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 10 01:43:02.903460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
May 10 01:43:02.903471 kernel: Policy zone: DMA32
May 10 01:43:02.903482 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 01:43:02.903494 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 10 01:43:02.903505 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 10 01:43:02.903520 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 10 01:43:02.903531 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 10 01:43:02.903542 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 192524K reserved, 0K cma-reserved)
May 10 01:43:02.903553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
May 10 01:43:02.903564 kernel: Kernel/User page tables isolation: enabled
May 10 01:43:02.903575 kernel: ftrace: allocating 34584 entries in 136 pages
May 10 01:43:02.903585 kernel: ftrace: allocated 136 pages with 2 groups
May 10 01:43:02.903596 kernel: rcu: Hierarchical RCU implementation.
May 10 01:43:02.903607 kernel: rcu: RCU event tracing is enabled.
May 10 01:43:02.903622 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
May 10 01:43:02.903633 kernel: Rude variant of Tasks RCU enabled.
May 10 01:43:02.903644 kernel: Tracing variant of Tasks RCU enabled.
May 10 01:43:02.903655 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 10 01:43:02.903666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
May 10 01:43:02.903676 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
May 10 01:43:02.903687 kernel: random: crng init done
May 10 01:43:02.903709 kernel: Console: colour VGA+ 80x25
May 10 01:43:02.903764 kernel: printk: console [tty0] enabled
May 10 01:43:02.903779 kernel: printk: console [ttyS0] enabled
May 10 01:43:02.903790 kernel: ACPI: Core revision 20210730
May 10 01:43:02.903802 kernel: APIC: Switch to symmetric I/O mode setup
May 10 01:43:02.903818 kernel: x2apic enabled
May 10 01:43:02.903830 kernel: Switched APIC routing to physical x2apic.
May 10 01:43:02.903841 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
May 10 01:43:02.903853 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
May 10 01:43:02.903865 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 10 01:43:02.903880 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 10 01:43:02.903892 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 10 01:43:02.903903 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 10 01:43:02.903914 kernel: Spectre V2 : Mitigation: Retpolines
May 10 01:43:02.903926 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 10 01:43:02.903937 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 10 01:43:02.903948 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 10 01:43:02.903960 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
May 10 01:43:02.903971 kernel: MDS: Mitigation: Clear CPU buffers
May 10 01:43:02.903982 kernel: MMIO Stale Data: Unknown: No mitigations
May 10 01:43:02.903993 kernel: SRBDS: Unknown: Dependent on hypervisor status
May 10 01:43:02.904009 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 10 01:43:02.904031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 10 01:43:02.904043 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 10 01:43:02.907707 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 10 01:43:02.907723 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 10 01:43:02.907736 kernel: Freeing SMP alternatives memory: 32K
May 10 01:43:02.907747 kernel: pid_max: default: 32768 minimum: 301
May 10 01:43:02.907759 kernel: LSM: Security Framework initializing
May 10 01:43:02.907770 kernel: SELinux: Initializing.
May 10 01:43:02.907782 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 01:43:02.907793 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 10 01:43:02.907811 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
May 10 01:43:02.907823 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
May 10 01:43:02.907835 kernel: signal: max sigframe size: 1776
May 10 01:43:02.907846 kernel: rcu: Hierarchical SRCU implementation.
May 10 01:43:02.907858 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 10 01:43:02.907869 kernel: smp: Bringing up secondary CPUs ...
May 10 01:43:02.907881 kernel: x86: Booting SMP configuration:
May 10 01:43:02.907892 kernel: .... node #0, CPUs: #1
May 10 01:43:02.907903 kernel: kvm-clock: cpu 1, msr 70196041, secondary cpu clock
May 10 01:43:02.907919 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
May 10 01:43:02.907930 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
May 10 01:43:02.907942 kernel: smp: Brought up 1 node, 2 CPUs
May 10 01:43:02.907953 kernel: smpboot: Max logical packages: 16
May 10 01:43:02.907965 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
May 10 01:43:02.907976 kernel: devtmpfs: initialized
May 10 01:43:02.907987 kernel: x86/mm: Memory block size: 128MB
May 10 01:43:02.907999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 10 01:43:02.908011 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
May 10 01:43:02.908048 kernel: pinctrl core: initialized pinctrl subsystem
May 10 01:43:02.908060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 10 01:43:02.908084 kernel: audit: initializing netlink subsys (disabled)
May 10 01:43:02.908096 kernel: audit: type=2000 audit(1746841381.951:1): state=initialized audit_enabled=0 res=1
May 10 01:43:02.908108 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 10 01:43:02.908119 kernel: thermal_sys: Registered thermal governor 'user_space'
May 10 01:43:02.908130 kernel: cpuidle: using governor menu
May 10 01:43:02.908142 kernel: ACPI: bus type PCI registered
May 10 01:43:02.908153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 10 01:43:02.908169 kernel: dca service started, version 1.12.1
May 10 01:43:02.908181 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 10 01:43:02.908193 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
May 10 01:43:02.908204 kernel: PCI: Using configuration type 1 for base access
May 10 01:43:02.908226 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 10 01:43:02.908239 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 10 01:43:02.908250 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 10 01:43:02.908262 kernel: ACPI: Added _OSI(Module Device)
May 10 01:43:02.908273 kernel: ACPI: Added _OSI(Processor Device)
May 10 01:43:02.908289 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 10 01:43:02.908301 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 10 01:43:02.908312 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 10 01:43:02.908323 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 10 01:43:02.908335 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 10 01:43:02.908346 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 10 01:43:02.908357 kernel: ACPI: Interpreter enabled
May 10 01:43:02.908369 kernel: ACPI: PM: (supports S0 S5)
May 10 01:43:02.908380 kernel: ACPI: Using IOAPIC for interrupt routing
May 10 01:43:02.908395 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 10 01:43:02.908407 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 10 01:43:02.908418 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 10 01:43:02.908675 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 10 01:43:02.908827 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 10 01:43:02.908973 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 10 01:43:02.908990 kernel: PCI host bridge to bus 0000:00
May 10 01:43:02.909169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 10 01:43:02.909310 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 10 01:43:02.909470 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 10 01:43:02.909612 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
May 10 01:43:02.909761 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 10 01:43:02.909899 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
May 10 01:43:02.910059 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 10 01:43:02.910257 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 10 01:43:02.910426 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
May 10 01:43:02.910574 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
May 10 01:43:02.910718 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
May 10 01:43:02.910861 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
May 10 01:43:02.911004 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 10 01:43:02.911185 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.911349 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
May 10 01:43:02.911513 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.911672 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
May 10 01:43:02.911892 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.912054 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
May 10 01:43:02.912221 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.912371 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
May 10 01:43:02.912582 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.912750 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
May 10 01:43:02.912958 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.920221 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
May 10 01:43:02.920388 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.920546 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
May 10 01:43:02.920716 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 10 01:43:02.920875 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
May 10 01:43:02.921030 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 10 01:43:02.921210 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
May 10 01:43:02.921356 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
May 10 01:43:02.921507 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 10 01:43:02.921650 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
May 10 01:43:02.921801 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 10 01:43:02.921945 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 10 01:43:02.922118 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
May 10 01:43:02.922263 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
May 10 01:43:02.922437 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 10 01:43:02.922614 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 10 01:43:02.922786 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 10 01:43:02.922938 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
May 10 01:43:02.923113 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
May 10 01:43:02.923286 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 10 01:43:02.923451 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 10 01:43:02.923653 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
May 10 01:43:02.923852 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
May 10 01:43:02.924043 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
May 10 01:43:02.924203 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
May 10 01:43:02.924347 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 01:43:02.924517 kernel: pci_bus 0000:02: extended config space not accessible
May 10 01:43:02.924700 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
May 10 01:43:02.924893 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
May 10 01:43:02.925091 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
May 10 01:43:02.925243 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 10 01:43:02.925399 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
May 10 01:43:02.925548 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
May 10 01:43:02.925711 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
May 10 01:43:02.925865 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
May 10 01:43:02.926016 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 01:43:02.926205 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
May 10 01:43:02.926358 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 10 01:43:02.926504 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
May 10 01:43:02.926648 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
May 10 01:43:02.926791 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 01:43:02.926936 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
May 10 01:43:02.935184 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
May 10 01:43:02.935349 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 01:43:02.935501 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
May 10 01:43:02.935647 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
May 10 01:43:02.935792 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 01:43:02.935937 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
May 10 01:43:02.936112 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
May 10 01:43:02.936258 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 01:43:02.936451 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
May 10 01:43:02.936598 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
May 10 01:43:02.936743 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 01:43:02.936890 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
May 10 01:43:02.937071 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
May 10 01:43:02.937221 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 01:43:02.937239 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 10 01:43:02.937252 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 10 01:43:02.937263 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 10 01:43:02.937281 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 10 01:43:02.937293 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 10 01:43:02.937305 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 10 01:43:02.937317 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 10 01:43:02.937328 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 10 01:43:02.937339 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 10 01:43:02.937382 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 10 01:43:02.937394 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 10 01:43:02.937406 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 10 01:43:02.937423 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 10 01:43:02.937435 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 10 01:43:02.937446 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 10 01:43:02.937458 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 10 01:43:02.937470 kernel: iommu: Default domain type: Translated
May 10 01:43:02.937481 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 10 01:43:02.937631 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 10 01:43:02.937777 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 10 01:43:02.937929 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 10 01:43:02.937947 kernel: vgaarb: loaded
May 10 01:43:02.937959 kernel: pps_core: LinuxPPS API ver. 1 registered
May 10 01:43:02.937971 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 10 01:43:02.937983 kernel: PTP clock support registered
May 10 01:43:02.937994 kernel: PCI: Using ACPI for IRQ routing
May 10 01:43:02.938006 kernel: PCI: pci_cache_line_size set to 64 bytes
May 10 01:43:02.938034 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 10 01:43:02.938048 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
May 10 01:43:02.938075 kernel: clocksource: Switched to clocksource kvm-clock
May 10 01:43:02.938088 kernel: VFS: Disk quotas dquot_6.6.0
May 10 01:43:02.938100 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 10 01:43:02.938112 kernel: pnp: PnP ACPI init
May 10 01:43:02.938297 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 10 01:43:02.938317 kernel: pnp: PnP ACPI: found 5 devices
May 10 01:43:02.938329 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 10 01:43:02.938341 kernel: NET: Registered PF_INET protocol family
May 10 01:43:02.938358 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 10 01:43:02.938370 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 10 01:43:02.938382 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 10 01:43:02.938394 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 10 01:43:02.938405 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
May 10 01:43:02.938417 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 10 01:43:02.938428 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 01:43:02.938440 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 10 01:43:02.938451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 10 01:43:02.938468 kernel: NET: Registered PF_XDP protocol family
May 10 01:43:02.938614 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
May 10 01:43:02.938762 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 10 01:43:02.938908 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 10 01:43:02.939080 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 10 01:43:02.939230 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 10 01:43:02.939381 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 10 01:43:02.939527 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 10 01:43:02.939690 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 10 01:43:02.939839 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 10 01:43:02.939985 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 10 01:43:02.940196 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 10 01:43:02.940354 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 10 01:43:02.940508 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 10 01:43:02.940667 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 10 01:43:02.940852 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 10 01:43:02.941000 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 10 01:43:02.953231 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
May 10 01:43:02.953406 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 10 01:43:02.953580 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
May 10 01:43:02.953732 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 10 01:43:02.953894 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
May 10 01:43:02.954096 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 01:43:02.954256 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
May 10 01:43:02.954440 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 10 01:43:02.954601 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
May 10 01:43:02.954766 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 01:43:02.954936 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
May 10 01:43:02.955109 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 10 01:43:02.955256 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
May 10 01:43:02.955430 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 01:43:02.955579 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
May 10 01:43:02.955749 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 10 01:43:02.955903 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
May 10 01:43:02.956103 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 01:43:02.956259 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
May 10 01:43:02.956417 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 10 01:43:02.956565 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
May 10 01:43:02.956741 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 01:43:02.956894 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
May 10 01:43:02.957113 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 10 01:43:02.957264 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
May 10 01:43:02.957408 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 01:43:02.957589 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
May 10 01:43:02.957786 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 10 01:43:02.957944 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
May 10 01:43:02.958127 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 01:43:02.958287 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
May 10 01:43:02.958454 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 10 01:43:02.958612 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
May 10 01:43:02.958773 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 01:43:02.958916 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 10 01:43:02.959095 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 10 01:43:02.959280 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 10 01:43:02.959429 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
May 10 01:43:02.959570 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 10 01:43:02.959703 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
May 10 01:43:02.959859 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 10 01:43:02.960031 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
May 10 01:43:02.960195 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 10 01:43:02.960360 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
May 10 01:43:02.960533 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
May 10 01:43:02.960677 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
May 10 01:43:02.960824 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 10 01:43:02.960991 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
May 10 01:43:02.961187 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
May 10 01:43:02.961341 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 10 01:43:02.961503 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
May 10 01:43:02.961669 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
May 10 01:43:02.961844 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 10 01:43:02.962040 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
May 10 01:43:02.962223 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
May 10 01:43:02.962366 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 10 01:43:02.962542 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
May 10 01:43:02.962708 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
May 10 01:43:02.962874 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 10 01:43:02.964221 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
May 10 01:43:02.964371 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
May 10 01:43:02.964518 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 10 01:43:02.964666 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
May 10 01:43:02.964804 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
May 10 01:43:02.964949 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 10 01:43:02.964968 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 10 01:43:02.964981 kernel: PCI: CLS 0 bytes,
default 64 May 10 01:43:02.964993 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 10 01:43:02.965005 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) May 10 01:43:02.965035 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 10 01:43:02.965055 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns May 10 01:43:02.965078 kernel: Initialise system trusted keyrings May 10 01:43:02.965091 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 May 10 01:43:02.965103 kernel: Key type asymmetric registered May 10 01:43:02.965115 kernel: Asymmetric key parser 'x509' registered May 10 01:43:02.965127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 01:43:02.965140 kernel: io scheduler mq-deadline registered May 10 01:43:02.965157 kernel: io scheduler kyber registered May 10 01:43:02.965169 kernel: io scheduler bfq registered May 10 01:43:02.965317 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 10 01:43:02.965463 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 10 01:43:02.965609 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.965755 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 10 01:43:02.965899 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 10 01:43:02.966074 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.966231 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 10 01:43:02.966395 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 10 01:43:02.966558 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ May 10 01:43:02.966702 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 10 01:43:02.966847 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 10 01:43:02.966993 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.967169 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 10 01:43:02.967317 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 10 01:43:02.967464 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.967610 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 10 01:43:02.967762 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 10 01:43:02.967907 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.968085 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 10 01:43:02.968233 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 10 01:43:02.968377 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.968523 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 10 01:43:02.968667 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 10 01:43:02.968813 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 10 01:43:02.968838 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 01:43:02.968851 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 01:43:02.968863 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 01:43:02.968876 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 01:43:02.968888 
kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 01:43:02.968900 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 01:43:02.968913 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 01:43:02.968933 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 01:43:02.969123 kernel: rtc_cmos 00:03: RTC can wake from S4 May 10 01:43:02.969144 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 01:43:02.969280 kernel: rtc_cmos 00:03: registered as rtc0 May 10 01:43:02.969418 kernel: rtc_cmos 00:03: setting system clock to 2025-05-10T01:43:02 UTC (1746841382) May 10 01:43:02.969555 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 10 01:43:02.969573 kernel: intel_pstate: CPU model not supported May 10 01:43:02.969586 kernel: NET: Registered PF_INET6 protocol family May 10 01:43:02.969604 kernel: Segment Routing with IPv6 May 10 01:43:02.969616 kernel: In-situ OAM (IOAM) with IPv6 May 10 01:43:02.969629 kernel: NET: Registered PF_PACKET protocol family May 10 01:43:02.969644 kernel: Key type dns_resolver registered May 10 01:43:02.969657 kernel: IPI shorthand broadcast: enabled May 10 01:43:02.969669 kernel: sched_clock: Marking stable (965042020, 210713737)->(1438807516, -263051759) May 10 01:43:02.969681 kernel: registered taskstats version 1 May 10 01:43:02.969693 kernel: Loading compiled-in X.509 certificates May 10 01:43:02.969715 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 01:43:02.969731 kernel: Key type .fscrypt registered May 10 01:43:02.969743 kernel: Key type fscrypt-provisioning registered May 10 01:43:02.969755 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 10 01:43:02.969767 kernel: ima: Allocated hash algorithm: sha1 May 10 01:43:02.969779 kernel: ima: No architecture policies found May 10 01:43:02.969791 kernel: clk: Disabling unused clocks May 10 01:43:02.969803 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 01:43:02.969815 kernel: Write protecting the kernel read-only data: 28672k May 10 01:43:02.969827 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 01:43:02.969843 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 01:43:02.969856 kernel: Run /init as init process May 10 01:43:02.969876 kernel: with arguments: May 10 01:43:02.969888 kernel: /init May 10 01:43:02.969900 kernel: with environment: May 10 01:43:02.969912 kernel: HOME=/ May 10 01:43:02.969929 kernel: TERM=linux May 10 01:43:02.969942 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 01:43:02.969964 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 01:43:02.969985 systemd[1]: Detected virtualization kvm. May 10 01:43:02.969999 systemd[1]: Detected architecture x86-64. May 10 01:43:02.975576 systemd[1]: Running in initrd. May 10 01:43:02.975603 systemd[1]: No hostname configured, using default hostname. May 10 01:43:02.975616 systemd[1]: Hostname set to <localhost>. May 10 01:43:02.975642 systemd[1]: Initializing machine ID from VM UUID. May 10 01:43:02.975655 systemd[1]: Queued start job for default target initrd.target. May 10 01:43:02.975675 systemd[1]: Started systemd-ask-password-console.path. May 10 01:43:02.975692 systemd[1]: Reached target cryptsetup.target. May 10 01:43:02.975705 systemd[1]: Reached target paths.target. May 10 01:43:02.975717 systemd[1]: Reached target slices.target. 
May 10 01:43:02.975730 systemd[1]: Reached target swap.target. May 10 01:43:02.975742 systemd[1]: Reached target timers.target. May 10 01:43:02.975756 systemd[1]: Listening on iscsid.socket. May 10 01:43:02.975768 systemd[1]: Listening on iscsiuio.socket. May 10 01:43:02.975785 systemd[1]: Listening on systemd-journald-audit.socket. May 10 01:43:02.975798 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 01:43:02.975811 systemd[1]: Listening on systemd-journald.socket. May 10 01:43:02.975824 systemd[1]: Listening on systemd-networkd.socket. May 10 01:43:02.975836 systemd[1]: Listening on systemd-udevd-control.socket. May 10 01:43:02.975849 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 01:43:02.975862 systemd[1]: Reached target sockets.target. May 10 01:43:02.975874 systemd[1]: Starting kmod-static-nodes.service... May 10 01:43:02.975887 systemd[1]: Finished network-cleanup.service. May 10 01:43:02.975904 systemd[1]: Starting systemd-fsck-usr.service... May 10 01:43:02.975917 systemd[1]: Starting systemd-journald.service... May 10 01:43:02.975930 systemd[1]: Starting systemd-modules-load.service... May 10 01:43:02.975943 systemd[1]: Starting systemd-resolved.service... May 10 01:43:02.975955 systemd[1]: Starting systemd-vconsole-setup.service... May 10 01:43:02.975968 systemd[1]: Finished kmod-static-nodes.service. May 10 01:43:02.975981 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 10 01:43:02.976006 systemd-journald[202]: Journal started May 10 01:43:02.976126 systemd-journald[202]: Runtime Journal (/run/log/journal/a5189f173ee345feb0e5de839201eac1) is 4.7M, max 38.1M, 33.3M free. 
May 10 01:43:02.904455 systemd-modules-load[203]: Inserted module 'overlay' May 10 01:43:03.000002 kernel: Bridge firewalling registered May 10 01:43:02.952712 systemd-resolved[204]: Positive Trust Anchors: May 10 01:43:03.014141 systemd[1]: Started systemd-resolved.service. May 10 01:43:03.014169 kernel: audit: type=1130 audit(1746841383.000:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.014189 kernel: audit: type=1130 audit(1746841383.007:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.014215 systemd[1]: Started systemd-journald.service. May 10 01:43:03.014233 kernel: SCSI subsystem initialized May 10 01:43:03.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:02.952729 systemd-resolved[204]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 01:43:02.952781 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 01:43:03.029419 kernel: audit: type=1130 audit(1746841383.018:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:02.956637 systemd-resolved[204]: Defaulting to hostname 'linux'. May 10 01:43:03.041710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 10 01:43:03.041744 kernel: device-mapper: uevent: version 1.0.3 May 10 01:43:03.041763 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 10 01:43:03.041780 kernel: audit: type=1130 audit(1746841383.036:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 01:43:02.981922 systemd-modules-load[203]: Inserted module 'br_netfilter' May 10 01:43:03.047925 kernel: audit: type=1130 audit(1746841383.042:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.019126 systemd[1]: Finished systemd-fsck-usr.service. May 10 01:43:03.041274 systemd[1]: Finished systemd-vconsole-setup.service. May 10 01:43:03.042266 systemd-modules-load[203]: Inserted module 'dm_multipath' May 10 01:43:03.042554 systemd[1]: Reached target nss-lookup.target. May 10 01:43:03.059687 kernel: audit: type=1130 audit(1746841383.054:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.049607 systemd[1]: Starting dracut-cmdline-ask.service... May 10 01:43:03.066927 kernel: audit: type=1130 audit(1746841383.060:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.051470 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
May 10 01:43:03.053525 systemd[1]: Finished systemd-modules-load.service. May 10 01:43:03.059108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 01:43:03.065324 systemd[1]: Starting systemd-sysctl.service... May 10 01:43:03.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.075913 systemd[1]: Finished systemd-sysctl.service. May 10 01:43:03.081633 kernel: audit: type=1130 audit(1746841383.076:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.085883 systemd[1]: Finished dracut-cmdline-ask.service. May 10 01:43:03.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.093244 systemd[1]: Starting dracut-cmdline.service... May 10 01:43:03.106223 kernel: audit: type=1130 audit(1746841383.086:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.113392 dracut-cmdline[223]: dracut-dracut-053 May 10 01:43:03.117179 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 01:43:03.199078 kernel: Loading iSCSI transport class v2.0-870. 
May 10 01:43:03.219045 kernel: iscsi: registered transport (tcp) May 10 01:43:03.246793 kernel: iscsi: registered transport (qla4xxx) May 10 01:43:03.246859 kernel: QLogic iSCSI HBA Driver May 10 01:43:03.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.294489 systemd[1]: Finished dracut-cmdline.service. May 10 01:43:03.296270 systemd[1]: Starting dracut-pre-udev.service... May 10 01:43:03.353074 kernel: raid6: sse2x4 gen() 14544 MB/s May 10 01:43:03.371105 kernel: raid6: sse2x4 xor() 8350 MB/s May 10 01:43:03.389200 kernel: raid6: sse2x2 gen() 6279 MB/s May 10 01:43:03.407123 kernel: raid6: sse2x2 xor() 8359 MB/s May 10 01:43:03.425069 kernel: raid6: sse2x1 gen() 5358 MB/s May 10 01:43:03.443611 kernel: raid6: sse2x1 xor() 7620 MB/s May 10 01:43:03.443672 kernel: raid6: using algorithm sse2x4 gen() 14544 MB/s May 10 01:43:03.443688 kernel: raid6: .... xor() 8350 MB/s, rmw enabled May 10 01:43:03.444817 kernel: raid6: using ssse3x2 recovery algorithm May 10 01:43:03.462068 kernel: xor: automatically using best checksumming function avx May 10 01:43:03.573119 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 10 01:43:03.585770 systemd[1]: Finished dracut-pre-udev.service. May 10 01:43:03.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.587000 audit: BPF prog-id=7 op=LOAD May 10 01:43:03.587000 audit: BPF prog-id=8 op=LOAD May 10 01:43:03.587733 systemd[1]: Starting systemd-udevd.service... May 10 01:43:03.604919 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 10 01:43:03.613845 systemd[1]: Started systemd-udevd.service. 
May 10 01:43:03.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.617230 systemd[1]: Starting dracut-pre-trigger.service... May 10 01:43:03.634122 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation May 10 01:43:03.681205 systemd[1]: Finished dracut-pre-trigger.service. May 10 01:43:03.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.683027 systemd[1]: Starting systemd-udev-trigger.service... May 10 01:43:03.770701 systemd[1]: Finished systemd-udev-trigger.service. May 10 01:43:03.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:03.854087 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 10 01:43:03.923709 kernel: cryptd: max_cpu_qlen set to 1000 May 10 01:43:03.923744 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 10 01:43:03.923762 kernel: GPT:17805311 != 125829119 May 10 01:43:03.923781 kernel: GPT:Alternate GPT header not at the end of the disk. May 10 01:43:03.923805 kernel: GPT:17805311 != 125829119 May 10 01:43:03.923821 kernel: GPT: Use GNU Parted to correct GPT errors. May 10 01:43:03.923841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 01:43:03.923857 kernel: libata version 3.00 loaded. 
May 10 01:43:03.923877 kernel: ACPI: bus type USB registered May 10 01:43:03.925865 kernel: ahci 0000:00:1f.2: version 3.0 May 10 01:43:04.008457 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 10 01:43:04.008484 kernel: usbcore: registered new interface driver usbfs May 10 01:43:04.008516 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 10 01:43:04.008721 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 10 01:43:04.008904 kernel: usbcore: registered new interface driver hub May 10 01:43:04.008921 kernel: usbcore: registered new device driver usb May 10 01:43:04.008943 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) May 10 01:43:04.008965 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 01:43:04.009176 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 May 10 01:43:04.009345 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 10 01:43:04.009521 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 10 01:43:04.009688 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 May 10 01:43:04.009868 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed May 10 01:43:04.010083 kernel: hub 1-0:1.0: USB hub found May 10 01:43:04.010276 kernel: hub 1-0:1.0: 4 ports detected May 10 01:43:04.010458 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 10 01:43:04.010753 kernel: hub 2-0:1.0: USB hub found May 10 01:43:04.010987 kernel: hub 2-0:1.0: 4 ports detected May 10 01:43:04.011206 kernel: AVX version of gcm_enc/dec engaged. 
May 10 01:43:04.011225 kernel: AES CTR mode by8 optimization enabled May 10 01:43:04.011241 kernel: scsi host0: ahci May 10 01:43:04.011440 kernel: scsi host1: ahci May 10 01:43:04.011625 kernel: scsi host2: ahci May 10 01:43:04.011827 kernel: scsi host3: ahci May 10 01:43:04.012026 kernel: scsi host4: ahci May 10 01:43:04.012310 kernel: scsi host5: ahci May 10 01:43:04.012502 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 May 10 01:43:04.012522 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 May 10 01:43:04.012538 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 May 10 01:43:04.012561 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 May 10 01:43:04.012582 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 May 10 01:43:04.012605 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 May 10 01:43:03.965426 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 10 01:43:03.976586 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 10 01:43:03.981510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 10 01:43:04.072775 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 10 01:43:04.077997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 01:43:04.079892 systemd[1]: Starting disk-uuid.service... May 10 01:43:04.086631 disk-uuid[528]: Primary Header is updated. May 10 01:43:04.086631 disk-uuid[528]: Secondary Entries is updated. May 10 01:43:04.086631 disk-uuid[528]: Secondary Header is updated. 
May 10 01:43:04.091059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 01:43:04.097073 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 01:43:04.103058 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 01:43:04.223070 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 10 01:43:04.324402 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.324490 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.325049 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.329231 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.329279 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.330062 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 10 01:43:04.363057 kernel: hid: raw HID events driver (C) Jiri Kosina May 10 01:43:04.370228 kernel: usbcore: registered new interface driver usbhid May 10 01:43:04.370275 kernel: usbhid: USB HID core driver May 10 01:43:04.379236 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 May 10 01:43:04.379283 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 May 10 01:43:05.103074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 10 01:43:05.104453 disk-uuid[529]: The operation has completed successfully. May 10 01:43:05.159449 systemd[1]: disk-uuid.service: Deactivated successfully. May 10 01:43:05.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.159626 systemd[1]: Finished disk-uuid.service. 
May 10 01:43:05.165967 systemd[1]: Starting verity-setup.service... May 10 01:43:05.185074 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" May 10 01:43:05.240660 systemd[1]: Found device dev-mapper-usr.device. May 10 01:43:05.242577 systemd[1]: Mounting sysusr-usr.mount... May 10 01:43:05.244605 systemd[1]: Finished verity-setup.service. May 10 01:43:05.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.336044 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 10 01:43:05.337077 systemd[1]: Mounted sysusr-usr.mount. May 10 01:43:05.337944 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 10 01:43:05.339034 systemd[1]: Starting ignition-setup.service... May 10 01:43:05.341941 systemd[1]: Starting parse-ip-for-networkd.service... May 10 01:43:05.357064 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 10 01:43:05.357131 kernel: BTRFS info (device vda6): using free space tree May 10 01:43:05.357151 kernel: BTRFS info (device vda6): has skinny extents May 10 01:43:05.375942 systemd[1]: mnt-oem.mount: Deactivated successfully. May 10 01:43:05.382572 systemd[1]: Finished ignition-setup.service. May 10 01:43:05.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.384561 systemd[1]: Starting ignition-fetch-offline.service... May 10 01:43:05.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 01:43:05.498379 systemd[1]: Finished parse-ip-for-networkd.service. May 10 01:43:05.503000 audit: BPF prog-id=9 op=LOAD May 10 01:43:05.504208 systemd[1]: Starting systemd-networkd.service... May 10 01:43:05.544172 systemd-networkd[709]: lo: Link UP May 10 01:43:05.545085 systemd-networkd[709]: lo: Gained carrier May 10 01:43:05.546180 systemd-networkd[709]: Enumeration completed May 10 01:43:05.546574 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 01:43:05.548090 systemd[1]: Started systemd-networkd.service. May 10 01:43:05.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.551198 systemd[1]: Reached target network.target. May 10 01:43:05.551312 systemd-networkd[709]: eth0: Link UP May 10 01:43:05.551319 systemd-networkd[709]: eth0: Gained carrier May 10 01:43:05.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:05.570887 ignition[623]: Ignition 2.14.0 May 10 01:43:05.553234 systemd[1]: Starting iscsiuio.service... May 10 01:43:05.570918 ignition[623]: Stage: fetch-offline May 10 01:43:05.567248 systemd[1]: Started iscsiuio.service. May 10 01:43:05.571083 ignition[623]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 10 01:43:05.571963 systemd[1]: Starting iscsid.service... 
May 10 01:43:05.571132 ignition[623]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:05.572790 ignition[623]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:05.572944 ignition[623]: parsed url from cmdline: ""
May 10 01:43:05.578773 iscsid[714]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 10 01:43:05.578773 iscsid[714]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 10 01:43:05.578773 iscsid[714]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 10 01:43:05.578773 iscsid[714]: If using hardware iscsi like qla4xxx this message can be ignored.
May 10 01:43:05.578773 iscsid[714]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 10 01:43:05.578773 iscsid[714]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 10 01:43:05.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.572951 ignition[623]: no config URL provided
May 10 01:43:05.580388 systemd[1]: Finished ignition-fetch-offline.service.
May 10 01:43:05.572961 ignition[623]: reading system config file "/usr/lib/ignition/user.ign"
May 10 01:43:05.583197 systemd[1]: Started iscsid.service.
May 10 01:43:05.572976 ignition[623]: no config at "/usr/lib/ignition/user.ign"
May 10 01:43:05.585408 systemd[1]: Starting dracut-initqueue.service...
May 10 01:43:05.572985 ignition[623]: failed to fetch config: resource requires networking
May 10 01:43:05.593039 systemd[1]: Starting ignition-fetch.service...
May 10 01:43:05.574907 ignition[623]: Ignition finished successfully
May 10 01:43:05.603197 systemd-networkd[709]: eth0: DHCPv4 address 10.230.47.106/30, gateway 10.230.47.105 acquired from 10.230.47.105
May 10 01:43:05.608379 systemd[1]: Finished dracut-initqueue.service.
May 10 01:43:05.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.609226 systemd[1]: Reached target remote-fs-pre.target.
May 10 01:43:05.610210 systemd[1]: Reached target remote-cryptsetup.target.
May 10 01:43:05.611925 systemd[1]: Reached target remote-fs.target.
May 10 01:43:05.615096 systemd[1]: Starting dracut-pre-mount.service...
May 10 01:43:05.620692 ignition[717]: Ignition 2.14.0
May 10 01:43:05.621598 ignition[717]: Stage: fetch
May 10 01:43:05.622408 ignition[717]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:05.623323 ignition[717]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:05.624655 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:05.625698 ignition[717]: parsed url from cmdline: ""
May 10 01:43:05.625793 ignition[717]: no config URL provided
May 10 01:43:05.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.626813 systemd[1]: Finished dracut-pre-mount.service.
May 10 01:43:05.628366 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
May 10 01:43:05.629267 ignition[717]: no config at "/usr/lib/ignition/user.ign"
May 10 01:43:05.632397 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 10 01:43:05.632463 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 10 01:43:05.633406 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 10 01:43:05.653750 ignition[717]: GET result: OK
May 10 01:43:05.654827 ignition[717]: parsing config with SHA512: a2e4fa61acfdc6e1e5bb1d4e902f86e1491d0bf543f022a96bbc2f2a19a1f5d619af5101e91e45028225a79ce16fa255e992eb6f49ca02ea0ac6d3119c5d3702
May 10 01:43:05.661864 unknown[717]: fetched base config from "system"
May 10 01:43:05.661912 unknown[717]: fetched base config from "system"
May 10 01:43:05.662447 ignition[717]: fetch: fetch complete
May 10 01:43:05.661927 unknown[717]: fetched user config from "openstack"
May 10 01:43:05.662457 ignition[717]: fetch: fetch passed
May 10 01:43:05.664578 systemd[1]: Finished ignition-fetch.service.
May 10 01:43:05.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.662535 ignition[717]: Ignition finished successfully
May 10 01:43:05.666678 systemd[1]: Starting ignition-kargs.service...
May 10 01:43:05.680178 ignition[734]: Ignition 2.14.0
May 10 01:43:05.680199 ignition[734]: Stage: kargs
May 10 01:43:05.680370 ignition[734]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:05.680407 ignition[734]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:05.681631 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:05.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.683875 systemd[1]: Finished ignition-kargs.service.
May 10 01:43:05.682741 ignition[734]: kargs: kargs passed
May 10 01:43:05.685831 systemd[1]: Starting ignition-disks.service...
May 10 01:43:05.682810 ignition[734]: Ignition finished successfully
May 10 01:43:05.696760 ignition[739]: Ignition 2.14.0
May 10 01:43:05.696777 ignition[739]: Stage: disks
May 10 01:43:05.696940 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:05.696976 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:05.698340 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:05.699485 ignition[739]: disks: disks passed
May 10 01:43:05.699562 ignition[739]: Ignition finished successfully
May 10 01:43:05.701042 systemd[1]: Finished ignition-disks.service.
May 10 01:43:05.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.702217 systemd[1]: Reached target initrd-root-device.target.
May 10 01:43:05.703255 systemd[1]: Reached target local-fs-pre.target.
May 10 01:43:05.704440 systemd[1]: Reached target local-fs.target.
May 10 01:43:05.705600 systemd[1]: Reached target sysinit.target.
May 10 01:43:05.706752 systemd[1]: Reached target basic.target.
May 10 01:43:05.709405 systemd[1]: Starting systemd-fsck-root.service...
May 10 01:43:05.730067 systemd-fsck[746]: ROOT: clean, 623/1628000 files, 124060/1617920 blocks
May 10 01:43:05.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.734798 systemd[1]: Finished systemd-fsck-root.service.
May 10 01:43:05.737657 systemd[1]: Mounting sysroot.mount...
May 10 01:43:05.752044 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 10 01:43:05.752560 systemd[1]: Mounted sysroot.mount.
May 10 01:43:05.753971 systemd[1]: Reached target initrd-root-fs.target.
May 10 01:43:05.756870 systemd[1]: Mounting sysroot-usr.mount...
May 10 01:43:05.759220 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 10 01:43:05.761436 systemd[1]: Starting flatcar-openstack-hostname.service...
May 10 01:43:05.763079 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 01:43:05.764616 systemd[1]: Reached target ignition-diskful.target.
May 10 01:43:05.767445 systemd[1]: Mounted sysroot-usr.mount.
May 10 01:43:05.770486 systemd[1]: Starting initrd-setup-root.service...
May 10 01:43:05.777386 initrd-setup-root[757]: cut: /sysroot/etc/passwd: No such file or directory
May 10 01:43:05.795265 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory
May 10 01:43:05.806438 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory
May 10 01:43:05.815120 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 01:43:05.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.880278 systemd[1]: Finished initrd-setup-root.service.
May 10 01:43:05.882319 systemd[1]: Starting ignition-mount.service...
May 10 01:43:05.891162 systemd[1]: Starting sysroot-boot.service...
May 10 01:43:05.897565 bash[800]: umount: /sysroot/usr/share/oem: not mounted.
May 10 01:43:05.905199 coreos-metadata[752]: May 10 01:43:05.905 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 10 01:43:05.917054 ignition[802]: INFO : Ignition 2.14.0
May 10 01:43:05.918278 ignition[802]: INFO : Stage: mount
May 10 01:43:05.919242 ignition[802]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:05.920203 ignition[802]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:05.924904 coreos-metadata[752]: May 10 01:43:05.924 INFO Fetch successful
May 10 01:43:05.923986 systemd[1]: Finished sysroot-boot.service.
May 10 01:43:05.926357 coreos-metadata[752]: May 10 01:43:05.926 INFO wrote hostname srv-yxh38.gb1.brightbox.com to /sysroot/etc/hostname
May 10 01:43:05.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.927799 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:05.928432 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 10 01:43:05.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:05.928574 systemd[1]: Finished flatcar-openstack-hostname.service.
May 10 01:43:05.932532 ignition[802]: INFO : mount: mount passed
May 10 01:43:05.933322 ignition[802]: INFO : Ignition finished successfully
May 10 01:43:05.935094 systemd[1]: Finished ignition-mount.service.
May 10 01:43:05.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:06.264635 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 01:43:06.277066 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (809)
May 10 01:43:06.282071 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 01:43:06.282134 kernel: BTRFS info (device vda6): using free space tree
May 10 01:43:06.282153 kernel: BTRFS info (device vda6): has skinny extents
May 10 01:43:06.288485 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 01:43:06.290357 systemd[1]: Starting ignition-files.service...
May 10 01:43:06.311386 ignition[829]: INFO : Ignition 2.14.0
May 10 01:43:06.311386 ignition[829]: INFO : Stage: files
May 10 01:43:06.313133 ignition[829]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:06.313133 ignition[829]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:06.313133 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:06.316332 ignition[829]: DEBUG : files: compiled without relabeling support, skipping
May 10 01:43:06.318113 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 01:43:06.319062 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 01:43:06.324106 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 01:43:06.325598 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 01:43:06.327737 unknown[829]: wrote ssh authorized keys file for user: core
May 10 01:43:06.328751 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 01:43:06.330428 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 10 01:43:06.331681 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 10 01:43:06.333069 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 01:43:06.334110 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 01:43:06.334110 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 01:43:06.334110 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 01:43:06.334110 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 01:43:06.338949 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
May 10 01:43:06.835433 systemd-networkd[709]: eth0: Gained IPv6LL
May 10 01:43:06.991598 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 10 01:43:07.707174 systemd-networkd[709]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8bda:24:19ff:fee6:2f6a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8bda:24:19ff:fee6:2f6a/64 assigned by NDisc.
May 10 01:43:07.707188 systemd-networkd[709]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
May 10 01:43:08.315454 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
May 10 01:43:08.321151 ignition[829]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service"
May 10 01:43:08.321151 ignition[829]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 10 01:43:08.321151 ignition[829]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 01:43:08.321151 ignition[829]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 10 01:43:08.326238 ignition[829]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 01:43:08.328096 ignition[829]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 01:43:08.328096 ignition[829]: INFO : files: files passed
May 10 01:43:08.328096 ignition[829]: INFO : Ignition finished successfully
May 10 01:43:08.339705 kernel: kauditd_printk_skb: 28 callbacks suppressed
May 10 01:43:08.339743 kernel: audit: type=1130 audit(1746841388.332:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.329465 systemd[1]: Finished ignition-files.service.
May 10 01:43:08.334890 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 10 01:43:08.340913 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 10 01:43:08.342233 systemd[1]: Starting ignition-quench.service...
May 10 01:43:08.348499 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 10 01:43:08.366119 kernel: audit: type=1130 audit(1746841388.349:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.366150 kernel: audit: type=1131 audit(1746841388.349:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.366168 kernel: audit: type=1130 audit(1746841388.360:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.348393 systemd[1]: ignition-quench.service: Deactivated successfully.
May 10 01:43:08.348515 systemd[1]: Finished ignition-quench.service.
May 10 01:43:08.354055 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 10 01:43:08.360200 systemd[1]: Reached target ignition-complete.target.
May 10 01:43:08.367765 systemd[1]: Starting initrd-parse-etc.service...
May 10 01:43:08.386895 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 10 01:43:08.387905 systemd[1]: Finished initrd-parse-etc.service.
May 10 01:43:08.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.389548 systemd[1]: Reached target initrd-fs.target.
May 10 01:43:08.401120 kernel: audit: type=1130 audit(1746841388.389:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.401163 kernel: audit: type=1131 audit(1746841388.389:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.399588 systemd[1]: Reached target initrd.target.
May 10 01:43:08.400246 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 10 01:43:08.402404 systemd[1]: Starting dracut-pre-pivot.service...
May 10 01:43:08.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.418183 systemd[1]: Finished dracut-pre-pivot.service.
May 10 01:43:08.424679 kernel: audit: type=1130 audit(1746841388.418:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.420244 systemd[1]: Starting initrd-cleanup.service...
May 10 01:43:08.433317 systemd[1]: Stopped target nss-lookup.target.
May 10 01:43:08.434714 systemd[1]: Stopped target remote-cryptsetup.target.
May 10 01:43:08.436147 systemd[1]: Stopped target timers.target.
May 10 01:43:08.437458 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 10 01:43:08.438252 systemd[1]: Stopped dracut-pre-pivot.service.
May 10 01:43:08.444091 kernel: audit: type=1131 audit(1746841388.439:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.439183 systemd[1]: Stopped target initrd.target.
May 10 01:43:08.444767 systemd[1]: Stopped target basic.target.
May 10 01:43:08.445947 systemd[1]: Stopped target ignition-complete.target.
May 10 01:43:08.447229 systemd[1]: Stopped target ignition-diskful.target.
May 10 01:43:08.448504 systemd[1]: Stopped target initrd-root-device.target.
May 10 01:43:08.449714 systemd[1]: Stopped target remote-fs.target.
May 10 01:43:08.450931 systemd[1]: Stopped target remote-fs-pre.target.
May 10 01:43:08.452239 systemd[1]: Stopped target sysinit.target.
May 10 01:43:08.453478 systemd[1]: Stopped target local-fs.target.
May 10 01:43:08.454643 systemd[1]: Stopped target local-fs-pre.target.
May 10 01:43:08.455845 systemd[1]: Stopped target swap.target.
May 10 01:43:08.456917 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 10 01:43:08.462964 kernel: audit: type=1131 audit(1746841388.458:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.457204 systemd[1]: Stopped dracut-pre-mount.service.
May 10 01:43:08.458289 systemd[1]: Stopped target cryptsetup.target.
May 10 01:43:08.483085 kernel: audit: type=1131 audit(1746841388.478:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.463802 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 10 01:43:08.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.464035 systemd[1]: Stopped dracut-initqueue.service.
May 10 01:43:08.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.478293 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 10 01:43:08.478564 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 10 01:43:08.483994 systemd[1]: ignition-files.service: Deactivated successfully.
May 10 01:43:08.484212 systemd[1]: Stopped ignition-files.service.
May 10 01:43:08.486575 systemd[1]: Stopping ignition-mount.service...
May 10 01:43:08.495409 systemd[1]: Stopping sysroot-boot.service...
May 10 01:43:08.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.505158 ignition[867]: INFO : Ignition 2.14.0
May 10 01:43:08.505158 ignition[867]: INFO : Stage: umount
May 10 01:43:08.505158 ignition[867]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 10 01:43:08.505158 ignition[867]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 10 01:43:08.505158 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 10 01:43:08.505158 ignition[867]: INFO : umount: umount passed
May 10 01:43:08.505158 ignition[867]: INFO : Ignition finished successfully
May 10 01:43:08.497705 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 10 01:43:08.497993 systemd[1]: Stopped systemd-udev-trigger.service.
May 10 01:43:08.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.504682 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 10 01:43:08.511169 systemd[1]: Stopped dracut-pre-trigger.service.
May 10 01:43:08.514467 systemd[1]: ignition-mount.service: Deactivated successfully.
May 10 01:43:08.514603 systemd[1]: Stopped ignition-mount.service.
May 10 01:43:08.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.517347 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 10 01:43:08.517477 systemd[1]: Finished initrd-cleanup.service.
May 10 01:43:08.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.519810 systemd[1]: ignition-disks.service: Deactivated successfully.
May 10 01:43:08.519886 systemd[1]: Stopped ignition-disks.service.
May 10 01:43:08.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.521954 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 10 01:43:08.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.522025 systemd[1]: Stopped ignition-kargs.service.
May 10 01:43:08.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.524265 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 10 01:43:08.524338 systemd[1]: Stopped ignition-fetch.service.
May 10 01:43:08.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.525463 systemd[1]: Stopped target network.target.
May 10 01:43:08.526602 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 10 01:43:08.526668 systemd[1]: Stopped ignition-fetch-offline.service.
May 10 01:43:08.527999 systemd[1]: Stopped target paths.target.
May 10 01:43:08.529176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 10 01:43:08.533098 systemd[1]: Stopped systemd-ask-password-console.path.
May 10 01:43:08.533747 systemd[1]: Stopped target slices.target.
May 10 01:43:08.534939 systemd[1]: Stopped target sockets.target.
May 10 01:43:08.536204 systemd[1]: iscsid.socket: Deactivated successfully.
May 10 01:43:08.536246 systemd[1]: Closed iscsid.socket.
May 10 01:43:08.537289 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 10 01:43:08.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.537335 systemd[1]: Closed iscsiuio.socket.
May 10 01:43:08.538434 systemd[1]: ignition-setup.service: Deactivated successfully.
May 10 01:43:08.538513 systemd[1]: Stopped ignition-setup.service.
May 10 01:43:08.540485 systemd[1]: Stopping systemd-networkd.service...
May 10 01:43:08.541386 systemd[1]: Stopping systemd-resolved.service...
May 10 01:43:08.544070 systemd-networkd[709]: eth0: DHCPv6 lease lost
May 10 01:43:08.546451 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 10 01:43:08.547213 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 10 01:43:08.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.547364 systemd[1]: Stopped systemd-resolved.service.
May 10 01:43:08.549473 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 10 01:43:08.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.551000 audit: BPF prog-id=9 op=UNLOAD
May 10 01:43:08.551000 audit: BPF prog-id=6 op=UNLOAD
May 10 01:43:08.549649 systemd[1]: Stopped systemd-networkd.service.
May 10 01:43:08.551531 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 10 01:43:08.551582 systemd[1]: Closed systemd-networkd.socket.
May 10 01:43:08.553046 systemd[1]: Stopping network-cleanup.service...
May 10 01:43:08.555791 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 10 01:43:08.555883 systemd[1]: Stopped parse-ip-for-networkd.service.
May 10 01:43:08.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.557367 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 10 01:43:08.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.557438 systemd[1]: Stopped systemd-sysctl.service.
May 10 01:43:08.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.559757 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 10 01:43:08.559815 systemd[1]: Stopped systemd-modules-load.service.
May 10 01:43:08.564979 systemd[1]: Stopping systemd-udevd.service...
May 10 01:43:08.569542 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 10 01:43:08.570793 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 10 01:43:08.571042 systemd[1]: Stopped systemd-udevd.service.
May 10 01:43:08.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.574571 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 10 01:43:08.574650 systemd[1]: Closed systemd-udevd-control.socket.
May 10 01:43:08.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.575308 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 10 01:43:08.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.575359 systemd[1]: Closed systemd-udevd-kernel.socket.
May 10 01:43:08.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.576558 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 10 01:43:08.576626 systemd[1]: Stopped dracut-pre-udev.service.
May 10 01:43:08.577914 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 10 01:43:08.577985 systemd[1]: Stopped dracut-cmdline.service.
May 10 01:43:08.579089 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 10 01:43:08.579146 systemd[1]: Stopped dracut-cmdline-ask.service.
May 10 01:43:08.581834 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 10 01:43:08.589534 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 10 01:43:08.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.589605 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 10 01:43:08.591983 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 10 01:43:08.592085 systemd[1]: Stopped kmod-static-nodes.service.
May 10 01:43:08.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.593647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 10 01:43:08.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.593722 systemd[1]: Stopped systemd-vconsole-setup.service.
May 10 01:43:08.596126 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 10 01:43:08.596928 systemd[1]: network-cleanup.service: Deactivated successfully.
May 10 01:43:08.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.597095 systemd[1]: Stopped network-cleanup.service.
May 10 01:43:08.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.598171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 10 01:43:08.598281 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 10 01:43:08.644050 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 10 01:43:08.644210 systemd[1]: Stopped sysroot-boot.service.
May 10 01:43:08.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.645805 systemd[1]: Reached target initrd-switch-root.target.
May 10 01:43:08.646867 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 10 01:43:08.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:08.646955 systemd[1]: Stopped initrd-setup-root.service.
May 10 01:43:08.649526 systemd[1]: Starting initrd-switch-root.service...
May 10 01:43:08.666071 systemd[1]: Switching root.
May 10 01:43:08.688469 iscsid[714]: iscsid shutting down.
May 10 01:43:08.689205 systemd-journald[202]: Received SIGTERM from PID 1 (n/a).
May 10 01:43:08.689349 systemd-journald[202]: Journal stopped
May 10 01:43:12.532260 kernel: SELinux: Class mctp_socket not defined in policy.
May 10 01:43:12.532366 kernel: SELinux: Class anon_inode not defined in policy.
May 10 01:43:12.532407 kernel: SELinux: the above unknown classes and permissions will be allowed
May 10 01:43:12.532441 kernel: SELinux: policy capability network_peer_controls=1
May 10 01:43:12.532465 kernel: SELinux: policy capability open_perms=1
May 10 01:43:12.532510 kernel: SELinux: policy capability extended_socket_class=1
May 10 01:43:12.532530 kernel: SELinux: policy capability always_check_network=0
May 10 01:43:12.532552 kernel: SELinux: policy capability cgroup_seclabel=1
May 10 01:43:12.532570 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 10 01:43:12.532587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 10 01:43:12.532626 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 10 01:43:12.532653 systemd[1]: Successfully loaded SELinux policy in 65.337ms.
May 10 01:43:12.532709 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.931ms.
May 10 01:43:12.532742 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 10 01:43:12.532768 systemd[1]: Detected virtualization kvm.
May 10 01:43:12.532787 systemd[1]: Detected architecture x86-64.
May 10 01:43:12.532806 systemd[1]: Detected first boot.
May 10 01:43:12.532832 systemd[1]: Hostname set to .
May 10 01:43:12.532863 systemd[1]: Initializing machine ID from VM UUID.
May 10 01:43:12.532895 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 10 01:43:12.532917 systemd[1]: Populated /etc with preset unit settings.
May 10 01:43:12.532943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 10 01:43:12.532976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 10 01:43:12.532999 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 10 01:43:12.536054 systemd[1]: iscsiuio.service: Deactivated successfully.
May 10 01:43:12.536099 systemd[1]: Stopped iscsiuio.service.
May 10 01:43:12.536130 systemd[1]: iscsid.service: Deactivated successfully.
May 10 01:43:12.536150 systemd[1]: Stopped iscsid.service.
May 10 01:43:12.536169 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 10 01:43:12.536188 systemd[1]: Stopped initrd-switch-root.service.
May 10 01:43:12.536217 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 10 01:43:12.536235 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 10 01:43:12.536266 systemd[1]: Created slice system-addon\x2drun.slice.
May 10 01:43:12.536302 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 10 01:43:12.536349 systemd[1]: Created slice system-getty.slice.
May 10 01:43:12.536376 systemd[1]: Created slice system-modprobe.slice.
May 10 01:43:12.536408 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 10 01:43:12.536426 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 10 01:43:12.536453 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 10 01:43:12.536472 systemd[1]: Created slice user.slice.
May 10 01:43:12.536490 systemd[1]: Started systemd-ask-password-console.path.
May 10 01:43:12.536508 systemd[1]: Started systemd-ask-password-wall.path.
May 10 01:43:12.536525 systemd[1]: Set up automount boot.automount.
May 10 01:43:12.536561 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 10 01:43:12.536580 systemd[1]: Stopped target initrd-switch-root.target.
May 10 01:43:12.536608 systemd[1]: Stopped target initrd-fs.target.
May 10 01:43:12.536628 systemd[1]: Stopped target initrd-root-fs.target.
May 10 01:43:12.536646 systemd[1]: Reached target integritysetup.target.
May 10 01:43:12.536675 systemd[1]: Reached target remote-cryptsetup.target.
May 10 01:43:12.536692 systemd[1]: Reached target remote-fs.target.
May 10 01:43:12.536713 systemd[1]: Reached target slices.target.
May 10 01:43:12.536730 systemd[1]: Reached target swap.target.
May 10 01:43:12.536747 systemd[1]: Reached target torcx.target.
May 10 01:43:12.536764 systemd[1]: Reached target veritysetup.target.
May 10 01:43:12.536794 systemd[1]: Listening on systemd-coredump.socket.
May 10 01:43:12.536813 systemd[1]: Listening on systemd-initctl.socket.
May 10 01:43:12.536854 systemd[1]: Listening on systemd-networkd.socket.
May 10 01:43:12.536876 systemd[1]: Listening on systemd-udevd-control.socket.
May 10 01:43:12.536901 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 10 01:43:12.536922 systemd[1]: Listening on systemd-userdbd.socket.
May 10 01:43:12.536945 systemd[1]: Mounting dev-hugepages.mount...
May 10 01:43:12.536966 systemd[1]: Mounting dev-mqueue.mount...
May 10 01:43:12.536984 systemd[1]: Mounting media.mount...
May 10 01:43:12.537003 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 01:43:12.541482 systemd[1]: Mounting sys-kernel-debug.mount...
May 10 01:43:12.541517 systemd[1]: Mounting sys-kernel-tracing.mount...
May 10 01:43:12.541538 systemd[1]: Mounting tmp.mount...
May 10 01:43:12.541565 systemd[1]: Starting flatcar-tmpfiles.service...
May 10 01:43:12.541594 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 01:43:12.541620 systemd[1]: Starting kmod-static-nodes.service...
May 10 01:43:12.541641 systemd[1]: Starting modprobe@configfs.service...
May 10 01:43:12.541660 systemd[1]: Starting modprobe@dm_mod.service...
May 10 01:43:12.541691 systemd[1]: Starting modprobe@drm.service...
May 10 01:43:12.541712 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 01:43:12.541732 systemd[1]: Starting modprobe@fuse.service...
May 10 01:43:12.541750 systemd[1]: Starting modprobe@loop.service...
May 10 01:43:12.541770 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 10 01:43:12.541797 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 10 01:43:12.541828 systemd[1]: Stopped systemd-fsck-root.service.
May 10 01:43:12.541866 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 10 01:43:12.541888 systemd[1]: Stopped systemd-fsck-usr.service.
May 10 01:43:12.541918 systemd[1]: Stopped systemd-journald.service.
May 10 01:43:12.541939 systemd[1]: Starting systemd-journald.service...
May 10 01:43:12.541958 kernel: fuse: init (API version 7.34)
May 10 01:43:12.541976 systemd[1]: Starting systemd-modules-load.service...
May 10 01:43:12.541995 systemd[1]: Starting systemd-network-generator.service...
May 10 01:43:12.542139 systemd[1]: Starting systemd-remount-fs.service...
May 10 01:43:12.542167 systemd[1]: Starting systemd-udev-trigger.service...
May 10 01:43:12.542186 systemd[1]: verity-setup.service: Deactivated successfully.
May 10 01:43:12.542213 systemd[1]: Stopped verity-setup.service.
May 10 01:43:12.542234 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 01:43:12.542266 systemd[1]: Mounted dev-hugepages.mount.
May 10 01:43:12.542287 systemd[1]: Mounted dev-mqueue.mount.
May 10 01:43:12.542306 systemd[1]: Mounted media.mount.
May 10 01:43:12.542324 systemd[1]: Mounted sys-kernel-debug.mount.
May 10 01:43:12.542343 systemd[1]: Mounted sys-kernel-tracing.mount.
May 10 01:43:12.542362 systemd[1]: Mounted tmp.mount.
May 10 01:43:12.542382 systemd[1]: Finished kmod-static-nodes.service.
May 10 01:43:12.542411 kernel: loop: module loaded
May 10 01:43:12.542433 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 01:43:12.542458 systemd[1]: Finished modprobe@configfs.service.
May 10 01:43:12.542478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 01:43:12.542503 systemd[1]: Finished modprobe@dm_mod.service.
May 10 01:43:12.542528 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 01:43:12.542561 systemd-journald[972]: Journal started
May 10 01:43:12.542657 systemd-journald[972]: Runtime Journal (/run/log/journal/a5189f173ee345feb0e5de839201eac1) is 4.7M, max 38.1M, 33.3M free.
May 10 01:43:08.859000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 01:43:08.957000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 01:43:08.957000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 01:43:08.957000 audit: BPF prog-id=10 op=LOAD
May 10 01:43:08.957000 audit: BPF prog-id=10 op=UNLOAD
May 10 01:43:08.957000 audit: BPF prog-id=11 op=LOAD
May 10 01:43:08.957000 audit: BPF prog-id=11 op=UNLOAD
May 10 01:43:09.087000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 10 01:43:09.087000 audit[899]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 01:43:09.087000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 01:43:09.090000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 10 01:43:09.090000 audit[899]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 01:43:09.090000 audit: CWD cwd="/"
May 10 01:43:09.090000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 01:43:09.090000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 01:43:09.090000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 01:43:12.269000 audit: BPF prog-id=12 op=LOAD
May 10 01:43:12.269000 audit: BPF prog-id=3 op=UNLOAD
May 10 01:43:12.269000 audit: BPF prog-id=13 op=LOAD
May 10 01:43:12.270000 audit: BPF prog-id=14 op=LOAD
May 10 01:43:12.270000 audit: BPF prog-id=4 op=UNLOAD
May 10 01:43:12.270000 audit: BPF prog-id=5 op=UNLOAD
May 10 01:43:12.273000 audit: BPF prog-id=15 op=LOAD
May 10 01:43:12.273000 audit: BPF prog-id=12 op=UNLOAD
May 10 01:43:12.273000 audit: BPF prog-id=16 op=LOAD
May 10 01:43:12.273000 audit: BPF prog-id=17 op=LOAD
May 10 01:43:12.273000 audit: BPF prog-id=13 op=UNLOAD
May 10 01:43:12.273000 audit: BPF prog-id=14 op=UNLOAD
May 10 01:43:12.275000 audit: BPF prog-id=18 op=LOAD
May 10 01:43:12.275000 audit: BPF prog-id=15 op=UNLOAD
May 10 01:43:12.275000 audit: BPF prog-id=19 op=LOAD
May 10 01:43:12.275000 audit: BPF prog-id=20 op=LOAD
May 10 01:43:12.275000 audit: BPF prog-id=16 op=UNLOAD
May 10 01:43:12.275000 audit: BPF prog-id=17 op=UNLOAD
May 10 01:43:12.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.287000 audit: BPF prog-id=18 op=UNLOAD
May 10 01:43:12.563980 systemd[1]: Finished modprobe@drm.service.
May 10 01:43:12.564090 systemd[1]: Started systemd-journald.service.
May 10 01:43:12.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.454000 audit: BPF prog-id=21 op=LOAD
May 10 01:43:12.454000 audit: BPF prog-id=22 op=LOAD
May 10 01:43:12.454000 audit: BPF prog-id=23 op=LOAD
May 10 01:43:12.454000 audit: BPF prog-id=19 op=UNLOAD
May 10 01:43:12.454000 audit: BPF prog-id=20 op=UNLOAD
May 10 01:43:12.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.523000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 10 01:43:12.523000 audit[972]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffcb38a4930 a2=4000 a3=7ffcb38a49cc items=0 ppid=1 pid=972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 01:43:12.523000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 10 01:43:12.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.266543 systemd[1]: Queued start job for default target multi-user.target.
May 10 01:43:09.083282 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 01:43:12.266562 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 10 01:43:09.083953 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 01:43:12.276376 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 01:43:09.083993 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 01:43:12.549716 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 01:43:09.084078 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 10 01:43:12.549920 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 01:43:09.084097 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 10 01:43:09.084151 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 10 01:43:09.084172 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 10 01:43:09.084541 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 10 01:43:09.084609 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 01:43:09.084636 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 01:43:12.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:09.086516 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 10 01:43:12.569052 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 01:43:09.086574 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 10 01:43:09.086605 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 10 01:43:09.086632 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 10 01:43:09.086663 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 10 01:43:09.086688 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 10 01:43:12.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.571209 systemd[1]: Finished modprobe@fuse.service.
May 10 01:43:11.702542 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 01:43:12.572681 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 01:43:11.702984 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 01:43:11.703226 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 01:43:11.703609 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 01:43:11.703698 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 10 01:43:11.703831 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2025-05-10T01:43:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 10 01:43:12.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.574174 systemd[1]: Finished modprobe@loop.service.
May 10 01:43:12.576296 systemd[1]: Finished systemd-modules-load.service.
May 10 01:43:12.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:12.580222 systemd[1]: Finished systemd-network-generator.service.
May 10 01:43:12.581317 systemd[1]: Finished systemd-remount-fs.service.
May 10 01:43:12.582714 systemd[1]: Reached target network-pre.target.
May 10 01:43:12.586346 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 10 01:43:12.588849 systemd[1]: Mounting sys-kernel-config.mount... May 10 01:43:12.589669 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 10 01:43:12.592903 systemd[1]: Starting systemd-hwdb-update.service... May 10 01:43:12.596438 systemd[1]: Starting systemd-journal-flush.service... May 10 01:43:12.598418 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 01:43:12.600615 systemd[1]: Starting systemd-random-seed.service... May 10 01:43:12.602421 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 01:43:12.604489 systemd[1]: Starting systemd-sysctl.service... May 10 01:43:12.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.616802 systemd-journald[972]: Time spent on flushing to /var/log/journal/a5189f173ee345feb0e5de839201eac1 is 46.959ms for 1278 entries. May 10 01:43:12.616802 systemd-journald[972]: System Journal (/var/log/journal/a5189f173ee345feb0e5de839201eac1) is 8.0M, max 584.8M, 576.8M free. May 10 01:43:12.683635 systemd-journald[972]: Received client request to flush runtime journal. May 10 01:43:12.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 01:43:12.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.611928 systemd[1]: Finished flatcar-tmpfiles.service. May 10 01:43:12.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.613433 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 10 01:43:12.614584 systemd[1]: Mounted sys-kernel-config.mount. May 10 01:43:12.620143 systemd[1]: Starting systemd-sysusers.service... May 10 01:43:12.621610 systemd[1]: Finished systemd-random-seed.service. May 10 01:43:12.624529 systemd[1]: Reached target first-boot-complete.target. May 10 01:43:12.674413 systemd[1]: Finished systemd-sysctl.service. May 10 01:43:12.675990 systemd[1]: Finished systemd-sysusers.service. May 10 01:43:12.679604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 10 01:43:12.684664 systemd[1]: Finished systemd-journal-flush.service. May 10 01:43:12.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.719277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 10 01:43:12.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:12.753250 systemd[1]: Finished systemd-udev-trigger.service. May 10 01:43:12.755742 systemd[1]: Starting systemd-udev-settle.service... 
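The torcx-generator entry above seals its state as EnvironmentFile-style `KEY="value"` pairs written to /run/metadata/torcx. A minimal sketch of reading that format (the helper name `parse_torcx_metadata` is hypothetical; the sealed content is taken verbatim from the log):

```python
import re

def parse_torcx_metadata(text):
    # Hypothetical helper: extract KEY="value" pairs from an
    # EnvironmentFile-style line such as torcx writes to /run/metadata/torcx.
    return {key: value for key, value in re.findall(r'(\w+)="([^"]*)"', text)}

# Sealed content as reported by torcx-generator[899] in the log above:
sealed = ('TORCX_LOWER_PROFILES="vendor" TORCX_UPPER_PROFILE="" '
          'TORCX_PROFILE_PATH="/run/torcx/profile.json" '
          'TORCX_BINDIR="/run/torcx/bin" TORCX_UNPACKDIR="/run/torcx/unpack"')

meta = parse_torcx_metadata(sealed)
print(meta["TORCX_BINDIR"])   # /run/torcx/bin
```

Note that `TORCX_UPPER_PROFILE` is empty, matching the `upper profile=` field in the "profile applied" entry: only the vendor profile is in effect on this boot.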
May 10 01:43:12.767600 udevadm[1010]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 10 01:43:13.311448 systemd[1]: Finished systemd-hwdb-update.service.
May 10 01:43:13.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:13.315000 audit: BPF prog-id=24 op=LOAD
May 10 01:43:13.315000 audit: BPF prog-id=25 op=LOAD
May 10 01:43:13.315000 audit: BPF prog-id=7 op=UNLOAD
May 10 01:43:13.315000 audit: BPF prog-id=8 op=UNLOAD
May 10 01:43:13.316248 systemd[1]: Starting systemd-udevd.service...
May 10 01:43:13.342236 systemd-udevd[1011]: Using default interface naming scheme 'v252'.
May 10 01:43:13.379234 kernel: kauditd_printk_skb: 113 callbacks suppressed
May 10 01:43:13.379352 kernel: audit: type=1130 audit(1746841393.375:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:13.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 01:43:13.374653 systemd[1]: Started systemd-udevd.service.
May 10 01:43:13.379901 systemd[1]: Starting systemd-networkd.service...
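The kernel-printed audit records above carry an `audit(EPOCH:SERIAL)` stamp; the `type=1130` record for systemd-udevd pairs with an identical SERVICE_START entry, and its epoch 1746841393 corresponds to the 01:43:13 wall-clock timestamps. A simplified sketch of splitting such a record into its parts (the helper `parse_audit` is hypothetical, handles only flat `key=value` fields, and the sample line is a shortened version of the log record):

```python
import re

def parse_audit(line):
    # Hypothetical, simplified parser for kernel-printed audit lines of the
    # form "type=NNNN audit(EPOCH:SERIAL): key=value ...". Nested msg='...'
    # payloads from real records are not unwrapped here.
    m = re.search(r"type=(\d+) audit\((\d+\.\d+):(\d+)\):\s*(.*)", line)
    if not m:
        return None
    atype, ts, serial, rest = m.groups()
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', rest))
    return int(atype), float(ts), int(serial), fields

# Shortened form of the type=1130 record logged for systemd-udevd:
line = ("audit: type=1130 audit(1746841393.375:153): pid=1 uid=0 "
        "auid=4294967295 ses=4294967295 res=success")
atype, ts, serial, fields = parse_audit(line)
print(atype, serial, fields["res"])   # 1130 153 success
```

The `kauditd_printk_skb: 113 callbacks suppressed` line just before indicates the kernel rate-limited a burst of such records.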
May 10 01:43:13.376000 audit: BPF prog-id=26 op=LOAD May 10 01:43:13.385034 kernel: audit: type=1334 audit(1746841393.376:154): prog-id=26 op=LOAD May 10 01:43:13.396855 kernel: audit: type=1334 audit(1746841393.391:155): prog-id=27 op=LOAD May 10 01:43:13.396928 kernel: audit: type=1334 audit(1746841393.393:156): prog-id=28 op=LOAD May 10 01:43:13.397062 kernel: audit: type=1334 audit(1746841393.395:157): prog-id=29 op=LOAD May 10 01:43:13.391000 audit: BPF prog-id=27 op=LOAD May 10 01:43:13.393000 audit: BPF prog-id=28 op=LOAD May 10 01:43:13.395000 audit: BPF prog-id=29 op=LOAD May 10 01:43:13.398179 systemd[1]: Starting systemd-userdbd.service... May 10 01:43:13.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.449131 systemd[1]: Started systemd-userdbd.service. May 10 01:43:13.456053 kernel: audit: type=1130 audit(1746841393.449:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.475499 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 10 01:43:13.564434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 10 01:43:13.579582 systemd-networkd[1014]: lo: Link UP May 10 01:43:13.579594 systemd-networkd[1014]: lo: Gained carrier May 10 01:43:13.580465 systemd-networkd[1014]: Enumeration completed May 10 01:43:13.580591 systemd[1]: Started systemd-networkd.service. May 10 01:43:13.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 01:43:13.581430 systemd-networkd[1014]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 10 01:43:13.587058 kernel: audit: type=1130 audit(1746841393.581:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.588199 systemd-networkd[1014]: eth0: Link UP May 10 01:43:13.588212 systemd-networkd[1014]: eth0: Gained carrier May 10 01:43:13.603063 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 10 01:43:13.603211 systemd-networkd[1014]: eth0: DHCPv4 address 10.230.47.106/30, gateway 10.230.47.105 acquired from 10.230.47.105 May 10 01:43:13.628071 kernel: ACPI: button: Power Button [PWRF] May 10 01:43:13.648037 kernel: mousedev: PS/2 mouse device common for all mice May 10 01:43:13.658000 audit[1025]: AVC avc: denied { confidentiality } for pid=1025 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 01:43:13.668039 kernel: audit: type=1400 audit(1746841393.658:160): avc: denied { confidentiality } for pid=1025 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 10 01:43:13.658000 audit[1025]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558419d70b60 a1=338ac a2=7f102a4e8bc5 a3=5 items=110 ppid=1011 pid=1025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 01:43:13.687065 kernel: audit: type=1300 audit(1746841393.658:160): arch=c000003e syscall=175 success=yes exit=0 a0=558419d70b60 a1=338ac a2=7f102a4e8bc5 a3=5 items=110 ppid=1011 pid=1025 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 01:43:13.658000 audit: CWD cwd="/" May 10 01:43:13.658000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.690105 kernel: audit: type=1307 audit(1746841393.658:160): cwd="/" May 10 01:43:13.658000 audit: PATH item=1 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=2 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=3 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=4 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=5 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=6 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=7 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=8 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=9 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=10 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=11 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=12 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=13 name=(null) inode=13937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=14 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=15 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=16 name=(null) inode=13934 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
01:43:13.658000 audit: PATH item=17 name=(null) inode=13939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=18 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=19 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=20 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=21 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=22 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=23 name=(null) inode=13942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=24 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=25 name=(null) inode=13943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=26 
name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=27 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=28 name=(null) inode=13940 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=29 name=(null) inode=13945 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=30 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=31 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=32 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=33 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=34 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=35 name=(null) inode=13948 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=36 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=37 name=(null) inode=13949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=38 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=39 name=(null) inode=13950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=40 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=41 name=(null) inode=13951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=42 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=43 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=44 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=45 name=(null) inode=13953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=46 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=47 name=(null) inode=13954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=48 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=49 name=(null) inode=13955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=50 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=51 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=52 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=53 name=(null) inode=13957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=55 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=56 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=57 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=58 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=59 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=60 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=61 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=62 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=63 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=64 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=65 name=(null) inode=13963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=66 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=67 name=(null) inode=13964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=68 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=69 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=70 name=(null) inode=13961 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=71 name=(null) inode=13966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
01:43:13.658000 audit: PATH item=72 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=73 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=74 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=75 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=76 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=77 name=(null) inode=13969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=78 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=79 name=(null) inode=13970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=80 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=81 
name=(null) inode=13971 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=82 name=(null) inode=13967 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=83 name=(null) inode=13972 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=84 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=85 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=86 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=87 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=88 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=89 name=(null) inode=13975 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=90 name=(null) inode=13973 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=91 name=(null) inode=13976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=92 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=93 name=(null) inode=13977 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=94 name=(null) inode=13973 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=95 name=(null) inode=13978 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=96 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=97 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=98 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=99 name=(null) inode=13980 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=100 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=101 name=(null) inode=13981 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=102 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=103 name=(null) inode=13982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=104 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=105 name=(null) inode=13983 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=106 name=(null) inode=13979 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=107 name=(null) inode=13984 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PATH item=109 name=(null) inode=13986 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 01:43:13.658000 audit: PROCTITLE proctitle="(udev-worker)" May 10 01:43:13.713037 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 10 01:43:13.747293 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 10 01:43:13.747550 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 10 01:43:13.747824 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 May 10 01:43:13.887601 systemd[1]: Finished systemd-udev-settle.service. May 10 01:43:13.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.890058 systemd[1]: Starting lvm2-activation-early.service... May 10 01:43:13.922350 lvm[1040]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 01:43:13.956544 systemd[1]: Finished lvm2-activation-early.service. May 10 01:43:13.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.957539 systemd[1]: Reached target cryptsetup.target. May 10 01:43:13.959781 systemd[1]: Starting lvm2-activation.service... May 10 01:43:13.966072 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 01:43:13.990513 systemd[1]: Finished lvm2-activation.service. 
May 10 01:43:13.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:13.991388 systemd[1]: Reached target local-fs-pre.target. May 10 01:43:13.992027 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 01:43:13.992076 systemd[1]: Reached target local-fs.target. May 10 01:43:13.992639 systemd[1]: Reached target machines.target. May 10 01:43:13.995072 systemd[1]: Starting ldconfig.service... May 10 01:43:13.996259 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 01:43:13.996310 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:13.997676 systemd[1]: Starting systemd-boot-update.service... May 10 01:43:14.000457 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 01:43:14.005242 systemd[1]: Starting systemd-machine-id-commit.service... May 10 01:43:14.009139 systemd[1]: Starting systemd-sysext.service... May 10 01:43:14.014598 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1043 (bootctl) May 10 01:43:14.016067 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 01:43:14.030887 systemd[1]: Unmounting usr-share-oem.mount... May 10 01:43:14.153344 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 01:43:14.153617 systemd[1]: Unmounted usr-share-oem.mount. May 10 01:43:14.190520 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 10 01:43:14.191379 systemd[1]: Finished systemd-machine-id-commit.service. 
May 10 01:43:14.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.204028 kernel: loop0: detected capacity change from 0 to 210664 May 10 01:43:14.203690 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 01:43:14.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.241052 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 01:43:14.262054 kernel: loop1: detected capacity change from 0 to 210664 May 10 01:43:14.280824 (sd-sysext)[1057]: Using extensions 'kubernetes'. May 10 01:43:14.281516 (sd-sysext)[1057]: Merged extensions into '/usr'. May 10 01:43:14.295030 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31) May 10 01:43:14.295030 systemd-fsck[1054]: /dev/vda1: 790 files, 120688/258078 clusters May 10 01:43:14.322220 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 01:43:14.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.326710 systemd[1]: Mounting boot.mount... May 10 01:43:14.327392 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 01:43:14.334818 systemd[1]: Mounting usr-share-oem.mount... May 10 01:43:14.336746 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 01:43:14.340183 systemd[1]: Starting modprobe@dm_mod.service... 
May 10 01:43:14.342776 systemd[1]: Starting modprobe@efi_pstore.service... May 10 01:43:14.346590 systemd[1]: Starting modprobe@loop.service... May 10 01:43:14.347412 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 01:43:14.347601 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:14.347828 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 01:43:14.353744 systemd[1]: Mounted boot.mount. May 10 01:43:14.356754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 01:43:14.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.356983 systemd[1]: Finished modprobe@dm_mod.service. May 10 01:43:14.360534 systemd[1]: Mounted usr-share-oem.mount. May 10 01:43:14.361742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 01:43:14.362589 systemd[1]: Finished modprobe@efi_pstore.service. May 10 01:43:14.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 10 01:43:14.363912 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 01:43:14.366619 systemd[1]: Finished modprobe@loop.service. May 10 01:43:14.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.370618 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 01:43:14.370793 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 01:43:14.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.374239 systemd[1]: Finished systemd-sysext.service. May 10 01:43:14.376781 systemd[1]: Starting ensure-sysext.service... May 10 01:43:14.384224 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 01:43:14.391653 systemd[1]: Finished systemd-boot-update.service. May 10 01:43:14.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.396621 systemd[1]: Reloading. May 10 01:43:14.406211 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 01:43:14.411893 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
May 10 01:43:14.421189 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 01:43:14.557143 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-10T01:43:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 01:43:14.557213 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-05-10T01:43:14Z" level=info msg="torcx already run" May 10 01:43:14.617963 ldconfig[1042]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 01:43:14.636236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 01:43:14.636265 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 01:43:14.663052 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 10 01:43:14.742000 audit: BPF prog-id=30 op=LOAD May 10 01:43:14.742000 audit: BPF prog-id=31 op=LOAD May 10 01:43:14.742000 audit: BPF prog-id=24 op=UNLOAD May 10 01:43:14.742000 audit: BPF prog-id=25 op=UNLOAD May 10 01:43:14.743000 audit: BPF prog-id=32 op=LOAD May 10 01:43:14.743000 audit: BPF prog-id=21 op=UNLOAD May 10 01:43:14.743000 audit: BPF prog-id=33 op=LOAD May 10 01:43:14.743000 audit: BPF prog-id=34 op=LOAD May 10 01:43:14.743000 audit: BPF prog-id=22 op=UNLOAD May 10 01:43:14.743000 audit: BPF prog-id=23 op=UNLOAD May 10 01:43:14.748000 audit: BPF prog-id=35 op=LOAD May 10 01:43:14.748000 audit: BPF prog-id=26 op=UNLOAD May 10 01:43:14.749000 audit: BPF prog-id=36 op=LOAD May 10 01:43:14.749000 audit: BPF prog-id=27 op=UNLOAD May 10 01:43:14.750000 audit: BPF prog-id=37 op=LOAD May 10 01:43:14.750000 audit: BPF prog-id=38 op=LOAD May 10 01:43:14.750000 audit: BPF prog-id=28 op=UNLOAD May 10 01:43:14.750000 audit: BPF prog-id=29 op=UNLOAD May 10 01:43:14.763263 systemd[1]: Finished ldconfig.service. May 10 01:43:14.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.765470 systemd[1]: Finished systemd-tmpfiles-setup.service. May 10 01:43:14.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.772965 systemd[1]: Starting audit-rules.service... May 10 01:43:14.775125 systemd[1]: Starting clean-ca-certificates.service... May 10 01:43:14.778255 systemd[1]: Starting systemd-journal-catalog-update.service... May 10 01:43:14.783000 audit: BPF prog-id=39 op=LOAD May 10 01:43:14.787000 audit: BPF prog-id=40 op=LOAD May 10 01:43:14.785199 systemd[1]: Starting systemd-resolved.service... 
May 10 01:43:14.788719 systemd[1]: Starting systemd-timesyncd.service... May 10 01:43:14.791883 systemd[1]: Starting systemd-update-utmp.service... May 10 01:43:14.794245 systemd[1]: Finished clean-ca-certificates.service. May 10 01:43:14.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.800388 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 01:43:14.810197 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 01:43:14.812486 systemd[1]: Starting modprobe@dm_mod.service... May 10 01:43:14.815506 systemd[1]: Starting modprobe@efi_pstore.service... May 10 01:43:14.818990 systemd[1]: Starting modprobe@loop.service... May 10 01:43:14.819693 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 01:43:14.819928 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:14.820162 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 01:43:14.822612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 01:43:14.823051 systemd[1]: Finished modprobe@dm_mod.service. May 10 01:43:14.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 01:43:14.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.827959 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 01:43:14.830856 systemd[1]: Starting modprobe@dm_mod.service... May 10 01:43:14.831603 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 01:43:14.831872 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:14.832113 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 01:43:14.837413 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 01:43:14.838107 systemd[1]: Finished modprobe@loop.service. May 10 01:43:14.838000 audit[1139]: SYSTEM_BOOT pid=1139 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 10 01:43:14.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.839843 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 01:43:14.840056 systemd[1]: Finished modprobe@efi_pstore.service. 
May 10 01:43:14.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.843917 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 01:43:14.845972 systemd[1]: Starting modprobe@drm.service... May 10 01:43:14.846794 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 01:43:14.846952 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:14.849413 systemd[1]: Starting systemd-networkd-wait-online.service... May 10 01:43:14.850236 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 01:43:14.850522 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 10 01:43:14.853439 systemd[1]: Finished ensure-sysext.service. May 10 01:43:14.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.855402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 01:43:14.855591 systemd[1]: Finished modprobe@dm_mod.service. 
May 10 01:43:14.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.857880 systemd[1]: modprobe@drm.service: Deactivated successfully. May 10 01:43:14.858125 systemd[1]: Finished modprobe@drm.service. May 10 01:43:14.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.861339 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 01:43:14.863625 systemd[1]: Finished systemd-update-utmp.service. May 10 01:43:14.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.881556 systemd[1]: Finished systemd-journal-catalog-update.service. May 10 01:43:14.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.884519 systemd[1]: Starting systemd-update-done.service... 
May 10 01:43:14.901187 systemd[1]: Finished systemd-update-done.service. May 10 01:43:14.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 01:43:14.925000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 10 01:43:14.925000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff6f43cd40 a2=420 a3=0 items=0 ppid=1133 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 10 01:43:14.925000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 10 01:43:14.926147 augenrules[1160]: No rules May 10 01:43:14.926321 systemd[1]: Finished audit-rules.service. May 10 01:43:14.942661 systemd[1]: Started systemd-timesyncd.service. May 10 01:43:14.943559 systemd[1]: Reached target time-set.target. May 10 01:43:14.944261 systemd-resolved[1137]: Positive Trust Anchors: May 10 01:43:14.944279 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 01:43:14.944317 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 10 01:43:14.951839 systemd-resolved[1137]: Using system hostname 'srv-yxh38.gb1.brightbox.com'. 
May 10 01:43:14.954487 systemd[1]: Started systemd-resolved.service. May 10 01:43:14.955297 systemd[1]: Reached target network.target. May 10 01:43:14.955895 systemd[1]: Reached target nss-lookup.target. May 10 01:43:14.956602 systemd[1]: Reached target sysinit.target. May 10 01:43:14.957335 systemd[1]: Started motdgen.path. May 10 01:43:14.957918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 10 01:43:14.958849 systemd[1]: Started logrotate.timer. May 10 01:43:14.959521 systemd[1]: Started mdadm.timer. May 10 01:43:14.960073 systemd[1]: Started systemd-tmpfiles-clean.timer. May 10 01:43:14.960662 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 10 01:43:14.960718 systemd[1]: Reached target paths.target. May 10 01:43:14.961363 systemd[1]: Reached target timers.target. May 10 01:43:14.962420 systemd[1]: Listening on dbus.socket. May 10 01:43:14.964664 systemd[1]: Starting docker.socket... May 10 01:43:14.969137 systemd[1]: Listening on sshd.socket. May 10 01:43:14.969898 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:14.970527 systemd[1]: Listening on docker.socket. May 10 01:43:14.971431 systemd[1]: Reached target sockets.target. May 10 01:43:14.972032 systemd[1]: Reached target basic.target. May 10 01:43:14.972681 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 01:43:14.972733 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 10 01:43:14.974680 systemd[1]: Starting containerd.service... May 10 01:43:14.977849 systemd[1]: Starting coreos-metadata-sshkeys@core.service... May 10 01:43:14.980076 systemd[1]: Starting dbus.service... 
May 10 01:43:14.983728 systemd[1]: Starting enable-oem-cloudinit.service... May 10 01:43:15.777866 systemd-resolved[1137]: Clock change detected. Flushing caches. May 10 01:43:15.778192 systemd-timesyncd[1138]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). May 10 01:43:15.778707 systemd-timesyncd[1138]: Initial clock synchronization to Sat 2025-05-10 01:43:15.777799 UTC. May 10 01:43:15.779245 systemd[1]: Starting extend-filesystems.service... May 10 01:43:15.780264 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 10 01:43:15.784237 systemd[1]: Starting motdgen.service... May 10 01:43:15.789812 systemd[1]: Starting ssh-key-proc-cmdline.service... May 10 01:43:15.792380 systemd[1]: Starting sshd-keygen.service... May 10 01:43:15.800162 systemd[1]: Starting systemd-logind.service... May 10 01:43:15.800991 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 01:43:15.801177 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 10 01:43:15.802067 jq[1173]: false May 10 01:43:15.808079 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 10 01:43:15.815231 systemd[1]: Starting update-engine.service... May 10 01:43:15.818267 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 10 01:43:15.825107 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 10 01:43:15.825547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 10 01:43:15.826490 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
May 10 01:43:15.827881 systemd[1]: Finished ssh-key-proc-cmdline.service. May 10 01:43:15.837554 jq[1190]: true May 10 01:43:15.858478 systemd[1]: motdgen.service: Deactivated successfully. May 10 01:43:15.858799 systemd[1]: Finished motdgen.service. May 10 01:43:15.868109 jq[1194]: true May 10 01:43:15.896016 dbus-daemon[1170]: [system] SELinux support is enabled May 10 01:43:15.896278 systemd[1]: Started dbus.service. May 10 01:43:15.899971 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 10 01:43:15.900026 systemd[1]: Reached target system-config.target. May 10 01:43:15.900761 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 10 01:43:15.900825 systemd[1]: Reached target user-config.target. May 10 01:43:15.906571 dbus-daemon[1170]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1014 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 10 01:43:15.907541 extend-filesystems[1174]: Found loop1 May 10 01:43:15.907541 extend-filesystems[1174]: Found vda May 10 01:43:15.912066 systemd[1]: Starting systemd-hostnamed.service... 
May 10 01:43:15.916170 extend-filesystems[1174]: Found vda1 May 10 01:43:15.916170 extend-filesystems[1174]: Found vda2 May 10 01:43:15.916170 extend-filesystems[1174]: Found vda3 May 10 01:43:15.916170 extend-filesystems[1174]: Found usr May 10 01:43:15.916170 extend-filesystems[1174]: Found vda4 May 10 01:43:15.916170 extend-filesystems[1174]: Found vda6 May 10 01:43:15.916170 extend-filesystems[1174]: Found vda7 May 10 01:43:15.916170 extend-filesystems[1174]: Found vda9 May 10 01:43:15.916170 extend-filesystems[1174]: Checking size of /dev/vda9 May 10 01:43:16.008702 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks May 10 01:43:16.039948 env[1193]: time="2025-05-10T01:43:15.983008926Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 10 01:43:15.940063 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 01:43:16.040478 update_engine[1185]: I0510 01:43:15.956627 1185 main.cc:92] Flatcar Update Engine starting May 10 01:43:16.040478 update_engine[1185]: I0510 01:43:15.961879 1185 update_check_scheduler.cc:74] Next update check in 5m55s May 10 01:43:16.042001 extend-filesystems[1174]: Resized partition /dev/vda9 May 10 01:43:15.940121 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 01:43:16.043972 extend-filesystems[1218]: resize2fs 1.46.5 (30-Dec-2021) May 10 01:43:15.961651 systemd[1]: Started update-engine.service. May 10 01:43:16.013161 systemd[1]: Created slice system-sshd.slice. May 10 01:43:16.017475 systemd[1]: Started locksmithd.service. May 10 01:43:16.041301 systemd-logind[1179]: Watching system buttons on /dev/input/event2 (Power Button) May 10 01:43:16.041357 systemd-logind[1179]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 10 01:43:16.041758 systemd-logind[1179]: New seat seat0. 
May 10 01:43:16.046958 systemd[1]: Started systemd-logind.service. May 10 01:43:16.063969 bash[1222]: Updated "/home/core/.ssh/authorized_keys" May 10 01:43:16.064467 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 10 01:43:16.113891 env[1193]: time="2025-05-10T01:43:16.113840641Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 10 01:43:16.114559 env[1193]: time="2025-05-10T01:43:16.114526579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.124758308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.124827256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.125134837Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.125179920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.125199255Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.125214659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 10 01:43:16.125994 env[1193]: time="2025-05-10T01:43:16.125381450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 10 01:43:16.126922 env[1193]: time="2025-05-10T01:43:16.126457367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 10 01:43:16.126922 env[1193]: time="2025-05-10T01:43:16.126691458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 10 01:43:16.126922 env[1193]: time="2025-05-10T01:43:16.126718643Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 10 01:43:16.126922 env[1193]: time="2025-05-10T01:43:16.126846098Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 10 01:43:16.126922 env[1193]: time="2025-05-10T01:43:16.126866491Z" level=info msg="metadata content store policy set" policy=shared May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140324683Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140385174Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140406856Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140468601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140492051Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140512345Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140536881Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 10 01:43:16.140607 env[1193]: time="2025-05-10T01:43:16.140557449Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.140587885Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.141111131Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.141134916Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.141161114Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.141323363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 10 01:43:16.141623 env[1193]: time="2025-05-10T01:43:16.141510292Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142215708Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142266783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142289796Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142404315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142427923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142453573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142477698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142497089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142523723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142541428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142558525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142591218Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142813725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142840468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 10 01:43:16.144036 env[1193]: time="2025-05-10T01:43:16.142885046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 10 01:43:16.142622 systemd[1]: Started systemd-hostnamed.service. May 10 01:43:16.142429 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.hostname1' May 10 01:43:16.144975 env[1193]: time="2025-05-10T01:43:16.142903089Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 10 01:43:16.144975 env[1193]: time="2025-05-10T01:43:16.142944393Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 10 01:43:16.144975 env[1193]: time="2025-05-10T01:43:16.142959454Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 10 01:43:16.144975 env[1193]: time="2025-05-10T01:43:16.143006572Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 10 01:43:16.144975 env[1193]: time="2025-05-10T01:43:16.143084723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 10 01:43:16.144661 dbus-daemon[1170]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1206 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 10 01:43:16.145399 env[1193]: time="2025-05-10T01:43:16.143350014Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true 
DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 10 01:43:16.145399 env[1193]: time="2025-05-10T01:43:16.143434324Z" level=info msg="Connect containerd service" May 10 01:43:16.145399 env[1193]: time="2025-05-10T01:43:16.143512737Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 10 01:43:16.147724 env[1193]: time="2025-05-10T01:43:16.146683619Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 01:43:16.148050 env[1193]: time="2025-05-10T01:43:16.148003791Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 10 01:43:16.148208 env[1193]: time="2025-05-10T01:43:16.148183691Z" level=info msg=serving... address=/run/containerd/containerd.sock May 10 01:43:16.148944 systemd[1]: Starting polkit.service... 
May 10 01:43:16.150139 env[1193]: time="2025-05-10T01:43:16.150064459Z" level=info msg="Start subscribing containerd event" May 10 01:43:16.150204 env[1193]: time="2025-05-10T01:43:16.150166577Z" level=info msg="Start recovering state" May 10 01:43:16.150376 env[1193]: time="2025-05-10T01:43:16.150276173Z" level=info msg="Start event monitor" May 10 01:43:16.150376 env[1193]: time="2025-05-10T01:43:16.150303928Z" level=info msg="Start snapshots syncer" May 10 01:43:16.150376 env[1193]: time="2025-05-10T01:43:16.150324943Z" level=info msg="Start cni network conf syncer for default" May 10 01:43:16.150376 env[1193]: time="2025-05-10T01:43:16.150349388Z" level=info msg="Start streaming server" May 10 01:43:16.150694 systemd[1]: Started containerd.service. May 10 01:43:16.151233 env[1193]: time="2025-05-10T01:43:16.151202483Z" level=info msg="containerd successfully booted in 0.168594s" May 10 01:43:16.162601 kernel: EXT4-fs (vda9): resized filesystem to 15121403 May 10 01:43:16.170598 polkitd[1229]: Started polkitd version 121 May 10 01:43:16.185381 extend-filesystems[1218]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 10 01:43:16.185381 extend-filesystems[1218]: old_desc_blocks = 1, new_desc_blocks = 8 May 10 01:43:16.185381 extend-filesystems[1218]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. May 10 01:43:16.190453 extend-filesystems[1174]: Resized filesystem in /dev/vda9 May 10 01:43:16.186790 systemd[1]: extend-filesystems.service: Deactivated successfully. May 10 01:43:16.187044 systemd[1]: Finished extend-filesystems.service. 
May 10 01:43:16.195665 polkitd[1229]: Loading rules from directory /etc/polkit-1/rules.d May 10 01:43:16.195911 polkitd[1229]: Loading rules from directory /usr/share/polkit-1/rules.d May 10 01:43:16.202618 polkitd[1229]: Finished loading, compiling and executing 2 rules May 10 01:43:16.203697 dbus-daemon[1170]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 10 01:43:16.204277 systemd[1]: Started polkit.service. May 10 01:43:16.205436 polkitd[1229]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 10 01:43:16.230862 systemd-hostnamed[1206]: Hostname set to (static) May 10 01:43:16.281457 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 10 01:43:16.324789 systemd-networkd[1014]: eth0: Gained IPv6LL May 10 01:43:16.333930 systemd[1]: Finished systemd-networkd-wait-online.service. May 10 01:43:16.335088 systemd[1]: Reached target network-online.target. May 10 01:43:16.338398 systemd[1]: Starting kubelet.service... May 10 01:43:16.821619 sshd_keygen[1196]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 10 01:43:16.839173 systemd-networkd[1014]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8bda:24:19ff:fee6:2f6a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8bda:24:19ff:fee6:2f6a/64 assigned by NDisc. May 10 01:43:16.839188 systemd-networkd[1014]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. May 10 01:43:16.849206 systemd[1]: Finished sshd-keygen.service. May 10 01:43:16.856434 systemd[1]: Starting issuegen.service... May 10 01:43:16.859701 systemd[1]: Started sshd@0-10.230.47.106:22-139.178.68.195:52148.service. May 10 01:43:16.877871 systemd[1]: issuegen.service: Deactivated successfully. May 10 01:43:16.878136 systemd[1]: Finished issuegen.service. May 10 01:43:16.881637 systemd[1]: Starting systemd-user-sessions.service... 
May 10 01:43:16.894557 systemd[1]: Finished systemd-user-sessions.service. May 10 01:43:16.897819 systemd[1]: Started getty@tty1.service. May 10 01:43:16.902456 systemd[1]: Started serial-getty@ttyS0.service. May 10 01:43:16.904046 systemd[1]: Reached target getty.target. May 10 01:43:17.279607 systemd[1]: Started kubelet.service. May 10 01:43:17.782921 sshd[1252]: Accepted publickey for core from 139.178.68.195 port 52148 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:17.786100 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:17.802891 systemd[1]: Created slice user-500.slice. May 10 01:43:17.807708 systemd[1]: Starting user-runtime-dir@500.service... May 10 01:43:17.821659 systemd-logind[1179]: New session 1 of user core. May 10 01:43:17.828964 systemd[1]: Finished user-runtime-dir@500.service. May 10 01:43:17.834084 systemd[1]: Starting user@500.service... May 10 01:43:17.840175 (systemd)[1270]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:17.958125 systemd[1270]: Queued start job for default target default.target. May 10 01:43:17.959399 systemd[1270]: Reached target paths.target. May 10 01:43:17.959609 systemd[1270]: Reached target sockets.target. May 10 01:43:17.959777 systemd[1270]: Reached target timers.target. May 10 01:43:17.959917 systemd[1270]: Reached target basic.target. May 10 01:43:17.960129 systemd[1270]: Reached target default.target. May 10 01:43:17.960241 systemd[1]: Started user@500.service. May 10 01:43:17.960609 systemd[1270]: Startup finished in 108ms. May 10 01:43:17.962574 systemd[1]: Started session-1.scope. 
May 10 01:43:18.000447 kubelet[1262]: E0510 01:43:18.000384 1262 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 01:43:18.002930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 01:43:18.003154 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 01:43:18.003599 systemd[1]: kubelet.service: Consumed 1.097s CPU time. May 10 01:43:18.588395 systemd[1]: Started sshd@1-10.230.47.106:22-139.178.68.195:52160.service. May 10 01:43:19.470647 sshd[1279]: Accepted publickey for core from 139.178.68.195 port 52160 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:19.473207 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:19.479814 systemd-logind[1179]: New session 2 of user core. May 10 01:43:19.480847 systemd[1]: Started session-2.scope. May 10 01:43:20.088530 sshd[1279]: pam_unix(sshd:session): session closed for user core May 10 01:43:20.092414 systemd[1]: sshd@1-10.230.47.106:22-139.178.68.195:52160.service: Deactivated successfully. May 10 01:43:20.093552 systemd[1]: session-2.scope: Deactivated successfully. May 10 01:43:20.094340 systemd-logind[1179]: Session 2 logged out. Waiting for processes to exit. May 10 01:43:20.095420 systemd-logind[1179]: Removed session 2. May 10 01:43:20.237112 systemd[1]: Started sshd@2-10.230.47.106:22-139.178.68.195:52170.service. 
May 10 01:43:21.130439 sshd[1286]: Accepted publickey for core from 139.178.68.195 port 52170 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:21.133305 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:21.140431 systemd-logind[1179]: New session 3 of user core. May 10 01:43:21.140852 systemd[1]: Started session-3.scope. May 10 01:43:21.760350 sshd[1286]: pam_unix(sshd:session): session closed for user core May 10 01:43:21.764158 systemd-logind[1179]: Session 3 logged out. Waiting for processes to exit. May 10 01:43:21.764746 systemd[1]: sshd@2-10.230.47.106:22-139.178.68.195:52170.service: Deactivated successfully. May 10 01:43:21.765822 systemd[1]: session-3.scope: Deactivated successfully. May 10 01:43:21.767030 systemd-logind[1179]: Removed session 3. May 10 01:43:22.914236 coreos-metadata[1169]: May 10 01:43:22.914 WARN failed to locate config-drive, using the metadata service API instead May 10 01:43:22.966857 coreos-metadata[1169]: May 10 01:43:22.966 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 10 01:43:22.998058 coreos-metadata[1169]: May 10 01:43:22.997 INFO Fetch successful May 10 01:43:22.998326 coreos-metadata[1169]: May 10 01:43:22.998 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 10 01:43:23.030550 coreos-metadata[1169]: May 10 01:43:23.030 INFO Fetch successful May 10 01:43:23.032746 unknown[1169]: wrote ssh authorized keys file for user: core May 10 01:43:23.046605 update-ssh-keys[1293]: Updated "/home/core/.ssh/authorized_keys" May 10 01:43:23.047743 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 10 01:43:23.048288 systemd[1]: Reached target multi-user.target. May 10 01:43:23.050521 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 10 01:43:23.061237 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
May 10 01:43:23.061464 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 10 01:43:23.064767 systemd[1]: Startup finished in 1.110s (kernel) + 6.117s (initrd) + 13.497s (userspace) = 20.725s. May 10 01:43:28.079360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 01:43:28.079784 systemd[1]: Stopped kubelet.service. May 10 01:43:28.079858 systemd[1]: kubelet.service: Consumed 1.097s CPU time. May 10 01:43:28.081994 systemd[1]: Starting kubelet.service... May 10 01:43:28.214710 systemd[1]: Started kubelet.service. May 10 01:43:28.316266 kubelet[1299]: E0510 01:43:28.316184 1299 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 01:43:28.319933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 01:43:28.320200 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 01:43:31.905842 systemd[1]: Started sshd@3-10.230.47.106:22-139.178.68.195:35496.service. May 10 01:43:32.787542 sshd[1306]: Accepted publickey for core from 139.178.68.195 port 35496 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:32.790101 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:32.798450 systemd-logind[1179]: New session 4 of user core. May 10 01:43:32.798539 systemd[1]: Started session-4.scope. May 10 01:43:33.403710 sshd[1306]: pam_unix(sshd:session): session closed for user core May 10 01:43:33.407188 systemd-logind[1179]: Session 4 logged out. Waiting for processes to exit. May 10 01:43:33.407633 systemd[1]: sshd@3-10.230.47.106:22-139.178.68.195:35496.service: Deactivated successfully. 
May 10 01:43:33.408484 systemd[1]: session-4.scope: Deactivated successfully. May 10 01:43:33.409613 systemd-logind[1179]: Removed session 4. May 10 01:43:33.550214 systemd[1]: Started sshd@4-10.230.47.106:22-139.178.68.195:35508.service. May 10 01:43:34.436253 sshd[1312]: Accepted publickey for core from 139.178.68.195 port 35508 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:34.438243 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:34.445337 systemd-logind[1179]: New session 5 of user core. May 10 01:43:34.445630 systemd[1]: Started session-5.scope. May 10 01:43:35.049737 sshd[1312]: pam_unix(sshd:session): session closed for user core May 10 01:43:35.053620 systemd[1]: sshd@4-10.230.47.106:22-139.178.68.195:35508.service: Deactivated successfully. May 10 01:43:35.054628 systemd[1]: session-5.scope: Deactivated successfully. May 10 01:43:35.055433 systemd-logind[1179]: Session 5 logged out. Waiting for processes to exit. May 10 01:43:35.057094 systemd-logind[1179]: Removed session 5. May 10 01:43:35.197568 systemd[1]: Started sshd@5-10.230.47.106:22-139.178.68.195:35804.service. May 10 01:43:36.084501 sshd[1318]: Accepted publickey for core from 139.178.68.195 port 35804 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:36.086742 sshd[1318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:36.093839 systemd-logind[1179]: New session 6 of user core. May 10 01:43:36.095554 systemd[1]: Started session-6.scope. May 10 01:43:36.705886 sshd[1318]: pam_unix(sshd:session): session closed for user core May 10 01:43:36.710505 systemd-logind[1179]: Session 6 logged out. Waiting for processes to exit. May 10 01:43:36.711091 systemd[1]: sshd@5-10.230.47.106:22-139.178.68.195:35804.service: Deactivated successfully. May 10 01:43:36.712165 systemd[1]: session-6.scope: Deactivated successfully. 
May 10 01:43:36.713207 systemd-logind[1179]: Removed session 6. May 10 01:43:36.860275 systemd[1]: Started sshd@6-10.230.47.106:22-139.178.68.195:35810.service. May 10 01:43:37.766515 sshd[1324]: Accepted publickey for core from 139.178.68.195 port 35810 ssh2: RSA SHA256:YQmh9kay2Fbwp/WeJvefEh7C1hXKeGuPiyso2bRkh84 May 10 01:43:37.768761 sshd[1324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 01:43:37.776231 systemd-logind[1179]: New session 7 of user core. May 10 01:43:37.776984 systemd[1]: Started session-7.scope. May 10 01:43:38.261711 sudo[1327]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 01:43:38.262197 sudo[1327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 01:43:38.287448 systemd[1]: Starting coreos-metadata.service... May 10 01:43:38.329260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 01:43:38.329714 systemd[1]: Stopped kubelet.service. May 10 01:43:38.332009 systemd[1]: Starting kubelet.service... May 10 01:43:38.464754 systemd[1]: Started kubelet.service. May 10 01:43:38.537651 kubelet[1338]: E0510 01:43:38.537434 1338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 01:43:38.540216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 01:43:38.540454 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 10 01:43:45.344083 coreos-metadata[1331]: May 10 01:43:45.343 WARN failed to locate config-drive, using the metadata service API instead May 10 01:43:45.393533 coreos-metadata[1331]: May 10 01:43:45.393 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 10 01:43:45.394668 coreos-metadata[1331]: May 10 01:43:45.394 INFO Fetch successful May 10 01:43:45.394919 coreos-metadata[1331]: May 10 01:43:45.394 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 10 01:43:45.406013 coreos-metadata[1331]: May 10 01:43:45.405 INFO Fetch successful May 10 01:43:45.406277 coreos-metadata[1331]: May 10 01:43:45.406 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 10 01:43:45.420554 coreos-metadata[1331]: May 10 01:43:45.420 INFO Fetch successful May 10 01:43:45.420893 coreos-metadata[1331]: May 10 01:43:45.420 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 10 01:43:45.436223 coreos-metadata[1331]: May 10 01:43:45.435 INFO Fetch successful May 10 01:43:45.436674 coreos-metadata[1331]: May 10 01:43:45.436 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 10 01:43:45.453923 coreos-metadata[1331]: May 10 01:43:45.453 INFO Fetch successful May 10 01:43:45.464016 systemd[1]: Finished coreos-metadata.service. May 10 01:43:46.275270 systemd[1]: Stopped kubelet.service. May 10 01:43:46.278877 systemd[1]: Starting kubelet.service... May 10 01:43:46.316054 systemd[1]: Reloading. 
May 10 01:43:46.442661 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2025-05-10T01:43:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 01:43:46.442842 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2025-05-10T01:43:46Z" level=info msg="torcx already run" May 10 01:43:46.554846 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 01:43:46.555425 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 01:43:46.582796 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 01:43:46.742557 systemd[1]: Started kubelet.service. May 10 01:43:46.751877 systemd[1]: Stopping kubelet.service... May 10 01:43:46.752546 systemd[1]: kubelet.service: Deactivated successfully. May 10 01:43:46.752967 systemd[1]: Stopped kubelet.service. May 10 01:43:46.755660 systemd[1]: Starting kubelet.service... May 10 01:43:46.869885 systemd[1]: Started kubelet.service. May 10 01:43:46.878130 systemd[1]: systemd-hostnamed.service: Deactivated successfully. May 10 01:43:46.930769 kubelet[1461]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 01:43:46.931305 kubelet[1461]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. May 10 01:43:46.931437 kubelet[1461]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 01:43:46.947095 kubelet[1461]: I0510 01:43:46.947001 1461 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 01:43:47.552209 kubelet[1461]: I0510 01:43:47.552163 1461 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 10 01:43:47.552503 kubelet[1461]: I0510 01:43:47.552480 1461 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 01:43:47.552922 kubelet[1461]: I0510 01:43:47.552896 1461 server.go:927] "Client rotation is on, will bootstrap in background" May 10 01:43:47.586572 kubelet[1461]: I0510 01:43:47.586528 1461 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 01:43:47.604457 kubelet[1461]: I0510 01:43:47.604417 1461 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 10 01:43:47.605063 kubelet[1461]: I0510 01:43:47.605014 1461 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 01:43:47.605865 kubelet[1461]: I0510 01:43:47.605194 1461 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.47.106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 10 01:43:47.607978 kubelet[1461]: I0510 01:43:47.607946 1461 topology_manager.go:138] "Creating topology manager with none policy" May 10 
01:43:47.608113 kubelet[1461]: I0510 01:43:47.608091 1461 container_manager_linux.go:301] "Creating device plugin manager" May 10 01:43:47.608437 kubelet[1461]: I0510 01:43:47.608415 1461 state_mem.go:36] "Initialized new in-memory state store" May 10 01:43:47.610250 kubelet[1461]: I0510 01:43:47.610219 1461 kubelet.go:400] "Attempting to sync node with API server" May 10 01:43:47.610339 kubelet[1461]: I0510 01:43:47.610251 1461 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 01:43:47.610339 kubelet[1461]: I0510 01:43:47.610303 1461 kubelet.go:312] "Adding apiserver pod source" May 10 01:43:47.610469 kubelet[1461]: I0510 01:43:47.610342 1461 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 01:43:47.611994 kubelet[1461]: E0510 01:43:47.611966 1461 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:43:47.612227 kubelet[1461]: E0510 01:43:47.612200 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:43:47.615328 kubelet[1461]: I0510 01:43:47.615299 1461 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 01:43:47.617102 kubelet[1461]: I0510 01:43:47.617067 1461 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 01:43:47.617206 kubelet[1461]: W0510 01:43:47.617169 1461 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 10 01:43:47.618194 kubelet[1461]: I0510 01:43:47.618167 1461 server.go:1264] "Started kubelet" May 10 01:43:47.619973 kubelet[1461]: I0510 01:43:47.619909 1461 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 01:43:47.621740 kubelet[1461]: I0510 01:43:47.621716 1461 server.go:455] "Adding debug handlers to kubelet server" May 10 01:43:47.629621 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 10 01:43:47.629816 kubelet[1461]: I0510 01:43:47.627835 1461 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 01:43:47.629816 kubelet[1461]: I0510 01:43:47.628472 1461 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 01:43:47.630186 kubelet[1461]: I0510 01:43:47.630164 1461 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 01:43:47.634863 kubelet[1461]: W0510 01:43:47.634824 1461 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.47.106" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 10 01:43:47.635012 kubelet[1461]: E0510 01:43:47.634877 1461 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.230.47.106" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 10 01:43:47.635083 kubelet[1461]: W0510 01:43:47.635028 1461 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 10 01:43:47.635083 kubelet[1461]: E0510 01:43:47.635055 1461 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 10 01:43:47.640081 kubelet[1461]: I0510 01:43:47.640058 1461 volume_manager.go:291] "Starting Kubelet Volume Manager" May 10 01:43:47.640726 kubelet[1461]: I0510 01:43:47.640701 1461 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 10 01:43:47.640982 kubelet[1461]: I0510 01:43:47.640961 1461 reconciler.go:26] "Reconciler: start to sync state" May 10 01:43:47.641681 kubelet[1461]: E0510 01:43:47.641565 1461 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 01:43:47.642328 kubelet[1461]: I0510 01:43:47.642303 1461 factory.go:221] Registration of the systemd container factory successfully May 10 01:43:47.642570 kubelet[1461]: I0510 01:43:47.642539 1461 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 01:43:47.645671 kubelet[1461]: I0510 01:43:47.645645 1461 factory.go:221] Registration of the containerd container factory successfully May 10 01:43:47.666918 kubelet[1461]: E0510 01:43:47.666870 1461 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.47.106\" not found" node="10.230.47.106" May 10 01:43:47.672248 kubelet[1461]: I0510 01:43:47.672225 1461 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 01:43:47.672248 kubelet[1461]: I0510 01:43:47.672245 1461 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 01:43:47.672473 kubelet[1461]: I0510 01:43:47.672291 1461 state_mem.go:36] "Initialized new in-memory state store" May 10 01:43:47.676340 kubelet[1461]: I0510 01:43:47.676314 1461 policy_none.go:49] "None policy: Start" May 10 01:43:47.677236 kubelet[1461]: I0510 01:43:47.677210 1461 memory_manager.go:170] 
"Starting memorymanager" policy="None"
May 10 01:43:47.677326 kubelet[1461]: I0510 01:43:47.677270 1461 state_mem.go:35] "Initializing new in-memory state store"
May 10 01:43:47.686492 systemd[1]: Created slice kubepods.slice.
May 10 01:43:47.694565 systemd[1]: Created slice kubepods-burstable.slice.
May 10 01:43:47.699400 systemd[1]: Created slice kubepods-besteffort.slice.
May 10 01:43:47.711860 kubelet[1461]: I0510 01:43:47.711813 1461 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 01:43:47.712436 kubelet[1461]: I0510 01:43:47.712377 1461 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 01:43:47.712774 kubelet[1461]: I0510 01:43:47.712745 1461 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 01:43:47.714780 kubelet[1461]: E0510 01:43:47.714752 1461 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.47.106\" not found"
May 10 01:43:47.741785 kubelet[1461]: I0510 01:43:47.741726 1461 kubelet_node_status.go:73] "Attempting to register node" node="10.230.47.106"
May 10 01:43:47.747041 kubelet[1461]: I0510 01:43:47.747014 1461 kubelet_node_status.go:76] "Successfully registered node" node="10.230.47.106"
May 10 01:43:47.760199 kubelet[1461]: E0510 01:43:47.760154 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:47.817641 kubelet[1461]: I0510 01:43:47.815625 1461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 01:43:47.819165 kubelet[1461]: I0510 01:43:47.819137 1461 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 01:43:47.819279 kubelet[1461]: I0510 01:43:47.819192 1461 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 01:43:47.819279 kubelet[1461]: I0510 01:43:47.819220 1461 kubelet.go:2337] "Starting kubelet main sync loop"
May 10 01:43:47.819391 kubelet[1461]: E0510 01:43:47.819304 1461 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
May 10 01:43:47.860824 kubelet[1461]: E0510 01:43:47.860755 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:47.896229 sudo[1327]: pam_unix(sudo:session): session closed for user root
May 10 01:43:47.961212 kubelet[1461]: E0510 01:43:47.961158 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.044852 sshd[1324]: pam_unix(sshd:session): session closed for user core
May 10 01:43:48.048648 systemd-logind[1179]: Session 7 logged out. Waiting for processes to exit.
May 10 01:43:48.050723 systemd[1]: sshd@6-10.230.47.106:22-139.178.68.195:35810.service: Deactivated successfully.
May 10 01:43:48.051795 systemd[1]: session-7.scope: Deactivated successfully.
May 10 01:43:48.053657 systemd-logind[1179]: Removed session 7.
May 10 01:43:48.061833 kubelet[1461]: E0510 01:43:48.061797 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.162927 kubelet[1461]: E0510 01:43:48.162848 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.263735 kubelet[1461]: E0510 01:43:48.263656 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.364713 kubelet[1461]: E0510 01:43:48.364654 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.465783 kubelet[1461]: E0510 01:43:48.465549 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.556078 kubelet[1461]: I0510 01:43:48.556020 1461 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
May 10 01:43:48.556651 kubelet[1461]: W0510 01:43:48.556599 1461 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 10 01:43:48.556781 kubelet[1461]: W0510 01:43:48.556674 1461 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 10 01:43:48.556947 kubelet[1461]: W0510 01:43:48.556705 1461 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
May 10 01:43:48.566331 kubelet[1461]: E0510 01:43:48.566201 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.612696 kubelet[1461]: E0510 01:43:48.612634 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:48.666639 kubelet[1461]: E0510 01:43:48.666522 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.768167 kubelet[1461]: E0510 01:43:48.767502 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.867997 kubelet[1461]: E0510 01:43:48.867951 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:48.969181 kubelet[1461]: E0510 01:43:48.969114 1461 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.230.47.106\" not found"
May 10 01:43:49.072071 kubelet[1461]: I0510 01:43:49.071514 1461 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
May 10 01:43:49.072775 env[1193]: time="2025-05-10T01:43:49.072681829Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 10 01:43:49.073718 kubelet[1461]: I0510 01:43:49.073687 1461 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
May 10 01:43:49.612819 kubelet[1461]: I0510 01:43:49.612765 1461 apiserver.go:52] "Watching apiserver"
May 10 01:43:49.613067 kubelet[1461]: E0510 01:43:49.612788 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:49.624343 kubelet[1461]: I0510 01:43:49.624293 1461 topology_manager.go:215] "Topology Admit Handler" podUID="40c80e5b-6f80-4ffb-9fa7-3292deef4874" podNamespace="kube-system" podName="kube-proxy-vvnk6"
May 10 01:43:49.624525 kubelet[1461]: I0510 01:43:49.624486 1461 topology_manager.go:215] "Topology Admit Handler" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" podNamespace="kube-system" podName="cilium-kmm9b"
May 10 01:43:49.632844 systemd[1]: Created slice kubepods-burstable-pod79f32ecc_4f2e_48d1_972f_dfca021e4899.slice.
May 10 01:43:49.641425 kubelet[1461]: I0510 01:43:49.641397 1461 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 10 01:43:49.644533 systemd[1]: Created slice kubepods-besteffort-pod40c80e5b_6f80_4ffb_9fa7_3292deef4874.slice.
May 10 01:43:49.654739 kubelet[1461]: I0510 01:43:49.654699 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzlp\" (UniqueName: \"kubernetes.io/projected/40c80e5b-6f80-4ffb-9fa7-3292deef4874-kube-api-access-tpzlp\") pod \"kube-proxy-vvnk6\" (UID: \"40c80e5b-6f80-4ffb-9fa7-3292deef4874\") " pod="kube-system/kube-proxy-vvnk6" May 10 01:43:49.655020 kubelet[1461]: I0510 01:43:49.654990 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-cgroup\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.655169 kubelet[1461]: I0510 01:43:49.655142 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-xtables-lock\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.655361 kubelet[1461]: I0510 01:43:49.655334 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-config-path\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.655519 kubelet[1461]: I0510 01:43:49.655481 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-kernel\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.655730 kubelet[1461]: I0510 01:43:49.655704 1461 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40c80e5b-6f80-4ffb-9fa7-3292deef4874-xtables-lock\") pod \"kube-proxy-vvnk6\" (UID: \"40c80e5b-6f80-4ffb-9fa7-3292deef4874\") " pod="kube-system/kube-proxy-vvnk6" May 10 01:43:49.655894 kubelet[1461]: I0510 01:43:49.655864 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40c80e5b-6f80-4ffb-9fa7-3292deef4874-lib-modules\") pod \"kube-proxy-vvnk6\" (UID: \"40c80e5b-6f80-4ffb-9fa7-3292deef4874\") " pod="kube-system/kube-proxy-vvnk6" May 10 01:43:49.655981 kubelet[1461]: I0510 01:43:49.655928 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cni-path\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.655981 kubelet[1461]: I0510 01:43:49.655962 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40c80e5b-6f80-4ffb-9fa7-3292deef4874-kube-proxy\") pod \"kube-proxy-vvnk6\" (UID: \"40c80e5b-6f80-4ffb-9fa7-3292deef4874\") " pod="kube-system/kube-proxy-vvnk6" May 10 01:43:49.656071 kubelet[1461]: I0510 01:43:49.655990 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-bpf-maps\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656071 kubelet[1461]: I0510 01:43:49.656037 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-lib-modules\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656155 kubelet[1461]: I0510 01:43:49.656064 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79f32ecc-4f2e-48d1-972f-dfca021e4899-clustermesh-secrets\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656155 kubelet[1461]: I0510 01:43:49.656108 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-net\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656155 kubelet[1461]: I0510 01:43:49.656132 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-hubble-tls\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656324 kubelet[1461]: I0510 01:43:49.656173 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-hostproc\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656324 kubelet[1461]: I0510 01:43:49.656202 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-etc-cni-netd\") pod \"cilium-kmm9b\" (UID: 
\"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656324 kubelet[1461]: I0510 01:43:49.656229 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-run\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.656324 kubelet[1461]: I0510 01:43:49.656276 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwlz\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-kube-api-access-bgwlz\") pod \"cilium-kmm9b\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") " pod="kube-system/cilium-kmm9b" May 10 01:43:49.946012 env[1193]: time="2025-05-10T01:43:49.943042894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmm9b,Uid:79f32ecc-4f2e-48d1-972f-dfca021e4899,Namespace:kube-system,Attempt:0,}" May 10 01:43:49.957100 env[1193]: time="2025-05-10T01:43:49.957019620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvnk6,Uid:40c80e5b-6f80-4ffb-9fa7-3292deef4874,Namespace:kube-system,Attempt:0,}" May 10 01:43:50.613929 kubelet[1461]: E0510 01:43:50.613851 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:43:50.946259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248493403.mount: Deactivated successfully. 
May 10 01:43:50.956293 env[1193]: time="2025-05-10T01:43:50.956241000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.965548 env[1193]: time="2025-05-10T01:43:50.965501680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.978568 env[1193]: time="2025-05-10T01:43:50.978484815Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.980983 env[1193]: time="2025-05-10T01:43:50.980948238Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.984305 env[1193]: time="2025-05-10T01:43:50.984264653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.987482 env[1193]: time="2025-05-10T01:43:50.987402986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.988836 env[1193]: time="2025-05-10T01:43:50.988797255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:50.990018 env[1193]: time="2025-05-10T01:43:50.989919813Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:51.023192 env[1193]: time="2025-05-10T01:43:51.022873870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:43:51.023192 env[1193]: time="2025-05-10T01:43:51.022963833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:43:51.023192 env[1193]: time="2025-05-10T01:43:51.022997302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:43:51.023633 env[1193]: time="2025-05-10T01:43:51.023545431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:43:51.023795 env[1193]: time="2025-05-10T01:43:51.023630828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:43:51.023795 env[1193]: time="2025-05-10T01:43:51.023649245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:43:51.024055 env[1193]: time="2025-05-10T01:43:51.023974311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9c173f67ee99f072cdaf5185d70915096308198e4803643064ca6d1cb841c51 pid=1525 runtime=io.containerd.runc.v2 May 10 01:43:51.024185 env[1193]: time="2025-05-10T01:43:51.023983743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938 pid=1526 runtime=io.containerd.runc.v2 May 10 01:43:51.052643 systemd[1]: Started cri-containerd-b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938.scope. May 10 01:43:51.054499 systemd[1]: Started cri-containerd-c9c173f67ee99f072cdaf5185d70915096308198e4803643064ca6d1cb841c51.scope. May 10 01:43:51.113372 env[1193]: time="2025-05-10T01:43:51.113318970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vvnk6,Uid:40c80e5b-6f80-4ffb-9fa7-3292deef4874,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9c173f67ee99f072cdaf5185d70915096308198e4803643064ca6d1cb841c51\"" May 10 01:43:51.121242 env[1193]: time="2025-05-10T01:43:51.121168750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 10 01:43:51.122841 env[1193]: time="2025-05-10T01:43:51.122802198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kmm9b,Uid:79f32ecc-4f2e-48d1-972f-dfca021e4899,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\"" May 10 01:43:51.614387 kubelet[1461]: E0510 01:43:51.614325 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:43:52.615264 kubelet[1461]: E0510 01:43:52.615193 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 10 01:43:52.778463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620582403.mount: Deactivated successfully. May 10 01:43:53.615865 kubelet[1461]: E0510 01:43:53.615802 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:43:54.187315 env[1193]: time="2025-05-10T01:43:54.187253139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:54.189273 env[1193]: time="2025-05-10T01:43:54.189238553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:54.192052 env[1193]: time="2025-05-10T01:43:54.192005928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:54.196846 env[1193]: time="2025-05-10T01:43:54.195273314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:43:54.196846 env[1193]: time="2025-05-10T01:43:54.195768883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 10 01:43:54.200053 env[1193]: time="2025-05-10T01:43:54.199903901Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 01:43:54.201778 env[1193]: time="2025-05-10T01:43:54.201717767Z" level=info msg="CreateContainer within sandbox 
\"c9c173f67ee99f072cdaf5185d70915096308198e4803643064ca6d1cb841c51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 01:43:54.218320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531820833.mount: Deactivated successfully. May 10 01:43:54.224986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081916624.mount: Deactivated successfully. May 10 01:43:54.230213 env[1193]: time="2025-05-10T01:43:54.230165178Z" level=info msg="CreateContainer within sandbox \"c9c173f67ee99f072cdaf5185d70915096308198e4803643064ca6d1cb841c51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24b2e0206a12ab8091af17f1ad224a23a8afda53eba1a67c36456808c6aa8b13\"" May 10 01:43:54.231358 env[1193]: time="2025-05-10T01:43:54.231324578Z" level=info msg="StartContainer for \"24b2e0206a12ab8091af17f1ad224a23a8afda53eba1a67c36456808c6aa8b13\"" May 10 01:43:54.257550 systemd[1]: Started cri-containerd-24b2e0206a12ab8091af17f1ad224a23a8afda53eba1a67c36456808c6aa8b13.scope. 
May 10 01:43:54.306462 env[1193]: time="2025-05-10T01:43:54.306354627Z" level=info msg="StartContainer for \"24b2e0206a12ab8091af17f1ad224a23a8afda53eba1a67c36456808c6aa8b13\" returns successfully"
May 10 01:43:54.616841 kubelet[1461]: E0510 01:43:54.616788 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:55.617282 kubelet[1461]: E0510 01:43:55.617213 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:56.618307 kubelet[1461]: E0510 01:43:56.618182 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:57.619490 kubelet[1461]: E0510 01:43:57.619387 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:58.620635 kubelet[1461]: E0510 01:43:58.620555 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:43:59.621079 kubelet[1461]: E0510 01:43:59.621013 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:00.621279 kubelet[1461]: E0510 01:44:00.621198 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:01.194045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442679625.mount: Deactivated successfully.
May 10 01:44:01.347794 update_engine[1185]: I0510 01:44:01.346800 1185 update_attempter.cc:509] Updating boot flags...
May 10 01:44:01.622217 kubelet[1461]: E0510 01:44:01.622110 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:02.622982 kubelet[1461]: E0510 01:44:02.622847 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:03.624888 kubelet[1461]: E0510 01:44:03.624844 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:04.625945 kubelet[1461]: E0510 01:44:04.625900 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:05.556519 env[1193]: time="2025-05-10T01:44:05.556398309Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:05.574843 env[1193]: time="2025-05-10T01:44:05.574791051Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:05.576992 env[1193]: time="2025-05-10T01:44:05.576954852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:05.578268 env[1193]: time="2025-05-10T01:44:05.578227625Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 01:44:05.582881 env[1193]: time="2025-05-10T01:44:05.582842454Z" level=info 
msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 01:44:05.597365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873880637.mount: Deactivated successfully. May 10 01:44:05.604891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88728250.mount: Deactivated successfully. May 10 01:44:05.610679 env[1193]: time="2025-05-10T01:44:05.610612979Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\"" May 10 01:44:05.611869 env[1193]: time="2025-05-10T01:44:05.611825892Z" level=info msg="StartContainer for \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\"" May 10 01:44:05.627869 kubelet[1461]: E0510 01:44:05.627741 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:05.646592 systemd[1]: Started cri-containerd-37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28.scope. May 10 01:44:05.700049 env[1193]: time="2025-05-10T01:44:05.699979930Z" level=info msg="StartContainer for \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\" returns successfully" May 10 01:44:05.712186 systemd[1]: cri-containerd-37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28.scope: Deactivated successfully. 
May 10 01:44:05.892333 kubelet[1461]: I0510 01:44:05.892215 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vvnk6" podStartSLOduration=15.812773687 podStartE2EDuration="18.892162063s" podCreationTimestamp="2025-05-10 01:43:47 +0000 UTC" firstStartedPulling="2025-05-10 01:43:51.119135787 +0000 UTC m=+4.243655117" lastFinishedPulling="2025-05-10 01:43:54.198524157 +0000 UTC m=+7.323043493" observedRunningTime="2025-05-10 01:43:54.850673466 +0000 UTC m=+7.975192799" watchObservedRunningTime="2025-05-10 01:44:05.892162063 +0000 UTC m=+19.016681408" May 10 01:44:06.020470 env[1193]: time="2025-05-10T01:44:06.020405702Z" level=info msg="shim disconnected" id=37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28 May 10 01:44:06.020815 env[1193]: time="2025-05-10T01:44:06.020782096Z" level=warning msg="cleaning up after shim disconnected" id=37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28 namespace=k8s.io May 10 01:44:06.020941 env[1193]: time="2025-05-10T01:44:06.020913626Z" level=info msg="cleaning up dead shim" May 10 01:44:06.032045 env[1193]: time="2025-05-10T01:44:06.031982916Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:44:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1815 runtime=io.containerd.runc.v2\n" May 10 01:44:06.593161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28-rootfs.mount: Deactivated successfully. 
May 10 01:44:06.628599 kubelet[1461]: E0510 01:44:06.628501 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:06.871314 env[1193]: time="2025-05-10T01:44:06.871154916Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 01:44:06.902793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198431668.mount: Deactivated successfully. May 10 01:44:06.910916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount707172956.mount: Deactivated successfully. May 10 01:44:06.919322 env[1193]: time="2025-05-10T01:44:06.919232694Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\"" May 10 01:44:06.921059 env[1193]: time="2025-05-10T01:44:06.921023784Z" level=info msg="StartContainer for \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\"" May 10 01:44:06.945174 systemd[1]: Started cri-containerd-082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e.scope. May 10 01:44:06.995681 env[1193]: time="2025-05-10T01:44:06.995615783Z" level=info msg="StartContainer for \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\" returns successfully" May 10 01:44:07.014432 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 01:44:07.015522 systemd[1]: Stopped systemd-sysctl.service. May 10 01:44:07.015884 systemd[1]: Stopping systemd-sysctl.service... May 10 01:44:07.021730 systemd[1]: Starting systemd-sysctl.service... May 10 01:44:07.023288 systemd[1]: cri-containerd-082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e.scope: Deactivated successfully. 
May 10 01:44:07.036506 systemd[1]: Finished systemd-sysctl.service. May 10 01:44:07.055288 env[1193]: time="2025-05-10T01:44:07.055220384Z" level=info msg="shim disconnected" id=082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e May 10 01:44:07.055288 env[1193]: time="2025-05-10T01:44:07.055286758Z" level=warning msg="cleaning up after shim disconnected" id=082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e namespace=k8s.io May 10 01:44:07.055603 env[1193]: time="2025-05-10T01:44:07.055303971Z" level=info msg="cleaning up dead shim" May 10 01:44:07.066988 env[1193]: time="2025-05-10T01:44:07.066921123Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:44:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1878 runtime=io.containerd.runc.v2\n" May 10 01:44:07.611511 kubelet[1461]: E0510 01:44:07.611389 1461 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:07.629180 kubelet[1461]: E0510 01:44:07.629067 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:07.876616 env[1193]: time="2025-05-10T01:44:07.875870387Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 01:44:07.894042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840475611.mount: Deactivated successfully. May 10 01:44:07.901267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622529147.mount: Deactivated successfully. 
May 10 01:44:07.906861 env[1193]: time="2025-05-10T01:44:07.906811623Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\"" May 10 01:44:07.907831 env[1193]: time="2025-05-10T01:44:07.907798232Z" level=info msg="StartContainer for \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\"" May 10 01:44:07.929312 systemd[1]: Started cri-containerd-e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633.scope. May 10 01:44:07.977426 systemd[1]: cri-containerd-e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633.scope: Deactivated successfully. May 10 01:44:07.980007 env[1193]: time="2025-05-10T01:44:07.979954922Z" level=info msg="StartContainer for \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\" returns successfully" May 10 01:44:08.010770 env[1193]: time="2025-05-10T01:44:08.010695725Z" level=info msg="shim disconnected" id=e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633 May 10 01:44:08.010770 env[1193]: time="2025-05-10T01:44:08.010769186Z" level=warning msg="cleaning up after shim disconnected" id=e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633 namespace=k8s.io May 10 01:44:08.011102 env[1193]: time="2025-05-10T01:44:08.010786410Z" level=info msg="cleaning up dead shim" May 10 01:44:08.022268 env[1193]: time="2025-05-10T01:44:08.022210348Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:44:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1936 runtime=io.containerd.runc.v2\n" May 10 01:44:08.630219 kubelet[1461]: E0510 01:44:08.630154 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:08.880841 env[1193]: time="2025-05-10T01:44:08.880422118Z" level=info 
msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 01:44:08.894778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248569361.mount: Deactivated successfully. May 10 01:44:08.902638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount981281157.mount: Deactivated successfully. May 10 01:44:08.905463 env[1193]: time="2025-05-10T01:44:08.905408270Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\"" May 10 01:44:08.906291 env[1193]: time="2025-05-10T01:44:08.906256992Z" level=info msg="StartContainer for \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\"" May 10 01:44:08.926966 systemd[1]: Started cri-containerd-f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b.scope. May 10 01:44:08.968139 systemd[1]: cri-containerd-f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b.scope: Deactivated successfully. 
May 10 01:44:08.972898 env[1193]: time="2025-05-10T01:44:08.972793812Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f32ecc_4f2e_48d1_972f_dfca021e4899.slice/cri-containerd-f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b.scope/memory.events\": no such file or directory" May 10 01:44:08.974259 env[1193]: time="2025-05-10T01:44:08.974209559Z" level=info msg="StartContainer for \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\" returns successfully" May 10 01:44:09.001963 env[1193]: time="2025-05-10T01:44:09.001904149Z" level=info msg="shim disconnected" id=f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b May 10 01:44:09.001963 env[1193]: time="2025-05-10T01:44:09.001963584Z" level=warning msg="cleaning up after shim disconnected" id=f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b namespace=k8s.io May 10 01:44:09.002243 env[1193]: time="2025-05-10T01:44:09.001981636Z" level=info msg="cleaning up dead shim" May 10 01:44:09.013739 env[1193]: time="2025-05-10T01:44:09.013635644Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:44:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1993 runtime=io.containerd.runc.v2\n" May 10 01:44:09.631882 kubelet[1461]: E0510 01:44:09.631833 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:09.886588 env[1193]: time="2025-05-10T01:44:09.886192442Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 01:44:09.904030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644839608.mount: Deactivated successfully. 
May 10 01:44:09.911767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111496167.mount: Deactivated successfully. May 10 01:44:09.916538 env[1193]: time="2025-05-10T01:44:09.916489985Z" level=info msg="CreateContainer within sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\"" May 10 01:44:09.917635 env[1193]: time="2025-05-10T01:44:09.917597207Z" level=info msg="StartContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\"" May 10 01:44:09.940730 systemd[1]: Started cri-containerd-9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844.scope. May 10 01:44:09.989618 env[1193]: time="2025-05-10T01:44:09.989526038Z" level=info msg="StartContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" returns successfully" May 10 01:44:10.138759 kubelet[1461]: I0510 01:44:10.138629 1461 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 10 01:44:10.633076 kubelet[1461]: E0510 01:44:10.632917 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:10.680628 kernel: Initializing XFRM netlink socket May 10 01:44:10.928733 kubelet[1461]: I0510 01:44:10.928480 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kmm9b" podStartSLOduration=9.472623189 podStartE2EDuration="23.928460287s" podCreationTimestamp="2025-05-10 01:43:47 +0000 UTC" firstStartedPulling="2025-05-10 01:43:51.124071492 +0000 UTC m=+4.248590822" lastFinishedPulling="2025-05-10 01:44:05.579908583 +0000 UTC m=+18.704427920" observedRunningTime="2025-05-10 01:44:10.928418811 +0000 UTC m=+24.052938162" watchObservedRunningTime="2025-05-10 01:44:10.928460287 +0000 UTC m=+24.052979628" May 10 01:44:11.633486 kubelet[1461]: E0510 
01:44:11.633414 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:12.146823 kubelet[1461]: I0510 01:44:12.146736 1461 topology_manager.go:215] "Topology Admit Handler" podUID="69fd6b6b-7e17-445d-bad2-6561600fb63e" podNamespace="default" podName="nginx-deployment-85f456d6dd-grdsg" May 10 01:44:12.155174 systemd[1]: Created slice kubepods-besteffort-pod69fd6b6b_7e17_445d_bad2_6561600fb63e.slice. May 10 01:44:12.204760 kubelet[1461]: I0510 01:44:12.204699 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpk4p\" (UniqueName: \"kubernetes.io/projected/69fd6b6b-7e17-445d-bad2-6561600fb63e-kube-api-access-rpk4p\") pod \"nginx-deployment-85f456d6dd-grdsg\" (UID: \"69fd6b6b-7e17-445d-bad2-6561600fb63e\") " pod="default/nginx-deployment-85f456d6dd-grdsg" May 10 01:44:12.398418 systemd-networkd[1014]: cilium_host: Link UP May 10 01:44:12.405009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 01:44:12.402011 systemd-networkd[1014]: cilium_net: Link UP May 10 01:44:12.402021 systemd-networkd[1014]: cilium_net: Gained carrier May 10 01:44:12.402334 systemd-networkd[1014]: cilium_host: Gained carrier May 10 01:44:12.423382 systemd-networkd[1014]: cilium_host: Gained IPv6LL May 10 01:44:12.461725 env[1193]: time="2025-05-10T01:44:12.460915517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-grdsg,Uid:69fd6b6b-7e17-445d-bad2-6561600fb63e,Namespace:default,Attempt:0,}" May 10 01:44:12.517778 systemd-networkd[1014]: cilium_net: Gained IPv6LL May 10 01:44:12.589806 systemd-networkd[1014]: cilium_vxlan: Link UP May 10 01:44:12.589817 systemd-networkd[1014]: cilium_vxlan: Gained carrier May 10 01:44:12.634020 kubelet[1461]: E0510 01:44:12.633943 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 
10 01:44:12.957618 kernel: NET: Registered PF_ALG protocol family May 10 01:44:13.635031 kubelet[1461]: E0510 01:44:13.634802 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:13.945980 systemd-networkd[1014]: lxc_health: Link UP May 10 01:44:13.960503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 01:44:13.958936 systemd-networkd[1014]: lxc_health: Gained carrier May 10 01:44:14.372808 systemd-networkd[1014]: cilium_vxlan: Gained IPv6LL May 10 01:44:14.554164 systemd-networkd[1014]: lxc3a7f0ba6c2bb: Link UP May 10 01:44:14.562617 kernel: eth0: renamed from tmp8ac79 May 10 01:44:14.568176 systemd-networkd[1014]: lxc3a7f0ba6c2bb: Gained carrier May 10 01:44:14.568975 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3a7f0ba6c2bb: link becomes ready May 10 01:44:14.635854 kubelet[1461]: E0510 01:44:14.635790 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:15.333923 systemd-networkd[1014]: lxc_health: Gained IPv6LL May 10 01:44:15.637111 kubelet[1461]: E0510 01:44:15.636895 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:16.548956 systemd-networkd[1014]: lxc3a7f0ba6c2bb: Gained IPv6LL May 10 01:44:16.637436 kubelet[1461]: E0510 01:44:16.637365 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:17.639063 kubelet[1461]: E0510 01:44:17.638963 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:18.640142 kubelet[1461]: E0510 01:44:18.640070 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:19.641141 kubelet[1461]: E0510 01:44:19.641077 1461 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:19.718022 env[1193]: time="2025-05-10T01:44:19.717889014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:44:19.718022 env[1193]: time="2025-05-10T01:44:19.717967058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:44:19.718831 env[1193]: time="2025-05-10T01:44:19.718773926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:44:19.719148 env[1193]: time="2025-05-10T01:44:19.719095354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ac791fd7ca7edb9aa543007acd682181388314cb05cffbaf40e1e012718c77f pid=2529 runtime=io.containerd.runc.v2 May 10 01:44:19.754723 systemd[1]: Started cri-containerd-8ac791fd7ca7edb9aa543007acd682181388314cb05cffbaf40e1e012718c77f.scope. 
May 10 01:44:19.819531 env[1193]: time="2025-05-10T01:44:19.819408091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-grdsg,Uid:69fd6b6b-7e17-445d-bad2-6561600fb63e,Namespace:default,Attempt:0,} returns sandbox id \"8ac791fd7ca7edb9aa543007acd682181388314cb05cffbaf40e1e012718c77f\"" May 10 01:44:19.824898 env[1193]: time="2025-05-10T01:44:19.824861683Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 10 01:44:20.641641 kubelet[1461]: E0510 01:44:20.641570 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:21.642863 kubelet[1461]: E0510 01:44:21.642810 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:22.643999 kubelet[1461]: E0510 01:44:22.643940 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:23.644799 kubelet[1461]: E0510 01:44:23.644737 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:24.016641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount619463818.mount: Deactivated successfully. 
May 10 01:44:24.645822 kubelet[1461]: E0510 01:44:24.645757 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:25.645996 kubelet[1461]: E0510 01:44:25.645910 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:26.467780 env[1193]: time="2025-05-10T01:44:26.467700800Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:26.469768 env[1193]: time="2025-05-10T01:44:26.469730956Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:26.472772 env[1193]: time="2025-05-10T01:44:26.472709517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:26.481493 env[1193]: time="2025-05-10T01:44:26.480962827Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:26.481641 env[1193]: time="2025-05-10T01:44:26.481603414Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 10 01:44:26.485696 env[1193]: time="2025-05-10T01:44:26.485656600Z" level=info msg="CreateContainer within sandbox \"8ac791fd7ca7edb9aa543007acd682181388314cb05cffbaf40e1e012718c77f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 10 01:44:26.499824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3194388986.mount: 
Deactivated successfully. May 10 01:44:26.507237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1974570634.mount: Deactivated successfully. May 10 01:44:26.511515 env[1193]: time="2025-05-10T01:44:26.511462361Z" level=info msg="CreateContainer within sandbox \"8ac791fd7ca7edb9aa543007acd682181388314cb05cffbaf40e1e012718c77f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e677d35c8a2ab736d2a37d74f49167f7c5f39919fd243f34d5d8f74ddf03d334\"" May 10 01:44:26.512659 env[1193]: time="2025-05-10T01:44:26.512564895Z" level=info msg="StartContainer for \"e677d35c8a2ab736d2a37d74f49167f7c5f39919fd243f34d5d8f74ddf03d334\"" May 10 01:44:26.540064 systemd[1]: Started cri-containerd-e677d35c8a2ab736d2a37d74f49167f7c5f39919fd243f34d5d8f74ddf03d334.scope. May 10 01:44:26.584666 env[1193]: time="2025-05-10T01:44:26.583948564Z" level=info msg="StartContainer for \"e677d35c8a2ab736d2a37d74f49167f7c5f39919fd243f34d5d8f74ddf03d334\" returns successfully" May 10 01:44:26.647822 kubelet[1461]: E0510 01:44:26.647733 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:27.611053 kubelet[1461]: E0510 01:44:27.610995 1461 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:27.648986 kubelet[1461]: E0510 01:44:27.648915 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:28.649473 kubelet[1461]: E0510 01:44:28.649419 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:29.650890 kubelet[1461]: E0510 01:44:29.650820 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:30.652306 kubelet[1461]: E0510 01:44:30.652241 1461 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:31.653722 kubelet[1461]: E0510 01:44:31.653663 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:32.654710 kubelet[1461]: E0510 01:44:32.654648 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:33.655896 kubelet[1461]: E0510 01:44:33.655835 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:33.792562 kubelet[1461]: I0510 01:44:33.792446 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-grdsg" podStartSLOduration=15.132944951 podStartE2EDuration="21.792409277s" podCreationTimestamp="2025-05-10 01:44:12 +0000 UTC" firstStartedPulling="2025-05-10 01:44:19.824136566 +0000 UTC m=+32.948655889" lastFinishedPulling="2025-05-10 01:44:26.483600878 +0000 UTC m=+39.608120215" observedRunningTime="2025-05-10 01:44:26.936246061 +0000 UTC m=+40.060765407" watchObservedRunningTime="2025-05-10 01:44:33.792409277 +0000 UTC m=+46.916928615" May 10 01:44:33.793232 kubelet[1461]: I0510 01:44:33.793142 1461 topology_manager.go:215] "Topology Admit Handler" podUID="0241ca3c-db1c-4cbf-8216-7e224bffcf20" podNamespace="default" podName="nfs-server-provisioner-0" May 10 01:44:33.801318 systemd[1]: Created slice kubepods-besteffort-pod0241ca3c_db1c_4cbf_8216_7e224bffcf20.slice. 
May 10 01:44:33.883412 kubelet[1461]: I0510 01:44:33.883354 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldwb\" (UniqueName: \"kubernetes.io/projected/0241ca3c-db1c-4cbf-8216-7e224bffcf20-kube-api-access-dldwb\") pod \"nfs-server-provisioner-0\" (UID: \"0241ca3c-db1c-4cbf-8216-7e224bffcf20\") " pod="default/nfs-server-provisioner-0" May 10 01:44:33.883755 kubelet[1461]: I0510 01:44:33.883717 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0241ca3c-db1c-4cbf-8216-7e224bffcf20-data\") pod \"nfs-server-provisioner-0\" (UID: \"0241ca3c-db1c-4cbf-8216-7e224bffcf20\") " pod="default/nfs-server-provisioner-0" May 10 01:44:34.106713 env[1193]: time="2025-05-10T01:44:34.106555009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0241ca3c-db1c-4cbf-8216-7e224bffcf20,Namespace:default,Attempt:0,}" May 10 01:44:34.169012 systemd-networkd[1014]: lxcf838e3d5642d: Link UP May 10 01:44:34.180800 kernel: eth0: renamed from tmp489ef May 10 01:44:34.193613 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 01:44:34.193745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf838e3d5642d: link becomes ready May 10 01:44:34.193958 systemd-networkd[1014]: lxcf838e3d5642d: Gained carrier May 10 01:44:34.393428 env[1193]: time="2025-05-10T01:44:34.392785417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:44:34.393810 env[1193]: time="2025-05-10T01:44:34.393745864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:44:34.393916 env[1193]: time="2025-05-10T01:44:34.393846112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:44:34.394150 env[1193]: time="2025-05-10T01:44:34.394095969Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df pid=2665 runtime=io.containerd.runc.v2 May 10 01:44:34.417112 systemd[1]: Started cri-containerd-489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df.scope. May 10 01:44:34.489712 env[1193]: time="2025-05-10T01:44:34.489649114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0241ca3c-db1c-4cbf-8216-7e224bffcf20,Namespace:default,Attempt:0,} returns sandbox id \"489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df\"" May 10 01:44:34.492791 env[1193]: time="2025-05-10T01:44:34.492751832Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 10 01:44:34.657837 kubelet[1461]: E0510 01:44:34.656977 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:35.000008 systemd[1]: run-containerd-runc-k8s.io-489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df-runc.I0ZTUu.mount: Deactivated successfully. 
May 10 01:44:35.658066 kubelet[1461]: E0510 01:44:35.657986 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:35.813145 systemd-networkd[1014]: lxcf838e3d5642d: Gained IPv6LL May 10 01:44:36.658674 kubelet[1461]: E0510 01:44:36.658591 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:37.659473 kubelet[1461]: E0510 01:44:37.659380 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:38.456806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689603614.mount: Deactivated successfully. May 10 01:44:38.660400 kubelet[1461]: E0510 01:44:38.660291 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:39.661559 kubelet[1461]: E0510 01:44:39.661485 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:40.661740 kubelet[1461]: E0510 01:44:40.661681 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:41.662315 kubelet[1461]: E0510 01:44:41.662253 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:44:41.853602 env[1193]: time="2025-05-10T01:44:41.853527587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:41.861632 env[1193]: time="2025-05-10T01:44:41.861571837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 10 01:44:41.863884 env[1193]: time="2025-05-10T01:44:41.863847373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:41.866000 env[1193]: time="2025-05-10T01:44:41.865964321Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:44:41.867212 env[1193]: time="2025-05-10T01:44:41.867167421Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 10 01:44:41.871255 env[1193]: time="2025-05-10T01:44:41.871181728Z" level=info msg="CreateContainer within sandbox \"489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 10 01:44:41.883512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788259546.mount: Deactivated successfully. May 10 01:44:41.893494 env[1193]: time="2025-05-10T01:44:41.893441466Z" level=info msg="CreateContainer within sandbox \"489ef08cb5d48a66a7d00c87e2a6261155a1728d8fce692b9b73a6c841c9f7df\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f66e0666d93fc630a809834ffdf03e18ab8ca021edce8ed5313a07d6a139adaa\"" May 10 01:44:41.894346 env[1193]: time="2025-05-10T01:44:41.894291657Z" level=info msg="StartContainer for \"f66e0666d93fc630a809834ffdf03e18ab8ca021edce8ed5313a07d6a139adaa\"" May 10 01:44:41.930363 systemd[1]: Started cri-containerd-f66e0666d93fc630a809834ffdf03e18ab8ca021edce8ed5313a07d6a139adaa.scope. 
May 10 01:44:41.984309 env[1193]: time="2025-05-10T01:44:41.984254223Z" level=info msg="StartContainer for \"f66e0666d93fc630a809834ffdf03e18ab8ca021edce8ed5313a07d6a139adaa\" returns successfully"
May 10 01:44:42.663344 kubelet[1461]: E0510 01:44:42.663268 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:43.663837 kubelet[1461]: E0510 01:44:43.663471 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:44.664004 kubelet[1461]: E0510 01:44:44.663870 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:45.664996 kubelet[1461]: E0510 01:44:45.664912 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:46.666663 kubelet[1461]: E0510 01:44:46.666568 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:47.610971 kubelet[1461]: E0510 01:44:47.610897 1461 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:47.666867 kubelet[1461]: E0510 01:44:47.666813 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:48.667134 kubelet[1461]: E0510 01:44:48.667042 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:49.668201 kubelet[1461]: E0510 01:44:49.668107 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:50.669119 kubelet[1461]: E0510 01:44:50.669059 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:51.670179 kubelet[1461]: E0510 01:44:51.670093 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:52.219229 kubelet[1461]: I0510 01:44:52.219122 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.841805673 podStartE2EDuration="19.219098605s" podCreationTimestamp="2025-05-10 01:44:33 +0000 UTC" firstStartedPulling="2025-05-10 01:44:34.491822158 +0000 UTC m=+47.616341485" lastFinishedPulling="2025-05-10 01:44:41.869115082 +0000 UTC m=+54.993634417" observedRunningTime="2025-05-10 01:44:42.984271731 +0000 UTC m=+56.108791072" watchObservedRunningTime="2025-05-10 01:44:52.219098605 +0000 UTC m=+65.343617943"
May 10 01:44:52.219572 kubelet[1461]: I0510 01:44:52.219322 1461 topology_manager.go:215] "Topology Admit Handler" podUID="4a5c7baf-75c1-430a-ab0f-a26cc595362b" podNamespace="default" podName="test-pod-1"
May 10 01:44:52.226320 systemd[1]: Created slice kubepods-besteffort-pod4a5c7baf_75c1_430a_ab0f_a26cc595362b.slice.
May 10 01:44:52.313804 kubelet[1461]: I0510 01:44:52.313738 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgslk\" (UniqueName: \"kubernetes.io/projected/4a5c7baf-75c1-430a-ab0f-a26cc595362b-kube-api-access-fgslk\") pod \"test-pod-1\" (UID: \"4a5c7baf-75c1-430a-ab0f-a26cc595362b\") " pod="default/test-pod-1"
May 10 01:44:52.314128 kubelet[1461]: I0510 01:44:52.314086 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df220d74-1612-465a-9177-e3c3fae7948e\" (UniqueName: \"kubernetes.io/nfs/4a5c7baf-75c1-430a-ab0f-a26cc595362b-pvc-df220d74-1612-465a-9177-e3c3fae7948e\") pod \"test-pod-1\" (UID: \"4a5c7baf-75c1-430a-ab0f-a26cc595362b\") " pod="default/test-pod-1"
May 10 01:44:52.460653 kernel: FS-Cache: Loaded
May 10 01:44:52.518781 kernel: RPC: Registered named UNIX socket transport module.
May 10 01:44:52.519008 kernel: RPC: Registered udp transport module.
May 10 01:44:52.519065 kernel: RPC: Registered tcp transport module.
May 10 01:44:52.520010 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 10 01:44:52.597638 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 10 01:44:52.670750 kubelet[1461]: E0510 01:44:52.670687 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:52.796768 kernel: NFS: Registering the id_resolver key type
May 10 01:44:52.797183 kernel: Key type id_resolver registered
May 10 01:44:52.797246 kernel: Key type id_legacy registered
May 10 01:44:52.853595 nfsidmap[2793]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
May 10 01:44:52.860753 nfsidmap[2796]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
May 10 01:44:53.132096 env[1193]: time="2025-05-10T01:44:53.131026379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4a5c7baf-75c1-430a-ab0f-a26cc595362b,Namespace:default,Attempt:0,}"
May 10 01:44:53.184484 systemd-networkd[1014]: lxc1c64f86d299f: Link UP
May 10 01:44:53.194736 kernel: eth0: renamed from tmp6139d
May 10 01:44:53.202678 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 10 01:44:53.202773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1c64f86d299f: link becomes ready
May 10 01:44:53.202849 systemd-networkd[1014]: lxc1c64f86d299f: Gained carrier
May 10 01:44:53.436342 env[1193]: time="2025-05-10T01:44:53.435629407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 01:44:53.436727 env[1193]: time="2025-05-10T01:44:53.436657990Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 01:44:53.436978 env[1193]: time="2025-05-10T01:44:53.436893313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 01:44:53.437531 env[1193]: time="2025-05-10T01:44:53.437473273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6139d7f916931f652a91482f1732d1b5bd3e15e81343904301a3ec4eaaaad906 pid=2833 runtime=io.containerd.runc.v2
May 10 01:44:53.465123 systemd[1]: Started cri-containerd-6139d7f916931f652a91482f1732d1b5bd3e15e81343904301a3ec4eaaaad906.scope.
May 10 01:44:53.555994 env[1193]: time="2025-05-10T01:44:53.555935975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4a5c7baf-75c1-430a-ab0f-a26cc595362b,Namespace:default,Attempt:0,} returns sandbox id \"6139d7f916931f652a91482f1732d1b5bd3e15e81343904301a3ec4eaaaad906\""
May 10 01:44:53.559778 env[1193]: time="2025-05-10T01:44:53.559743122Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 10 01:44:53.672361 kubelet[1461]: E0510 01:44:53.672249 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:54.021085 env[1193]: time="2025-05-10T01:44:54.021005889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 01:44:54.022745 env[1193]: time="2025-05-10T01:44:54.022711872Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 01:44:54.024827 env[1193]: time="2025-05-10T01:44:54.024794908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 01:44:54.027164 env[1193]: time="2025-05-10T01:44:54.027121286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 01:44:54.028343 env[1193]: time="2025-05-10T01:44:54.028280837Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\""
May 10 01:44:54.038814 env[1193]: time="2025-05-10T01:44:54.038729130Z" level=info msg="CreateContainer within sandbox \"6139d7f916931f652a91482f1732d1b5bd3e15e81343904301a3ec4eaaaad906\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 10 01:44:54.056905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577094219.mount: Deactivated successfully.
May 10 01:44:54.063879 env[1193]: time="2025-05-10T01:44:54.063808038Z" level=info msg="CreateContainer within sandbox \"6139d7f916931f652a91482f1732d1b5bd3e15e81343904301a3ec4eaaaad906\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"174760722015579cb004d38352c3e744ed1f98d0d1cc5cae0bc33ccf61bd91fd\""
May 10 01:44:54.065137 env[1193]: time="2025-05-10T01:44:54.065102620Z" level=info msg="StartContainer for \"174760722015579cb004d38352c3e744ed1f98d0d1cc5cae0bc33ccf61bd91fd\""
May 10 01:44:54.089125 systemd[1]: Started cri-containerd-174760722015579cb004d38352c3e744ed1f98d0d1cc5cae0bc33ccf61bd91fd.scope.
May 10 01:44:54.137217 env[1193]: time="2025-05-10T01:44:54.137150573Z" level=info msg="StartContainer for \"174760722015579cb004d38352c3e744ed1f98d0d1cc5cae0bc33ccf61bd91fd\" returns successfully"
May 10 01:44:54.673554 kubelet[1461]: E0510 01:44:54.673413 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:54.692948 systemd-networkd[1014]: lxc1c64f86d299f: Gained IPv6LL
May 10 01:44:55.674497 kubelet[1461]: E0510 01:44:55.674414 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:56.676406 kubelet[1461]: E0510 01:44:56.676296 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:57.677863 kubelet[1461]: E0510 01:44:57.677771 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:58.679012 kubelet[1461]: E0510 01:44:58.678943 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:44:59.680676 kubelet[1461]: E0510 01:44:59.680586 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:45:00.682239 kubelet[1461]: E0510 01:45:00.682162 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:45:01.682839 kubelet[1461]: E0510 01:45:01.682765 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:45:02.213445 kubelet[1461]: I0510 01:45:02.213348 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=26.741752287 podStartE2EDuration="27.213309182s" podCreationTimestamp="2025-05-10 01:44:35 +0000 UTC" firstStartedPulling="2025-05-10 01:44:53.559016188 +0000 UTC m=+66.683535518" lastFinishedPulling="2025-05-10 01:44:54.030573086 +0000 UTC m=+67.155092413" observedRunningTime="2025-05-10 01:44:55.012101492 +0000 UTC m=+68.136620829" watchObservedRunningTime="2025-05-10 01:45:02.213309182 +0000 UTC m=+75.337828519"
May 10 01:45:02.241398 systemd[1]: run-containerd-runc-k8s.io-9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844-runc.mP8VgJ.mount: Deactivated successfully.
May 10 01:45:02.264367 env[1193]: time="2025-05-10T01:45:02.264248149Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 01:45:02.271957 env[1193]: time="2025-05-10T01:45:02.271890667Z" level=info msg="StopContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" with timeout 2 (s)"
May 10 01:45:02.272459 env[1193]: time="2025-05-10T01:45:02.272423919Z" level=info msg="Stop container \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" with signal terminated"
May 10 01:45:02.285016 systemd-networkd[1014]: lxc_health: Link DOWN
May 10 01:45:02.285027 systemd-networkd[1014]: lxc_health: Lost carrier
May 10 01:45:02.325381 systemd[1]: cri-containerd-9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844.scope: Deactivated successfully.
May 10 01:45:02.325811 systemd[1]: cri-containerd-9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844.scope: Consumed 9.393s CPU time.
May 10 01:45:02.352110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844-rootfs.mount: Deactivated successfully.
May 10 01:45:02.364165 env[1193]: time="2025-05-10T01:45:02.364106111Z" level=info msg="shim disconnected" id=9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844
May 10 01:45:02.364588 env[1193]: time="2025-05-10T01:45:02.364533296Z" level=warning msg="cleaning up after shim disconnected" id=9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844 namespace=k8s.io
May 10 01:45:02.364734 env[1193]: time="2025-05-10T01:45:02.364705382Z" level=info msg="cleaning up dead shim"
May 10 01:45:02.376245 env[1193]: time="2025-05-10T01:45:02.376195054Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2966 runtime=io.containerd.runc.v2\n"
May 10 01:45:02.378490 env[1193]: time="2025-05-10T01:45:02.378449178Z" level=info msg="StopContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" returns successfully"
May 10 01:45:02.380261 env[1193]: time="2025-05-10T01:45:02.380220541Z" level=info msg="StopPodSandbox for \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\""
May 10 01:45:02.380346 env[1193]: time="2025-05-10T01:45:02.380303430Z" level=info msg="Container to stop \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 01:45:02.380346 env[1193]: time="2025-05-10T01:45:02.380332991Z" level=info msg="Container to stop \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 01:45:02.380449 env[1193]: time="2025-05-10T01:45:02.380351728Z" level=info msg="Container to stop \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 01:45:02.380449 env[1193]: time="2025-05-10T01:45:02.380369107Z" level=info msg="Container to stop \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 01:45:02.380449 env[1193]: time="2025-05-10T01:45:02.380392982Z" level=info msg="Container to stop \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 10 01:45:02.382784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938-shm.mount: Deactivated successfully.
May 10 01:45:02.391355 systemd[1]: cri-containerd-b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938.scope: Deactivated successfully.
May 10 01:45:02.420467 env[1193]: time="2025-05-10T01:45:02.420399045Z" level=info msg="shim disconnected" id=b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938
May 10 01:45:02.421362 env[1193]: time="2025-05-10T01:45:02.421331707Z" level=warning msg="cleaning up after shim disconnected" id=b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938 namespace=k8s.io
May 10 01:45:02.421501 env[1193]: time="2025-05-10T01:45:02.421473001Z" level=info msg="cleaning up dead shim"
May 10 01:45:02.432826 env[1193]: time="2025-05-10T01:45:02.432776751Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2997 runtime=io.containerd.runc.v2\n"
May 10 01:45:02.433835 env[1193]: time="2025-05-10T01:45:02.433786800Z" level=info msg="TearDown network for sandbox \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" successfully"
May 10 01:45:02.434031 env[1193]: time="2025-05-10T01:45:02.433987546Z" level=info msg="StopPodSandbox for \"b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938\" returns successfully"
May 10 01:45:02.495065 kubelet[1461]: I0510 01:45:02.494871 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-cgroup\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.495065 kubelet[1461]: I0510 01:45:02.494972 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-lib-modules\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.495065 kubelet[1461]: I0510 01:45:02.495028 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79f32ecc-4f2e-48d1-972f-dfca021e4899-clustermesh-secrets\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.495904 kubelet[1461]: I0510 01:45:02.495806 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.496138 kubelet[1461]: I0510 01:45:02.496082 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.497373 kubelet[1461]: I0510 01:45:02.497336 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-net\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497490 kubelet[1461]: I0510 01:45:02.497397 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-config-path\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497490 kubelet[1461]: I0510 01:45:02.497430 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cni-path\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497490 kubelet[1461]: I0510 01:45:02.497474 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-etc-cni-netd\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497498 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-run\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497534 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-kernel\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497577 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-hubble-tls\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497624 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-xtables-lock\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497663 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgwlz\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-kube-api-access-bgwlz\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.497717 kubelet[1461]: I0510 01:45:02.497713 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-bpf-maps\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.498059 kubelet[1461]: I0510 01:45:02.497741 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-hostproc\") pod \"79f32ecc-4f2e-48d1-972f-dfca021e4899\" (UID: \"79f32ecc-4f2e-48d1-972f-dfca021e4899\") "
May 10 01:45:02.498059 kubelet[1461]: I0510 01:45:02.497806 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-cgroup\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.498059 kubelet[1461]: I0510 01:45:02.497835 1461 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-lib-modules\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.498059 kubelet[1461]: I0510 01:45:02.497879 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-hostproc" (OuterVolumeSpecName: "hostproc") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.498059 kubelet[1461]: I0510 01:45:02.497930 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.501386 kubelet[1461]: I0510 01:45:02.501344 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.501595 kubelet[1461]: I0510 01:45:02.501548 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cni-path" (OuterVolumeSpecName: "cni-path") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.501757 kubelet[1461]: I0510 01:45:02.501605 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 01:45:02.501976 kubelet[1461]: I0510 01:45:02.501930 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.502152 kubelet[1461]: I0510 01:45:02.502117 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.502754 kubelet[1461]: I0510 01:45:02.502726 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.502934 kubelet[1461]: I0510 01:45:02.502907 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 01:45:02.505881 kubelet[1461]: I0510 01:45:02.505846 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 01:45:02.506303 kubelet[1461]: I0510 01:45:02.506272 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f32ecc-4f2e-48d1-972f-dfca021e4899-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 01:45:02.508924 kubelet[1461]: I0510 01:45:02.508882 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-kube-api-access-bgwlz" (OuterVolumeSpecName: "kube-api-access-bgwlz") pod "79f32ecc-4f2e-48d1-972f-dfca021e4899" (UID: "79f32ecc-4f2e-48d1-972f-dfca021e4899"). InnerVolumeSpecName "kube-api-access-bgwlz". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 01:45:02.598685 kubelet[1461]: I0510 01:45:02.598626 1461 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-xtables-lock\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.598988 kubelet[1461]: I0510 01:45:02.598959 1461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bgwlz\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-kube-api-access-bgwlz\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599174 kubelet[1461]: I0510 01:45:02.599142 1461 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-bpf-maps\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599309 kubelet[1461]: I0510 01:45:02.599285 1461 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-hostproc\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599451 kubelet[1461]: I0510 01:45:02.599428 1461 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/79f32ecc-4f2e-48d1-972f-dfca021e4899-clustermesh-secrets\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599669 kubelet[1461]: I0510 01:45:02.599648 1461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-net\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599828 kubelet[1461]: I0510 01:45:02.599805 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-config-path\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.599979 kubelet[1461]: I0510 01:45:02.599956 1461 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cni-path\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.600124 kubelet[1461]: I0510 01:45:02.600093 1461 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-etc-cni-netd\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.600335 kubelet[1461]: I0510 01:45:02.600313 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-cilium-run\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.600493 kubelet[1461]: I0510 01:45:02.600471 1461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/79f32ecc-4f2e-48d1-972f-dfca021e4899-host-proc-sys-kernel\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.600675 kubelet[1461]: I0510 01:45:02.600652 1461 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/79f32ecc-4f2e-48d1-972f-dfca021e4899-hubble-tls\") on node \"10.230.47.106\" DevicePath \"\""
May 10 01:45:02.684570 kubelet[1461]: E0510 01:45:02.684504 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 10 01:45:02.730006 kubelet[1461]: E0510 01:45:02.729923 1461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 01:45:03.020036 kubelet[1461]: I0510 01:45:03.019989 1461 scope.go:117] "RemoveContainer" containerID="9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844"
May 10 01:45:03.023239 env[1193]: time="2025-05-10T01:45:03.022656109Z" level=info msg="RemoveContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\""
May 10 01:45:03.026701 env[1193]: time="2025-05-10T01:45:03.026662924Z" level=info msg="RemoveContainer for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" returns successfully"
May 10 01:45:03.027117 kubelet[1461]: I0510 01:45:03.027078 1461 scope.go:117] "RemoveContainer" containerID="f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b"
May 10 01:45:03.030675 systemd[1]: Removed slice kubepods-burstable-pod79f32ecc_4f2e_48d1_972f_dfca021e4899.slice.
May 10 01:45:03.030831 systemd[1]: kubepods-burstable-pod79f32ecc_4f2e_48d1_972f_dfca021e4899.slice: Consumed 9.559s CPU time.
May 10 01:45:03.031562 env[1193]: time="2025-05-10T01:45:03.031503132Z" level=info msg="RemoveContainer for \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\"" May 10 01:45:03.035272 env[1193]: time="2025-05-10T01:45:03.035216493Z" level=info msg="RemoveContainer for \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\" returns successfully" May 10 01:45:03.037279 kubelet[1461]: I0510 01:45:03.037233 1461 scope.go:117] "RemoveContainer" containerID="e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633" May 10 01:45:03.038952 env[1193]: time="2025-05-10T01:45:03.038908890Z" level=info msg="RemoveContainer for \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\"" May 10 01:45:03.041940 env[1193]: time="2025-05-10T01:45:03.041905030Z" level=info msg="RemoveContainer for \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\" returns successfully" May 10 01:45:03.042287 kubelet[1461]: I0510 01:45:03.042257 1461 scope.go:117] "RemoveContainer" containerID="082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e" May 10 01:45:03.046045 env[1193]: time="2025-05-10T01:45:03.046003248Z" level=info msg="RemoveContainer for \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\"" May 10 01:45:03.059860 env[1193]: time="2025-05-10T01:45:03.059774221Z" level=info msg="RemoveContainer for \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\" returns successfully" May 10 01:45:03.060511 kubelet[1461]: I0510 01:45:03.060478 1461 scope.go:117] "RemoveContainer" containerID="37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28" May 10 01:45:03.062531 env[1193]: time="2025-05-10T01:45:03.062159933Z" level=info msg="RemoveContainer for \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\"" May 10 01:45:03.065045 env[1193]: time="2025-05-10T01:45:03.065006712Z" level=info msg="RemoveContainer for 
\"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\" returns successfully" May 10 01:45:03.065350 kubelet[1461]: I0510 01:45:03.065319 1461 scope.go:117] "RemoveContainer" containerID="9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844" May 10 01:45:03.065772 env[1193]: time="2025-05-10T01:45:03.065615380Z" level=error msg="ContainerStatus for \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\": not found" May 10 01:45:03.066046 kubelet[1461]: E0510 01:45:03.066012 1461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\": not found" containerID="9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844" May 10 01:45:03.066307 kubelet[1461]: I0510 01:45:03.066189 1461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844"} err="failed to get container status \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\": rpc error: code = NotFound desc = an error occurred when try to find container \"9962777ae97f6bf81d49198fbbcfc980a5ae68c427a5efb278d700f5d86e2844\": not found" May 10 01:45:03.066443 kubelet[1461]: I0510 01:45:03.066419 1461 scope.go:117] "RemoveContainer" containerID="f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b" May 10 01:45:03.066847 env[1193]: time="2025-05-10T01:45:03.066789402Z" level=error msg="ContainerStatus for \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\": not found" May 10 01:45:03.067131 kubelet[1461]: E0510 01:45:03.067082 1461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\": not found" containerID="f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b" May 10 01:45:03.067264 kubelet[1461]: I0510 01:45:03.067233 1461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b"} err="failed to get container status \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f99c8539aba2b8e59d5be2a24a2b4e5f30ae489851f6dbfb6b0aab25cd278c8b\": not found" May 10 01:45:03.067380 kubelet[1461]: I0510 01:45:03.067357 1461 scope.go:117] "RemoveContainer" containerID="e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633" May 10 01:45:03.067720 env[1193]: time="2025-05-10T01:45:03.067663960Z" level=error msg="ContainerStatus for \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\": not found" May 10 01:45:03.067974 kubelet[1461]: E0510 01:45:03.067946 1461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\": not found" containerID="e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633" May 10 01:45:03.068135 kubelet[1461]: I0510 01:45:03.068092 1461 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633"} err="failed to get container status \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5ab81ef3e03e9b17c13de26067af1f966e9279250fd511d8266a6eaa3558633\": not found" May 10 01:45:03.068253 kubelet[1461]: I0510 01:45:03.068230 1461 scope.go:117] "RemoveContainer" containerID="082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e" May 10 01:45:03.068647 env[1193]: time="2025-05-10T01:45:03.068557287Z" level=error msg="ContainerStatus for \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\": not found" May 10 01:45:03.068849 kubelet[1461]: E0510 01:45:03.068821 1461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\": not found" containerID="082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e" May 10 01:45:03.068997 kubelet[1461]: I0510 01:45:03.068967 1461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e"} err="failed to get container status \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\": rpc error: code = NotFound desc = an error occurred when try to find container \"082c946cbac3a99cacd883ea90e6f531cb6e6a0d0ba234b63b13b309f7b8718e\": not found" May 10 01:45:03.069127 kubelet[1461]: I0510 01:45:03.069091 1461 scope.go:117] "RemoveContainer" containerID="37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28" May 10 01:45:03.069475 env[1193]: 
time="2025-05-10T01:45:03.069411662Z" level=error msg="ContainerStatus for \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\": not found" May 10 01:45:03.069783 kubelet[1461]: E0510 01:45:03.069749 1461 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\": not found" containerID="37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28" May 10 01:45:03.069863 kubelet[1461]: I0510 01:45:03.069795 1461 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28"} err="failed to get container status \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\": rpc error: code = NotFound desc = an error occurred when try to find container \"37d4b45e42ae7d7d48469cc82d3d265d62fb40c104b2d93f5c3d2526ccd9fe28\": not found" May 10 01:45:03.233020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9babb5455175c2a214c604a7d8aa42239c1b09878fd41cca8bced1f74b16938-rootfs.mount: Deactivated successfully. May 10 01:45:03.233191 systemd[1]: var-lib-kubelet-pods-79f32ecc\x2d4f2e\x2d48d1\x2d972f\x2ddfca021e4899-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbgwlz.mount: Deactivated successfully. May 10 01:45:03.233318 systemd[1]: var-lib-kubelet-pods-79f32ecc\x2d4f2e\x2d48d1\x2d972f\x2ddfca021e4899-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 10 01:45:03.233416 systemd[1]: var-lib-kubelet-pods-79f32ecc\x2d4f2e\x2d48d1\x2d972f\x2ddfca021e4899-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 10 01:45:03.685635 kubelet[1461]: E0510 01:45:03.685524 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:03.824108 kubelet[1461]: I0510 01:45:03.824033 1461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" path="/var/lib/kubelet/pods/79f32ecc-4f2e-48d1-972f-dfca021e4899/volumes" May 10 01:45:04.686726 kubelet[1461]: E0510 01:45:04.686568 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:05.687191 kubelet[1461]: E0510 01:45:05.687072 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:06.688219 kubelet[1461]: E0510 01:45:06.688072 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:07.404018 kubelet[1461]: I0510 01:45:07.403923 1461 topology_manager.go:215] "Topology Admit Handler" podUID="d164972a-4688-4b97-9a49-7d27868d347b" podNamespace="kube-system" podName="cilium-operator-599987898-wswk2" May 10 01:45:07.404250 kubelet[1461]: E0510 01:45:07.404092 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" containerName="mount-cgroup" May 10 01:45:07.404250 kubelet[1461]: E0510 01:45:07.404134 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" containerName="clean-cilium-state" May 10 01:45:07.404250 kubelet[1461]: E0510 01:45:07.404147 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" containerName="apply-sysctl-overwrites" May 10 01:45:07.404250 kubelet[1461]: E0510 01:45:07.404167 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" 
containerName="mount-bpf-fs" May 10 01:45:07.404250 kubelet[1461]: E0510 01:45:07.404214 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" containerName="cilium-agent" May 10 01:45:07.404536 kubelet[1461]: I0510 01:45:07.404287 1461 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f32ecc-4f2e-48d1-972f-dfca021e4899" containerName="cilium-agent" May 10 01:45:07.411896 kubelet[1461]: I0510 01:45:07.411859 1461 topology_manager.go:215] "Topology Admit Handler" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" podNamespace="kube-system" podName="cilium-t2sgx" May 10 01:45:07.412126 systemd[1]: Created slice kubepods-besteffort-podd164972a_4688_4b97_9a49_7d27868d347b.slice. May 10 01:45:07.419559 kubelet[1461]: W0510 01:45:07.419528 1461 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.47.106" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.419768 kubelet[1461]: E0510 01:45:07.419735 1461 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.230.47.106" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.420364 systemd[1]: Created slice kubepods-burstable-podb1b38921_e9c5_4efa_b3ca_50f9e8c186c9.slice. 
May 10 01:45:07.434116 kubelet[1461]: W0510 01:45:07.434062 1461 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.434116 kubelet[1461]: E0510 01:45:07.434108 1461 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.434116 kubelet[1461]: W0510 01:45:07.434062 1461 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.434490 kubelet[1461]: E0510 01:45:07.434136 1461 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.434715 kubelet[1461]: W0510 01:45:07.434688 1461 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.434854 kubelet[1461]: E0510 01:45:07.434831 1461 
reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.230.47.106" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.230.47.106' and this object May 10 01:45:07.534419 kubelet[1461]: I0510 01:45:07.534347 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-cgroup\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534419 kubelet[1461]: I0510 01:45:07.534416 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-kernel\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534718 kubelet[1461]: I0510 01:45:07.534455 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-lib-modules\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534718 kubelet[1461]: I0510 01:45:07.534484 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-clustermesh-secrets\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534718 kubelet[1461]: I0510 01:45:07.534510 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7mlfv\" (UniqueName: \"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-kube-api-access-7mlfv\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534718 kubelet[1461]: I0510 01:45:07.534536 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-run\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534718 kubelet[1461]: I0510 01:45:07.534568 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d164972a-4688-4b97-9a49-7d27868d347b-cilium-config-path\") pod \"cilium-operator-599987898-wswk2\" (UID: \"d164972a-4688-4b97-9a49-7d27868d347b\") " pod="kube-system/cilium-operator-599987898-wswk2" May 10 01:45:07.534968 kubelet[1461]: I0510 01:45:07.534624 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hubble-tls\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534968 kubelet[1461]: I0510 01:45:07.534651 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnnf9\" (UniqueName: \"kubernetes.io/projected/d164972a-4688-4b97-9a49-7d27868d347b-kube-api-access-gnnf9\") pod \"cilium-operator-599987898-wswk2\" (UID: \"d164972a-4688-4b97-9a49-7d27868d347b\") " pod="kube-system/cilium-operator-599987898-wswk2" May 10 01:45:07.534968 kubelet[1461]: I0510 01:45:07.534677 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-bpf-maps\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534968 kubelet[1461]: I0510 01:45:07.534701 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-config-path\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.534968 kubelet[1461]: I0510 01:45:07.534727 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-ipsec-secrets\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.535256 kubelet[1461]: I0510 01:45:07.534751 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-net\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.535256 kubelet[1461]: I0510 01:45:07.534775 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hostproc\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.535256 kubelet[1461]: I0510 01:45:07.534800 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cni-path\") pod \"cilium-t2sgx\" (UID: 
\"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.535256 kubelet[1461]: I0510 01:45:07.534823 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-etc-cni-netd\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.535256 kubelet[1461]: I0510 01:45:07.534862 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-xtables-lock\") pod \"cilium-t2sgx\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " pod="kube-system/cilium-t2sgx" May 10 01:45:07.611322 kubelet[1461]: E0510 01:45:07.611240 1461 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:07.689725 kubelet[1461]: E0510 01:45:07.688722 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:07.730972 kubelet[1461]: E0510 01:45:07.730918 1461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 01:45:08.617236 env[1193]: time="2025-05-10T01:45:08.617162587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wswk2,Uid:d164972a-4688-4b97-9a49-7d27868d347b,Namespace:kube-system,Attempt:0,}" May 10 01:45:08.628292 env[1193]: time="2025-05-10T01:45:08.628250008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2sgx,Uid:b1b38921-e9c5-4efa-b3ca-50f9e8c186c9,Namespace:kube-system,Attempt:0,}" May 10 01:45:08.638526 env[1193]: time="2025-05-10T01:45:08.638423224Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:45:08.638526 env[1193]: time="2025-05-10T01:45:08.638488068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:45:08.638862 env[1193]: time="2025-05-10T01:45:08.638504644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:45:08.639287 env[1193]: time="2025-05-10T01:45:08.639228056Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ccb84ea1c4aaf04147fbd455e3c63e189a9bdf7e245fe691c4de442a870f3116 pid=3027 runtime=io.containerd.runc.v2 May 10 01:45:08.663285 env[1193]: time="2025-05-10T01:45:08.663151225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:45:08.663285 env[1193]: time="2025-05-10T01:45:08.663243740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:45:08.663608 env[1193]: time="2025-05-10T01:45:08.663260977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:45:08.665053 env[1193]: time="2025-05-10T01:45:08.664993764Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b pid=3049 runtime=io.containerd.runc.v2 May 10 01:45:08.679198 systemd[1]: Started cri-containerd-ccb84ea1c4aaf04147fbd455e3c63e189a9bdf7e245fe691c4de442a870f3116.scope. 
May 10 01:45:08.689764 kubelet[1461]: E0510 01:45:08.689709 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:08.697960 systemd[1]: Started cri-containerd-b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b.scope. May 10 01:45:08.706656 systemd[1]: run-containerd-runc-k8s.io-b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b-runc.3Se5v4.mount: Deactivated successfully. May 10 01:45:08.758402 env[1193]: time="2025-05-10T01:45:08.758340924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t2sgx,Uid:b1b38921-e9c5-4efa-b3ca-50f9e8c186c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\"" May 10 01:45:08.764901 env[1193]: time="2025-05-10T01:45:08.764858173Z" level=info msg="CreateContainer within sandbox \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 01:45:08.786037 env[1193]: time="2025-05-10T01:45:08.785983622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-wswk2,Uid:d164972a-4688-4b97-9a49-7d27868d347b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccb84ea1c4aaf04147fbd455e3c63e189a9bdf7e245fe691c4de442a870f3116\"" May 10 01:45:08.786817 env[1193]: time="2025-05-10T01:45:08.786761941Z" level=info msg="CreateContainer within sandbox \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\"" May 10 01:45:08.787938 env[1193]: time="2025-05-10T01:45:08.787905655Z" level=info msg="StartContainer for \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\"" May 10 01:45:08.789490 env[1193]: time="2025-05-10T01:45:08.789452968Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 01:45:08.810670 systemd[1]: Started cri-containerd-c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791.scope. May 10 01:45:08.828100 systemd[1]: cri-containerd-c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791.scope: Deactivated successfully. May 10 01:45:08.848101 env[1193]: time="2025-05-10T01:45:08.848038671Z" level=info msg="shim disconnected" id=c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791 May 10 01:45:08.848101 env[1193]: time="2025-05-10T01:45:08.848102453Z" level=warning msg="cleaning up after shim disconnected" id=c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791 namespace=k8s.io May 10 01:45:08.848405 env[1193]: time="2025-05-10T01:45:08.848119662Z" level=info msg="cleaning up dead shim" May 10 01:45:08.858922 env[1193]: time="2025-05-10T01:45:08.858869245Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3126 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T01:45:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 01:45:08.859505 env[1193]: time="2025-05-10T01:45:08.859381666Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" May 10 01:45:08.860140 env[1193]: time="2025-05-10T01:45:08.859729520Z" level=error msg="Failed to pipe stdout of container \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\"" error="reading from a closed fifo" May 10 01:45:08.860330 env[1193]: time="2025-05-10T01:45:08.859841264Z" level=error msg="Failed to pipe stderr of container \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\"" 
error="reading from a closed fifo" May 10 01:45:08.861322 env[1193]: time="2025-05-10T01:45:08.861270353Z" level=error msg="StartContainer for \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" May 10 01:45:08.862343 kubelet[1461]: E0510 01:45:08.861665 1461 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791" May 10 01:45:08.862343 kubelet[1461]: E0510 01:45:08.861931 1461 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 01:45:08.862343 kubelet[1461]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 01:45:08.862343 kubelet[1461]: rm /hostbin/cilium-mount May 10 01:45:08.862712 kubelet[1461]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mlfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t2sgx_kube-system(b1b38921-e9c5-4efa-b3ca-50f9e8c186c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 01:45:08.862876 kubelet[1461]: E0510 01:45:08.861999 1461 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t2sgx" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" May 10 01:45:09.039892 env[1193]: time="2025-05-10T01:45:09.039730189Z" level=info msg="CreateContainer within sandbox \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" May 10 01:45:09.051422 env[1193]: time="2025-05-10T01:45:09.051368868Z" level=info msg="CreateContainer within sandbox \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\"" May 10 01:45:09.052417 env[1193]: time="2025-05-10T01:45:09.052383482Z" level=info msg="StartContainer for \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\"" May 10 01:45:09.073343 systemd[1]: Started cri-containerd-a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d.scope. May 10 01:45:09.090149 systemd[1]: cri-containerd-a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d.scope: Deactivated successfully. 
May 10 01:45:09.100164 env[1193]: time="2025-05-10T01:45:09.100102251Z" level=info msg="shim disconnected" id=a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d May 10 01:45:09.100467 env[1193]: time="2025-05-10T01:45:09.100435977Z" level=warning msg="cleaning up after shim disconnected" id=a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d namespace=k8s.io May 10 01:45:09.100638 env[1193]: time="2025-05-10T01:45:09.100610874Z" level=info msg="cleaning up dead shim" May 10 01:45:09.111855 env[1193]: time="2025-05-10T01:45:09.111756971Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3162 runtime=io.containerd.runc.v2\ntime=\"2025-05-10T01:45:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" May 10 01:45:09.112244 env[1193]: time="2025-05-10T01:45:09.112140512Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" May 10 01:45:09.114281 env[1193]: time="2025-05-10T01:45:09.114233175Z" level=error msg="Failed to pipe stdout of container \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\"" error="reading from a closed fifo" May 10 01:45:09.114458 env[1193]: time="2025-05-10T01:45:09.114418027Z" level=error msg="Failed to pipe stderr of container \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\"" error="reading from a closed fifo" May 10 01:45:09.115978 env[1193]: time="2025-05-10T01:45:09.115913125Z" level=error msg="StartContainer for \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" May 10 01:45:09.116675 kubelet[1461]: E0510 01:45:09.116237 1461 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d" May 10 01:45:09.116675 kubelet[1461]: E0510 01:45:09.116385 1461 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; May 10 01:45:09.116675 kubelet[1461]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; May 10 01:45:09.116675 kubelet[1461]: rm /hostbin/cilium-mount May 10 01:45:09.116944 kubelet[1461]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7mlfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-t2sgx_kube-system(b1b38921-e9c5-4efa-b3ca-50f9e8c186c9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown May 10 01:45:09.117098 kubelet[1461]: E0510 01:45:09.116430 1461 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-t2sgx" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" May 10 01:45:09.491432 kubelet[1461]: I0510 01:45:09.491330 1461 setters.go:580] "Node became not ready" node="10.230.47.106" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T01:45:09Z","lastTransitionTime":"2025-05-10T01:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 10 01:45:09.654247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount948504141.mount: Deactivated successfully. May 10 01:45:09.690233 kubelet[1461]: E0510 01:45:09.690156 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:10.041421 kubelet[1461]: I0510 01:45:10.041377 1461 scope.go:117] "RemoveContainer" containerID="c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791" May 10 01:45:10.044307 env[1193]: time="2025-05-10T01:45:10.043884672Z" level=info msg="StopPodSandbox for \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\"" May 10 01:45:10.044762 env[1193]: time="2025-05-10T01:45:10.044396369Z" level=info msg="Container to stop \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 01:45:10.044762 env[1193]: time="2025-05-10T01:45:10.044438445Z" level=info msg="Container to stop \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 01:45:10.047457 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b-shm.mount: Deactivated successfully. May 10 01:45:10.053999 env[1193]: time="2025-05-10T01:45:10.053945525Z" level=info msg="RemoveContainer for \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\"" May 10 01:45:10.056972 systemd[1]: cri-containerd-b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b.scope: Deactivated successfully. May 10 01:45:10.062285 env[1193]: time="2025-05-10T01:45:10.062177701Z" level=info msg="RemoveContainer for \"c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791\" returns successfully" May 10 01:45:10.085120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b-rootfs.mount: Deactivated successfully. May 10 01:45:10.090450 env[1193]: time="2025-05-10T01:45:10.090385119Z" level=info msg="shim disconnected" id=b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b May 10 01:45:10.090615 env[1193]: time="2025-05-10T01:45:10.090448522Z" level=warning msg="cleaning up after shim disconnected" id=b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b namespace=k8s.io May 10 01:45:10.090615 env[1193]: time="2025-05-10T01:45:10.090465539Z" level=info msg="cleaning up dead shim" May 10 01:45:10.102683 env[1193]: time="2025-05-10T01:45:10.102564109Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3192 runtime=io.containerd.runc.v2\n" May 10 01:45:10.103130 env[1193]: time="2025-05-10T01:45:10.103083086Z" level=info msg="TearDown network for sandbox \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" successfully" May 10 01:45:10.103201 env[1193]: time="2025-05-10T01:45:10.103127445Z" level=info msg="StopPodSandbox for \"b7d8d664eef40e0e8cf0e4164b6302b23f4760219275cf1fa6db103a6fd26f6b\" returns 
successfully" May 10 01:45:10.253155 kubelet[1461]: I0510 01:45:10.252437 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mlfv\" (UniqueName: \"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-kube-api-access-7mlfv\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253377 kubelet[1461]: I0510 01:45:10.253160 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hubble-tls\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253377 kubelet[1461]: I0510 01:45:10.253283 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-etc-cni-netd\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253377 kubelet[1461]: I0510 01:45:10.253309 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-run\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253377 kubelet[1461]: I0510 01:45:10.253331 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-lib-modules\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253377 kubelet[1461]: I0510 01:45:10.253376 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-clustermesh-secrets\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253406 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-ipsec-secrets\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253429 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-net\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253450 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cni-path\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253479 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-cgroup\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253521 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-xtables-lock\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.253693 kubelet[1461]: I0510 01:45:10.253546 1461 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-kernel\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.254009 kubelet[1461]: I0510 01:45:10.253570 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-bpf-maps\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.254009 kubelet[1461]: I0510 01:45:10.253618 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-config-path\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.254009 kubelet[1461]: I0510 01:45:10.253641 1461 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hostproc\") pod \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\" (UID: \"b1b38921-e9c5-4efa-b3ca-50f9e8c186c9\") " May 10 01:45:10.254009 kubelet[1461]: I0510 01:45:10.253712 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hostproc" (OuterVolumeSpecName: "hostproc") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.259770 systemd[1]: var-lib-kubelet-pods-b1b38921\x2de9c5\x2d4efa\x2db3ca\x2d50f9e8c186c9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 10 01:45:10.259907 systemd[1]: var-lib-kubelet-pods-b1b38921\x2de9c5\x2d4efa\x2db3ca\x2d50f9e8c186c9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7mlfv.mount: Deactivated successfully. May 10 01:45:10.265296 kubelet[1461]: I0510 01:45:10.265258 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.265530 kubelet[1461]: I0510 01:45:10.265501 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.265719 kubelet[1461]: I0510 01:45:10.265692 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.266492 kubelet[1461]: I0510 01:45:10.266462 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-kube-api-access-7mlfv" (OuterVolumeSpecName: "kube-api-access-7mlfv") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "kube-api-access-7mlfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 01:45:10.266678 kubelet[1461]: I0510 01:45:10.266648 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.266835 kubelet[1461]: I0510 01:45:10.266808 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.266988 kubelet[1461]: I0510 01:45:10.266962 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cni-path" (OuterVolumeSpecName: "cni-path") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.267123 kubelet[1461]: I0510 01:45:10.267098 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.267298 kubelet[1461]: I0510 01:45:10.267268 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 10 01:45:10.267705 kubelet[1461]: I0510 01:45:10.267679 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.267867 kubelet[1461]: I0510 01:45:10.267825 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 10 01:45:10.272431 kubelet[1461]: I0510 01:45:10.272350 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 10 01:45:10.273764 kubelet[1461]: I0510 01:45:10.273719 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 01:45:10.276056 kubelet[1461]: I0510 01:45:10.276011 1461 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" (UID: "b1b38921-e9c5-4efa-b3ca-50f9e8c186c9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 10 01:45:10.354488 kubelet[1461]: I0510 01:45:10.354420 1461 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-7mlfv\" (UniqueName: \"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-kube-api-access-7mlfv\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354488 kubelet[1461]: I0510 01:45:10.354466 1461 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-etc-cni-netd\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354488 kubelet[1461]: I0510 01:45:10.354482 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-run\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354488 kubelet[1461]: I0510 01:45:10.354496 1461 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hubble-tls\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354509 1461 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-clustermesh-secrets\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354523 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-ipsec-secrets\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354535 1461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-net\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354548 1461 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cni-path\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354561 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-cgroup\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354573 1461 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-lib-modules\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354607 1461 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-host-proc-sys-kernel\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.354905 kubelet[1461]: I0510 01:45:10.354621 1461 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-bpf-maps\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.355645 kubelet[1461]: I0510 01:45:10.354633 1461 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-cilium-config-path\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.355645 kubelet[1461]: I0510 01:45:10.354645 1461 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-hostproc\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.355645 kubelet[1461]: I0510 01:45:10.354657 1461 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9-xtables-lock\") on node \"10.230.47.106\" DevicePath \"\"" May 10 01:45:10.654464 systemd[1]: var-lib-kubelet-pods-b1b38921\x2de9c5\x2d4efa\x2db3ca\x2d50f9e8c186c9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 10 01:45:10.654671 systemd[1]: var-lib-kubelet-pods-b1b38921\x2de9c5\x2d4efa\x2db3ca\x2d50f9e8c186c9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 10 01:45:10.691141 kubelet[1461]: E0510 01:45:10.691038 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:10.712367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582468444.mount: Deactivated successfully. 
May 10 01:45:11.045301 kubelet[1461]: I0510 01:45:11.045169 1461 scope.go:117] "RemoveContainer" containerID="a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d" May 10 01:45:11.051065 systemd[1]: Removed slice kubepods-burstable-podb1b38921_e9c5_4efa_b3ca_50f9e8c186c9.slice. May 10 01:45:11.052982 env[1193]: time="2025-05-10T01:45:11.052939256Z" level=info msg="RemoveContainer for \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\"" May 10 01:45:11.056851 env[1193]: time="2025-05-10T01:45:11.056816034Z" level=info msg="RemoveContainer for \"a70548138b6c4c332bf49c689a1db98a733ce02896ca2c17316d2d05ac1ae57d\" returns successfully" May 10 01:45:11.135612 kubelet[1461]: I0510 01:45:11.135524 1461 topology_manager.go:215] "Topology Admit Handler" podUID="b174826a-671d-455c-b1ec-e252ac3882c1" podNamespace="kube-system" podName="cilium-srhwc" May 10 01:45:11.136015 kubelet[1461]: E0510 01:45:11.135987 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" containerName="mount-cgroup" May 10 01:45:11.136167 kubelet[1461]: I0510 01:45:11.136141 1461 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" containerName="mount-cgroup" May 10 01:45:11.136316 kubelet[1461]: I0510 01:45:11.136293 1461 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" containerName="mount-cgroup" May 10 01:45:11.136452 kubelet[1461]: E0510 01:45:11.136428 1461 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" containerName="mount-cgroup" May 10 01:45:11.145125 systemd[1]: Created slice kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice. 
May 10 01:45:11.260648 kubelet[1461]: I0510 01:45:11.260595 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-hostproc\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.260938 kubelet[1461]: I0510 01:45:11.260907 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-lib-modules\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261115 kubelet[1461]: I0510 01:45:11.261089 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b174826a-671d-455c-b1ec-e252ac3882c1-clustermesh-secrets\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261292 kubelet[1461]: I0510 01:45:11.261265 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-bpf-maps\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261453 kubelet[1461]: I0510 01:45:11.261427 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-host-proc-sys-kernel\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261628 kubelet[1461]: I0510 01:45:11.261603 1461 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b174826a-671d-455c-b1ec-e252ac3882c1-cilium-config-path\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261788 kubelet[1461]: I0510 01:45:11.261762 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b174826a-671d-455c-b1ec-e252ac3882c1-cilium-ipsec-secrets\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.261956 kubelet[1461]: I0510 01:45:11.261931 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-cni-path\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262113 kubelet[1461]: I0510 01:45:11.262084 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b174826a-671d-455c-b1ec-e252ac3882c1-hubble-tls\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262275 kubelet[1461]: I0510 01:45:11.262249 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-cilium-cgroup\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262425 kubelet[1461]: I0510 01:45:11.262400 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-xtables-lock\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262573 kubelet[1461]: I0510 01:45:11.262548 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-host-proc-sys-net\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262733 kubelet[1461]: I0510 01:45:11.262708 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-cilium-run\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.262889 kubelet[1461]: I0510 01:45:11.262864 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b174826a-671d-455c-b1ec-e252ac3882c1-etc-cni-netd\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.263088 kubelet[1461]: I0510 01:45:11.263063 1461 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggql2\" (UniqueName: \"kubernetes.io/projected/b174826a-671d-455c-b1ec-e252ac3882c1-kube-api-access-ggql2\") pod \"cilium-srhwc\" (UID: \"b174826a-671d-455c-b1ec-e252ac3882c1\") " pod="kube-system/cilium-srhwc" May 10 01:45:11.454976 env[1193]: time="2025-05-10T01:45:11.454436504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srhwc,Uid:b174826a-671d-455c-b1ec-e252ac3882c1,Namespace:kube-system,Attempt:0,}" May 10 01:45:11.481610 env[1193]: time="2025-05-10T01:45:11.481452062Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 01:45:11.481820 env[1193]: time="2025-05-10T01:45:11.481623367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 01:45:11.481820 env[1193]: time="2025-05-10T01:45:11.481692364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 01:45:11.482178 env[1193]: time="2025-05-10T01:45:11.482121405Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f pid=3221 runtime=io.containerd.runc.v2 May 10 01:45:11.501446 systemd[1]: Started cri-containerd-b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f.scope. May 10 01:45:11.559604 env[1193]: time="2025-05-10T01:45:11.559481694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srhwc,Uid:b174826a-671d-455c-b1ec-e252ac3882c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\"" May 10 01:45:11.563029 env[1193]: time="2025-05-10T01:45:11.562991249Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 01:45:11.594184 env[1193]: time="2025-05-10T01:45:11.594077782Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6\"" May 10 01:45:11.595198 env[1193]: time="2025-05-10T01:45:11.595152636Z" level=info msg="StartContainer for 
\"c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6\"" May 10 01:45:11.627829 systemd[1]: Started cri-containerd-c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6.scope. May 10 01:45:11.692195 kubelet[1461]: E0510 01:45:11.692101 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:11.697780 env[1193]: time="2025-05-10T01:45:11.697716969Z" level=info msg="StartContainer for \"c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6\" returns successfully" May 10 01:45:11.713413 systemd[1]: cri-containerd-c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6.scope: Deactivated successfully. May 10 01:45:11.758304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6-rootfs.mount: Deactivated successfully. May 10 01:45:11.871282 kubelet[1461]: I0510 01:45:11.870858 1461 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b38921-e9c5-4efa-b3ca-50f9e8c186c9" path="/var/lib/kubelet/pods/b1b38921-e9c5-4efa-b3ca-50f9e8c186c9/volumes" May 10 01:45:11.884357 env[1193]: time="2025-05-10T01:45:11.884264603Z" level=info msg="shim disconnected" id=c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6 May 10 01:45:11.884650 env[1193]: time="2025-05-10T01:45:11.884618093Z" level=warning msg="cleaning up after shim disconnected" id=c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6 namespace=k8s.io May 10 01:45:11.884830 env[1193]: time="2025-05-10T01:45:11.884801904Z" level=info msg="cleaning up dead shim" May 10 01:45:11.908450 env[1193]: time="2025-05-10T01:45:11.908390076Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3306 runtime=io.containerd.runc.v2\n" May 10 01:45:11.953651 kubelet[1461]: W0510 01:45:11.953447 1461 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1b38921_e9c5_4efa_b3ca_50f9e8c186c9.slice/cri-containerd-c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791.scope WatchSource:0}: container "c30c0847302f874bfc23b86247c44692d3e1ad2e591ab3b8acf956d075526791" in namespace "k8s.io": not found May 10 01:45:11.962721 env[1193]: time="2025-05-10T01:45:11.962665768Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:45:11.965537 env[1193]: time="2025-05-10T01:45:11.965417775Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:45:11.969200 env[1193]: time="2025-05-10T01:45:11.969148740Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 01:45:11.970218 env[1193]: time="2025-05-10T01:45:11.970094506Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 01:45:11.973884 env[1193]: time="2025-05-10T01:45:11.973836948Z" level=info msg="CreateContainer within sandbox \"ccb84ea1c4aaf04147fbd455e3c63e189a9bdf7e245fe691c4de442a870f3116\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 01:45:11.988649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434263846.mount: Deactivated successfully. 
May 10 01:45:11.997019 env[1193]: time="2025-05-10T01:45:11.996960949Z" level=info msg="CreateContainer within sandbox \"ccb84ea1c4aaf04147fbd455e3c63e189a9bdf7e245fe691c4de442a870f3116\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7649ccbfebf3fdf1707da7d642d88a8bcade6186058ae2e51cbdfa9543b2abff\"" May 10 01:45:11.997833 env[1193]: time="2025-05-10T01:45:11.997797508Z" level=info msg="StartContainer for \"7649ccbfebf3fdf1707da7d642d88a8bcade6186058ae2e51cbdfa9543b2abff\"" May 10 01:45:12.017702 systemd[1]: Started cri-containerd-7649ccbfebf3fdf1707da7d642d88a8bcade6186058ae2e51cbdfa9543b2abff.scope. May 10 01:45:12.059052 env[1193]: time="2025-05-10T01:45:12.058997360Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 01:45:12.078499 env[1193]: time="2025-05-10T01:45:12.078406340Z" level=info msg="StartContainer for \"7649ccbfebf3fdf1707da7d642d88a8bcade6186058ae2e51cbdfa9543b2abff\" returns successfully" May 10 01:45:12.079829 env[1193]: time="2025-05-10T01:45:12.079791169Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc\"" May 10 01:45:12.080668 env[1193]: time="2025-05-10T01:45:12.080620424Z" level=info msg="StartContainer for \"c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc\"" May 10 01:45:12.111828 systemd[1]: Started cri-containerd-c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc.scope. 
May 10 01:45:12.162470 env[1193]: time="2025-05-10T01:45:12.161807633Z" level=info msg="StartContainer for \"c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc\" returns successfully" May 10 01:45:12.186066 systemd[1]: cri-containerd-c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc.scope: Deactivated successfully. May 10 01:45:12.225594 env[1193]: time="2025-05-10T01:45:12.225430611Z" level=info msg="shim disconnected" id=c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc May 10 01:45:12.225594 env[1193]: time="2025-05-10T01:45:12.225495093Z" level=warning msg="cleaning up after shim disconnected" id=c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc namespace=k8s.io May 10 01:45:12.225594 env[1193]: time="2025-05-10T01:45:12.225511203Z" level=info msg="cleaning up dead shim" May 10 01:45:12.246423 env[1193]: time="2025-05-10T01:45:12.246355132Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3406 runtime=io.containerd.runc.v2\n" May 10 01:45:12.655637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount746114680.mount: Deactivated successfully. 
May 10 01:45:12.692741 kubelet[1461]: E0510 01:45:12.692645 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:12.732961 kubelet[1461]: E0510 01:45:12.732904 1461 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 10 01:45:13.072716 env[1193]: time="2025-05-10T01:45:13.072193861Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 01:45:13.088192 kubelet[1461]: I0510 01:45:13.088102 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-wswk2" podStartSLOduration=2.905142386 podStartE2EDuration="6.088065779s" podCreationTimestamp="2025-05-10 01:45:07 +0000 UTC" firstStartedPulling="2025-05-10 01:45:08.788965762 +0000 UTC m=+81.913485090" lastFinishedPulling="2025-05-10 01:45:11.971889147 +0000 UTC m=+85.096408483" observedRunningTime="2025-05-10 01:45:13.087890027 +0000 UTC m=+86.212409377" watchObservedRunningTime="2025-05-10 01:45:13.088065779 +0000 UTC m=+86.212585128" May 10 01:45:13.090475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391184057.mount: Deactivated successfully. May 10 01:45:13.098816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317982832.mount: Deactivated successfully. 
May 10 01:45:13.103823 env[1193]: time="2025-05-10T01:45:13.103775236Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507\"" May 10 01:45:13.104609 env[1193]: time="2025-05-10T01:45:13.104395604Z" level=info msg="StartContainer for \"d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507\"" May 10 01:45:13.128613 systemd[1]: Started cri-containerd-d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507.scope. May 10 01:45:13.190431 env[1193]: time="2025-05-10T01:45:13.190375303Z" level=info msg="StartContainer for \"d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507\" returns successfully" May 10 01:45:13.196403 systemd[1]: cri-containerd-d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507.scope: Deactivated successfully. May 10 01:45:13.224494 env[1193]: time="2025-05-10T01:45:13.224434815Z" level=info msg="shim disconnected" id=d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507 May 10 01:45:13.224494 env[1193]: time="2025-05-10T01:45:13.224493256Z" level=warning msg="cleaning up after shim disconnected" id=d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507 namespace=k8s.io May 10 01:45:13.224841 env[1193]: time="2025-05-10T01:45:13.224508470Z" level=info msg="cleaning up dead shim" May 10 01:45:13.234308 env[1193]: time="2025-05-10T01:45:13.234258460Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3463 runtime=io.containerd.runc.v2\n" May 10 01:45:13.693628 kubelet[1461]: E0510 01:45:13.693529 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:14.072843 env[1193]: time="2025-05-10T01:45:14.072449414Z" level=info 
msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 01:45:14.086705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589776785.mount: Deactivated successfully. May 10 01:45:14.095056 env[1193]: time="2025-05-10T01:45:14.094971251Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36\"" May 10 01:45:14.096011 env[1193]: time="2025-05-10T01:45:14.095976973Z" level=info msg="StartContainer for \"24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36\"" May 10 01:45:14.121935 systemd[1]: Started cri-containerd-24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36.scope. May 10 01:45:14.160532 systemd[1]: cri-containerd-24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36.scope: Deactivated successfully. 
May 10 01:45:14.163385 env[1193]: time="2025-05-10T01:45:14.163166367Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice/cri-containerd-24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36.scope/memory.events\": no such file or directory" May 10 01:45:14.165528 env[1193]: time="2025-05-10T01:45:14.165442750Z" level=info msg="StartContainer for \"24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36\" returns successfully" May 10 01:45:14.197719 env[1193]: time="2025-05-10T01:45:14.197647964Z" level=info msg="shim disconnected" id=24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36 May 10 01:45:14.198081 env[1193]: time="2025-05-10T01:45:14.198050276Z" level=warning msg="cleaning up after shim disconnected" id=24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36 namespace=k8s.io May 10 01:45:14.198226 env[1193]: time="2025-05-10T01:45:14.198199010Z" level=info msg="cleaning up dead shim" May 10 01:45:14.208707 env[1193]: time="2025-05-10T01:45:14.208638917Z" level=warning msg="cleanup warnings time=\"2025-05-10T01:45:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" May 10 01:45:14.654843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36-rootfs.mount: Deactivated successfully. 
May 10 01:45:14.694693 kubelet[1461]: E0510 01:45:14.694634 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:15.066854 kubelet[1461]: W0510 01:45:15.066690 1461 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice/cri-containerd-c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6.scope WatchSource:0}: task c7f43dc1dd7b245a9a772d516453980d86d81a5a59f05730e0fc0d7162d7d0e6 not found: not found May 10 01:45:15.078896 env[1193]: time="2025-05-10T01:45:15.078848119Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 01:45:15.097818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704282008.mount: Deactivated successfully. May 10 01:45:15.107058 env[1193]: time="2025-05-10T01:45:15.106970187Z" level=info msg="CreateContainer within sandbox \"b3faf6cefd2fc5ec1c7b9d7acd8743ebb6600dcb62efd0cce01f6bc87de08a1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d25e2ae2cf3cc038c4cd9c934547151a5754286f796f328ea808da0f5af1ce9b\"" May 10 01:45:15.107811 env[1193]: time="2025-05-10T01:45:15.107769395Z" level=info msg="StartContainer for \"d25e2ae2cf3cc038c4cd9c934547151a5754286f796f328ea808da0f5af1ce9b\"" May 10 01:45:15.131695 systemd[1]: Started cri-containerd-d25e2ae2cf3cc038c4cd9c934547151a5754286f796f328ea808da0f5af1ce9b.scope. 
May 10 01:45:15.181560 env[1193]: time="2025-05-10T01:45:15.181511484Z" level=info msg="StartContainer for \"d25e2ae2cf3cc038c4cd9c934547151a5754286f796f328ea808da0f5af1ce9b\" returns successfully" May 10 01:45:15.696110 kubelet[1461]: E0510 01:45:15.696059 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:15.857651 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) May 10 01:45:16.696659 kubelet[1461]: E0510 01:45:16.696597 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:17.698210 kubelet[1461]: E0510 01:45:17.698119 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:18.179141 kubelet[1461]: W0510 01:45:18.179084 1461 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice/cri-containerd-c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc.scope WatchSource:0}: task c92418d508a0b1cc626b57ab94d658d06310e33aa2a34a45775db2d2ed7cf2cc not found: not found May 10 01:45:18.699308 kubelet[1461]: E0510 01:45:18.699242 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:19.219452 systemd-networkd[1014]: lxc_health: Link UP May 10 01:45:19.228687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 01:45:19.231847 systemd-networkd[1014]: lxc_health: Gained carrier May 10 01:45:19.511040 kubelet[1461]: I0510 01:45:19.510842 1461 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-srhwc" podStartSLOduration=8.51080751 podStartE2EDuration="8.51080751s" podCreationTimestamp="2025-05-10 01:45:11 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 01:45:16.103605824 +0000 UTC m=+89.228125153" watchObservedRunningTime="2025-05-10 01:45:19.51080751 +0000 UTC m=+92.635326841" May 10 01:45:19.585967 systemd[1]: run-containerd-runc-k8s.io-d25e2ae2cf3cc038c4cd9c934547151a5754286f796f328ea808da0f5af1ce9b-runc.j4IGHP.mount: Deactivated successfully. May 10 01:45:19.700249 kubelet[1461]: E0510 01:45:19.700140 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:19.750570 kubelet[1461]: E0510 01:45:19.750331 1461 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44968->127.0.0.1:35445: write tcp 127.0.0.1:44968->127.0.0.1:35445: write: connection reset by peer May 10 01:45:20.701267 kubelet[1461]: E0510 01:45:20.701196 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:21.060898 systemd-networkd[1014]: lxc_health: Gained IPv6LL May 10 01:45:21.292370 kubelet[1461]: W0510 01:45:21.292313 1461 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice/cri-containerd-d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507.scope WatchSource:0}: task d806d4e4b2b96d62b6b4173391367911aa66c9a673b4326db9eb2e1513527507 not found: not found May 10 01:45:21.701428 kubelet[1461]: E0510 01:45:21.701378 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:22.702531 kubelet[1461]: E0510 01:45:22.702460 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:23.703611 kubelet[1461]: E0510 01:45:23.703527 1461 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:24.405990 kubelet[1461]: W0510 01:45:24.405929 1461 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb174826a_671d_455c_b1ec_e252ac3882c1.slice/cri-containerd-24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36.scope WatchSource:0}: task 24c45cc3fb55132f835be3022ac754b7220e1fd3734c418a50165565cd385d36 not found: not found May 10 01:45:24.704174 kubelet[1461]: E0510 01:45:24.704018 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:25.705067 kubelet[1461]: E0510 01:45:25.704954 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:26.705933 kubelet[1461]: E0510 01:45:26.705869 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:27.611145 kubelet[1461]: E0510 01:45:27.611069 1461 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:27.707360 kubelet[1461]: E0510 01:45:27.707288 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:28.708365 kubelet[1461]: E0510 01:45:28.708271 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:29.708893 kubelet[1461]: E0510 01:45:29.708834 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 10 01:45:30.710704 kubelet[1461]: E0510 01:45:30.710620 1461 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"