May 15 08:52:55.852089 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025 May 15 08:52:55.852141 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 08:52:55.852165 kernel: BIOS-provided physical RAM map: May 15 08:52:55.852187 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 15 08:52:55.852203 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 15 08:52:55.852219 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 15 08:52:55.852238 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable May 15 08:52:55.852255 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved May 15 08:52:55.852271 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 15 08:52:55.852286 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 15 08:52:55.852302 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable May 15 08:52:55.852318 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 15 08:52:55.852337 kernel: NX (Execute Disable) protection: active May 15 08:52:55.852353 kernel: SMBIOS 3.0.0 present. 
May 15 08:52:55.852373 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 May 15 08:52:55.852390 kernel: Hypervisor detected: KVM May 15 08:52:55.852407 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 08:52:55.856261 kernel: kvm-clock: cpu 0, msr a7196001, primary cpu clock May 15 08:52:55.856311 kernel: kvm-clock: using sched offset of 4039930436 cycles May 15 08:52:55.856331 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 08:52:55.856350 kernel: tsc: Detected 1996.249 MHz processor May 15 08:52:55.856368 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 08:52:55.856387 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 08:52:55.856405 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 May 15 08:52:55.856468 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 08:52:55.856489 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 May 15 08:52:55.856507 kernel: ACPI: Early table checksum verification disabled May 15 08:52:55.856530 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) May 15 08:52:55.856547 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 08:52:55.856565 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 08:52:55.856583 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 08:52:55.856600 kernel: ACPI: FACS 0x00000000BFFE0000 000040 May 15 08:52:55.856618 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 08:52:55.856636 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 08:52:55.856655 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] May 15 08:52:55.856676 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] May 15 
08:52:55.856693 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] May 15 08:52:55.856711 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] May 15 08:52:55.856729 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] May 15 08:52:55.856746 kernel: No NUMA configuration found May 15 08:52:55.856770 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] May 15 08:52:55.856788 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff] May 15 08:52:55.856809 kernel: Zone ranges: May 15 08:52:55.856828 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 08:52:55.856846 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 15 08:52:55.856865 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] May 15 08:52:55.856883 kernel: Movable zone start for each node May 15 08:52:55.856901 kernel: Early memory node ranges May 15 08:52:55.856919 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 15 08:52:55.856938 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] May 15 08:52:55.856959 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] May 15 08:52:55.856977 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] May 15 08:52:55.856995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 08:52:55.857013 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 15 08:52:55.857032 kernel: On node 0, zone Normal: 35 pages in unavailable ranges May 15 08:52:55.857050 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 08:52:55.857068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 08:52:55.857087 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 08:52:55.857105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 08:52:55.857126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 08:52:55.857145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 
global_irq 9 high level) May 15 08:52:55.857163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 08:52:55.857181 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 08:52:55.857200 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 08:52:55.857218 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 15 08:52:55.857236 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices May 15 08:52:55.857254 kernel: Booting paravirtualized kernel on KVM May 15 08:52:55.857273 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 08:52:55.857296 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 May 15 08:52:55.857314 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 May 15 08:52:55.857333 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 May 15 08:52:55.857351 kernel: pcpu-alloc: [0] 0 1 May 15 08:52:55.857368 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0 May 15 08:52:55.857387 kernel: kvm-guest: PV spinlocks disabled, no host support May 15 08:52:55.857405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 May 15 08:52:55.857446 kernel: Policy zone: Normal May 15 08:52:55.857470 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 08:52:55.857494 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
May 15 08:52:55.857512 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 08:52:55.857531 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 08:52:55.857549 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 08:52:55.857568 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 225236K reserved, 0K cma-reserved) May 15 08:52:55.857587 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 15 08:52:55.857605 kernel: ftrace: allocating 34584 entries in 136 pages May 15 08:52:55.857624 kernel: ftrace: allocated 136 pages with 2 groups May 15 08:52:55.857645 kernel: rcu: Hierarchical RCU implementation. May 15 08:52:55.857665 kernel: rcu: RCU event tracing is enabled. May 15 08:52:55.857684 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 15 08:52:55.857703 kernel: Rude variant of Tasks RCU enabled. May 15 08:52:55.857721 kernel: Tracing variant of Tasks RCU enabled. May 15 08:52:55.857739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 15 08:52:55.857759 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 15 08:52:55.857777 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 May 15 08:52:55.857795 kernel: Console: colour VGA+ 80x25 May 15 08:52:55.857817 kernel: printk: console [tty0] enabled May 15 08:52:55.857835 kernel: printk: console [ttyS0] enabled May 15 08:52:55.857853 kernel: ACPI: Core revision 20210730 May 15 08:52:55.857871 kernel: APIC: Switch to symmetric I/O mode setup May 15 08:52:55.857889 kernel: x2apic enabled May 15 08:52:55.857907 kernel: Switched APIC routing to physical x2apic. 
May 15 08:52:55.857926 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 08:52:55.857944 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 08:52:55.857963 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) May 15 08:52:55.857985 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 15 08:52:55.858003 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 15 08:52:55.858022 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 08:52:55.858040 kernel: Spectre V2 : Mitigation: Retpolines May 15 08:52:55.858059 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 08:52:55.858077 kernel: Speculative Store Bypass: Vulnerable May 15 08:52:55.858095 kernel: x86/fpu: x87 FPU will use FXSAVE May 15 08:52:55.858113 kernel: Freeing SMP alternatives memory: 32K May 15 08:52:55.858131 kernel: pid_max: default: 32768 minimum: 301 May 15 08:52:55.858152 kernel: LSM: Security Framework initializing May 15 08:52:55.858189 kernel: SELinux: Initializing. May 15 08:52:55.858208 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 08:52:55.858227 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 08:52:55.858246 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) May 15 08:52:55.858265 kernel: Performance Events: AMD PMU driver. May 15 08:52:55.858295 kernel: ... version: 0 May 15 08:52:55.858317 kernel: ... bit width: 48 May 15 08:52:55.858335 kernel: ... generic registers: 4 May 15 08:52:55.858354 kernel: ... value mask: 0000ffffffffffff May 15 08:52:55.858373 kernel: ... max period: 00007fffffffffff May 15 08:52:55.858392 kernel: ... fixed-purpose events: 0 May 15 08:52:55.858414 kernel: ... 
event mask: 000000000000000f May 15 08:52:55.858478 kernel: signal: max sigframe size: 1440 May 15 08:52:55.858499 kernel: rcu: Hierarchical SRCU implementation. May 15 08:52:55.858518 kernel: smp: Bringing up secondary CPUs ... May 15 08:52:55.858537 kernel: x86: Booting SMP configuration: May 15 08:52:55.858560 kernel: .... node #0, CPUs: #1 May 15 08:52:55.858579 kernel: kvm-clock: cpu 1, msr a7196041, secondary cpu clock May 15 08:52:55.858598 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0 May 15 08:52:55.858617 kernel: smp: Brought up 1 node, 2 CPUs May 15 08:52:55.858636 kernel: smpboot: Max logical packages: 2 May 15 08:52:55.858656 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) May 15 08:52:55.858674 kernel: devtmpfs: initialized May 15 08:52:55.858693 kernel: x86/mm: Memory block size: 128MB May 15 08:52:55.858713 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 08:52:55.858735 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 15 08:52:55.858755 kernel: pinctrl core: initialized pinctrl subsystem May 15 08:52:55.858774 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 08:52:55.858793 kernel: audit: initializing netlink subsys (disabled) May 15 08:52:55.858813 kernel: audit: type=2000 audit(1747299174.811:1): state=initialized audit_enabled=0 res=1 May 15 08:52:55.858832 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 08:52:55.858851 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 08:52:55.858870 kernel: cpuidle: using governor menu May 15 08:52:55.858889 kernel: ACPI: bus type PCI registered May 15 08:52:55.858911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 08:52:55.858931 kernel: dca service started, version 1.12.1 May 15 08:52:55.858950 kernel: PCI: Using configuration type 1 for base access May 15 08:52:55.858969 kernel: kprobes: kprobe jump-optimization is 
enabled. All kprobes are optimized if possible. May 15 08:52:55.858989 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 08:52:55.859008 kernel: ACPI: Added _OSI(Module Device) May 15 08:52:55.859028 kernel: ACPI: Added _OSI(Processor Device) May 15 08:52:55.859047 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 08:52:55.859066 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 08:52:55.859088 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 08:52:55.859107 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 08:52:55.859126 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 08:52:55.859145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 08:52:55.859164 kernel: ACPI: Interpreter enabled May 15 08:52:55.859183 kernel: ACPI: PM: (supports S0 S3 S5) May 15 08:52:55.859202 kernel: ACPI: Using IOAPIC for interrupt routing May 15 08:52:55.859222 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 08:52:55.859241 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F May 15 08:52:55.859263 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 08:52:55.860350 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] May 15 08:52:55.860593 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
May 15 08:52:55.860624 kernel: acpiphp: Slot [3] registered May 15 08:52:55.860643 kernel: acpiphp: Slot [4] registered May 15 08:52:55.860661 kernel: acpiphp: Slot [5] registered May 15 08:52:55.860679 kernel: acpiphp: Slot [6] registered May 15 08:52:55.860697 kernel: acpiphp: Slot [7] registered May 15 08:52:55.860723 kernel: acpiphp: Slot [8] registered May 15 08:52:55.860741 kernel: acpiphp: Slot [9] registered May 15 08:52:55.860758 kernel: acpiphp: Slot [10] registered May 15 08:52:55.860776 kernel: acpiphp: Slot [11] registered May 15 08:52:55.860794 kernel: acpiphp: Slot [12] registered May 15 08:52:55.860811 kernel: acpiphp: Slot [13] registered May 15 08:52:55.860829 kernel: acpiphp: Slot [14] registered May 15 08:52:55.860847 kernel: acpiphp: Slot [15] registered May 15 08:52:55.860865 kernel: acpiphp: Slot [16] registered May 15 08:52:55.860885 kernel: acpiphp: Slot [17] registered May 15 08:52:55.860903 kernel: acpiphp: Slot [18] registered May 15 08:52:55.860921 kernel: acpiphp: Slot [19] registered May 15 08:52:55.860939 kernel: acpiphp: Slot [20] registered May 15 08:52:55.860956 kernel: acpiphp: Slot [21] registered May 15 08:52:55.860974 kernel: acpiphp: Slot [22] registered May 15 08:52:55.860992 kernel: acpiphp: Slot [23] registered May 15 08:52:55.861010 kernel: acpiphp: Slot [24] registered May 15 08:52:55.861028 kernel: acpiphp: Slot [25] registered May 15 08:52:55.861045 kernel: acpiphp: Slot [26] registered May 15 08:52:55.861066 kernel: acpiphp: Slot [27] registered May 15 08:52:55.861084 kernel: acpiphp: Slot [28] registered May 15 08:52:55.861101 kernel: acpiphp: Slot [29] registered May 15 08:52:55.861119 kernel: acpiphp: Slot [30] registered May 15 08:52:55.861137 kernel: acpiphp: Slot [31] registered May 15 08:52:55.861154 kernel: PCI host bridge to bus 0000:00 May 15 08:52:55.861339 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 08:52:55.861544 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] May 15 08:52:55.861723 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 08:52:55.861888 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 08:52:55.862050 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] May 15 08:52:55.862241 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 08:52:55.862500 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 May 15 08:52:55.862720 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 May 15 08:52:55.862942 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 May 15 08:52:55.863141 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] May 15 08:52:55.863304 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] May 15 08:52:55.867560 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] May 15 08:52:55.867691 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] May 15 08:52:55.867779 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] May 15 08:52:55.867876 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 May 15 08:52:55.867971 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI May 15 08:52:55.868054 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB May 15 08:52:55.868146 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 May 15 08:52:55.868234 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] May 15 08:52:55.868323 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] May 15 08:52:55.868409 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] May 15 08:52:55.868520 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] May 15 08:52:55.868609 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 08:52:55.868705 kernel: pci 0000:00:03.0: [1af4:1000] 
type 00 class 0x020000 May 15 08:52:55.868790 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] May 15 08:52:55.868873 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] May 15 08:52:55.868956 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] May 15 08:52:55.869039 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] May 15 08:52:55.869130 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 May 15 08:52:55.869219 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] May 15 08:52:55.869304 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] May 15 08:52:55.869389 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] May 15 08:52:55.869500 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 May 15 08:52:55.869587 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] May 15 08:52:55.869672 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] May 15 08:52:55.869762 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 May 15 08:52:55.869852 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] May 15 08:52:55.869937 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] May 15 08:52:55.870021 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] May 15 08:52:55.870033 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 08:52:55.870042 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 08:52:55.870051 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 08:52:55.870059 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 08:52:55.870070 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 May 15 08:52:55.870078 kernel: iommu: Default domain type: Translated May 15 08:52:55.870087 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 08:52:55.870184 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA 
device May 15 08:52:55.870274 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 08:52:55.870365 kernel: pci 0000:00:02.0: vgaarb: bridge control possible May 15 08:52:55.870378 kernel: vgaarb: loaded May 15 08:52:55.870387 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 08:52:55.870396 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 08:52:55.870408 kernel: PTP clock support registered May 15 08:52:55.870417 kernel: PCI: Using ACPI for IRQ routing May 15 08:52:55.876491 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 08:52:55.876516 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 15 08:52:55.876526 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] May 15 08:52:55.876536 kernel: clocksource: Switched to clocksource kvm-clock May 15 08:52:55.876546 kernel: VFS: Disk quotas dquot_6.6.0 May 15 08:52:55.876555 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 08:52:55.876564 kernel: pnp: PnP ACPI init May 15 08:52:55.876719 kernel: pnp 00:03: [dma 2] May 15 08:52:55.876735 kernel: pnp: PnP ACPI: found 5 devices May 15 08:52:55.876745 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 08:52:55.876754 kernel: NET: Registered PF_INET protocol family May 15 08:52:55.876763 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 08:52:55.876772 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 08:52:55.876780 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 08:52:55.876789 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 08:52:55.876801 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 08:52:55.876810 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 08:52:55.876819 
kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 08:52:55.876828 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 08:52:55.876836 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 08:52:55.876845 kernel: NET: Registered PF_XDP protocol family May 15 08:52:55.876931 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 08:52:55.877012 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 08:52:55.877090 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 08:52:55.877173 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] May 15 08:52:55.877251 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] May 15 08:52:55.877349 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release May 15 08:52:55.877466 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers May 15 08:52:55.877560 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds May 15 08:52:55.877573 kernel: PCI: CLS 0 bytes, default 64 May 15 08:52:55.877582 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 15 08:52:55.877591 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) May 15 08:52:55.877603 kernel: Initialise system trusted keyrings May 15 08:52:55.877612 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 08:52:55.877621 kernel: Key type asymmetric registered May 15 08:52:55.877630 kernel: Asymmetric key parser 'x509' registered May 15 08:52:55.877638 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 08:52:55.877647 kernel: io scheduler mq-deadline registered May 15 08:52:55.877656 kernel: io scheduler kyber registered May 15 08:52:55.877665 kernel: io scheduler bfq registered May 15 08:52:55.877673 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 08:52:55.877686 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 
May 15 08:52:55.877694 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 May 15 08:52:55.877703 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 May 15 08:52:55.877712 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 May 15 08:52:55.877721 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 08:52:55.877730 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 08:52:55.877739 kernel: random: crng init done May 15 08:52:55.877747 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 08:52:55.877756 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 08:52:55.877766 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 08:52:55.877775 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 08:52:55.877864 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 08:52:55.877947 kernel: rtc_cmos 00:04: registered as rtc0 May 15 08:52:55.878027 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T08:52:55 UTC (1747299175) May 15 08:52:55.878107 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram May 15 08:52:55.878120 kernel: NET: Registered PF_INET6 protocol family May 15 08:52:55.878128 kernel: Segment Routing with IPv6 May 15 08:52:55.878140 kernel: In-situ OAM (IOAM) with IPv6 May 15 08:52:55.878149 kernel: NET: Registered PF_PACKET protocol family May 15 08:52:55.878157 kernel: Key type dns_resolver registered May 15 08:52:55.878181 kernel: IPI shorthand broadcast: enabled May 15 08:52:55.878190 kernel: sched_clock: Marking stable (855264322, 165733601)->(1090796017, -69798094) May 15 08:52:55.878199 kernel: registered taskstats version 1 May 15 08:52:55.878207 kernel: Loading compiled-in X.509 certificates May 15 08:52:55.878216 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd' May 15 08:52:55.878225 kernel: Key type .fscrypt registered May 15 08:52:55.878235 
kernel: Key type fscrypt-provisioning registered May 15 08:52:55.878244 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 08:52:55.878253 kernel: ima: Allocated hash algorithm: sha1 May 15 08:52:55.878261 kernel: ima: No architecture policies found May 15 08:52:55.878270 kernel: clk: Disabling unused clocks May 15 08:52:55.878278 kernel: Freeing unused kernel image (initmem) memory: 47456K May 15 08:52:55.878287 kernel: Write protecting the kernel read-only data: 28672k May 15 08:52:55.878295 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 08:52:55.878305 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 08:52:55.878315 kernel: Run /init as init process May 15 08:52:55.878323 kernel: with arguments: May 15 08:52:55.878332 kernel: /init May 15 08:52:55.878340 kernel: with environment: May 15 08:52:55.878349 kernel: HOME=/ May 15 08:52:55.878357 kernel: TERM=linux May 15 08:52:55.878366 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 08:52:55.878377 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 08:52:55.878392 systemd[1]: Detected virtualization kvm. May 15 08:52:55.878402 systemd[1]: Detected architecture x86-64. May 15 08:52:55.878412 systemd[1]: Running in initrd. May 15 08:52:55.878436 systemd[1]: No hostname configured, using default hostname. May 15 08:52:55.878446 systemd[1]: Hostname set to . May 15 08:52:55.878456 systemd[1]: Initializing machine ID from VM UUID. May 15 08:52:55.878465 systemd[1]: Queued start job for default target initrd.target. May 15 08:52:55.878478 systemd[1]: Started systemd-ask-password-console.path. May 15 08:52:55.878487 systemd[1]: Reached target cryptsetup.target. 
May 15 08:52:55.878497 systemd[1]: Reached target paths.target. May 15 08:52:55.878505 systemd[1]: Reached target slices.target. May 15 08:52:55.878515 systemd[1]: Reached target swap.target. May 15 08:52:55.878523 systemd[1]: Reached target timers.target. May 15 08:52:55.878533 systemd[1]: Listening on iscsid.socket. May 15 08:52:55.878542 systemd[1]: Listening on iscsiuio.socket. May 15 08:52:55.878553 systemd[1]: Listening on systemd-journald-audit.socket. May 15 08:52:55.878573 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 08:52:55.878584 systemd[1]: Listening on systemd-journald.socket. May 15 08:52:55.878594 systemd[1]: Listening on systemd-networkd.socket. May 15 08:52:55.878604 systemd[1]: Listening on systemd-udevd-control.socket. May 15 08:52:55.878613 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 08:52:55.878625 systemd[1]: Reached target sockets.target. May 15 08:52:55.878635 systemd[1]: Starting kmod-static-nodes.service... May 15 08:52:55.878645 systemd[1]: Finished network-cleanup.service. May 15 08:52:55.878654 systemd[1]: Starting systemd-fsck-usr.service... May 15 08:52:55.878664 systemd[1]: Starting systemd-journald.service... May 15 08:52:55.878674 systemd[1]: Starting systemd-modules-load.service... May 15 08:52:55.878683 systemd[1]: Starting systemd-resolved.service... May 15 08:52:55.878693 systemd[1]: Starting systemd-vconsole-setup.service... May 15 08:52:55.878702 systemd[1]: Finished kmod-static-nodes.service. May 15 08:52:55.878714 systemd[1]: Finished systemd-fsck-usr.service. May 15 08:52:55.878728 systemd-journald[186]: Journal started May 15 08:52:55.878784 systemd-journald[186]: Runtime Journal (/run/log/journal/399bd93f0b7547a6becc633f22d1b20f) is 8.0M, max 78.4M, 70.4M free. May 15 08:52:55.863500 systemd-modules-load[187]: Inserted module 'overlay' May 15 08:52:55.895004 systemd[1]: Started systemd-resolved.service. 
May 15 08:52:55.895032 kernel: audit: type=1130 audit(1747299175.887:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.866950 systemd-resolved[188]: Positive Trust Anchors: May 15 08:52:55.913508 systemd[1]: Started systemd-journald.service. May 15 08:52:55.913535 kernel: audit: type=1130 audit(1747299175.894:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.913563 kernel: audit: type=1130 audit(1747299175.902:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:52:55.866962 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 08:52:55.917822 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 15 08:52:55.917843 kernel: audit: type=1130 audit(1747299175.917:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.867002 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 08:52:55.932815 kernel: Bridge firewalling registered
May 15 08:52:55.870778 systemd-resolved[188]: Defaulting to hostname 'linux'.
May 15 08:52:55.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.903636 systemd[1]: Finished systemd-vconsole-setup.service.
May 15 08:52:55.945040 kernel: audit: type=1130 audit(1747299175.935:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.917758 systemd[1]: Reached target nss-lookup.target.
May 15 08:52:55.923498 systemd[1]: Starting dracut-cmdline-ask.service...
May 15 08:52:55.925484 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 15 08:52:55.926125 systemd-modules-load[187]: Inserted module 'br_netfilter'
May 15 08:52:55.935527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 15 08:52:55.954455 kernel: SCSI subsystem initialized
May 15 08:52:55.954967 systemd[1]: Finished dracut-cmdline-ask.service.
May 15 08:52:55.960649 kernel: audit: type=1130 audit(1747299175.955:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.961604 systemd[1]: Starting dracut-cmdline.service...
May 15 08:52:55.972959 dracut-cmdline[203]: dracut-dracut-053
May 15 08:52:55.975334 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54
May 15 08:52:55.982382 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 08:52:55.982455 kernel: device-mapper: uevent: version 1.0.3
May 15 08:52:55.985463 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 15 08:52:55.988571 systemd-modules-load[187]: Inserted module 'dm_multipath'
May 15 08:52:55.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.989998 systemd[1]: Finished systemd-modules-load.service.
May 15 08:52:55.996791 kernel: audit: type=1130 audit(1747299175.990:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:55.991245 systemd[1]: Starting systemd-sysctl.service...
May 15 08:52:56.003066 systemd[1]: Finished systemd-sysctl.service.
May 15 08:52:56.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.009458 kernel: audit: type=1130 audit(1747299176.003:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.048462 kernel: Loading iSCSI transport class v2.0-870.
May 15 08:52:56.069447 kernel: iscsi: registered transport (tcp)
May 15 08:52:56.096457 kernel: iscsi: registered transport (qla4xxx)
May 15 08:52:56.096525 kernel: QLogic iSCSI HBA Driver
May 15 08:52:56.150869 systemd[1]: Finished dracut-cmdline.service.
May 15 08:52:56.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.154029 systemd[1]: Starting dracut-pre-udev.service...
May 15 08:52:56.159525 kernel: audit: type=1130 audit(1747299176.151:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.237519 kernel: raid6: sse2x4 gen() 9341 MB/s
May 15 08:52:56.255522 kernel: raid6: sse2x4 xor() 5063 MB/s
May 15 08:52:56.273520 kernel: raid6: sse2x2 gen() 13981 MB/s
May 15 08:52:56.291530 kernel: raid6: sse2x2 xor() 8789 MB/s
May 15 08:52:56.309527 kernel: raid6: sse2x1 gen() 9679 MB/s
May 15 08:52:56.327880 kernel: raid6: sse2x1 xor() 7008 MB/s
May 15 08:52:56.327943 kernel: raid6: using algorithm sse2x2 gen() 13981 MB/s
May 15 08:52:56.327970 kernel: raid6: .... xor() 8789 MB/s, rmw enabled
May 15 08:52:56.329093 kernel: raid6: using ssse3x2 recovery algorithm
May 15 08:52:56.348852 kernel: xor: measuring software checksum speed
May 15 08:52:56.348920 kernel: prefetch64-sse : 18354 MB/sec
May 15 08:52:56.350150 kernel: generic_sse : 16495 MB/sec
May 15 08:52:56.350237 kernel: xor: using function: prefetch64-sse (18354 MB/sec)
May 15 08:52:56.468495 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 15 08:52:56.484927 systemd[1]: Finished dracut-pre-udev.service.
May 15 08:52:56.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.486000 audit: BPF prog-id=7 op=LOAD
May 15 08:52:56.486000 audit: BPF prog-id=8 op=LOAD
May 15 08:52:56.488224 systemd[1]: Starting systemd-udevd.service...
May 15 08:52:56.501564 systemd-udevd[385]: Using default interface naming scheme 'v252'.
May 15 08:52:56.506209 systemd[1]: Started systemd-udevd.service.
May 15 08:52:56.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.516882 systemd[1]: Starting dracut-pre-trigger.service...
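The raid6 lines above show the kernel benchmarking each SSE2 gen()/xor() variant at boot and keeping the fastest one ("using algorithm sse2x2"). A minimal sketch of that selection logic, using the throughput figures from this log (an illustration in Python, not the kernel's actual C code):

```python
# Sketch of the raid6 algorithm pick: benchmark each candidate's gen()
# throughput, then select the maximum. Figures (MB/s) are from the log above.
gen_results = {
    "sse2x4": 9341,
    "sse2x2": 13981,
    "sse2x1": 9679,
}

def pick_fastest(results):
    # max() over measured throughput mirrors "raid6: using algorithm sse2x2"
    return max(results, key=results.get)

best = pick_fastest(gen_results)
print(f"raid6: using algorithm {best} gen() {gen_results[best]} MB/s")
# → raid6: using algorithm sse2x2 gen() 13981 MB/s
```

The same winner-takes-all pattern explains the later xor lines, where prefetch64-sse (18354 MB/sec) beats generic_sse.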
May 15 08:52:56.537046 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
May 15 08:52:56.585722 systemd[1]: Finished dracut-pre-trigger.service.
May 15 08:52:56.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.588754 systemd[1]: Starting systemd-udev-trigger.service...
May 15 08:52:56.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:56.628964 systemd[1]: Finished systemd-udev-trigger.service.
May 15 08:52:56.707460 kernel: libata version 3.00 loaded.
May 15 08:52:56.710570 kernel: ata_piix 0000:00:01.1: version 2.13
May 15 08:52:56.745272 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 15 08:52:56.749821 kernel: scsi host0: ata_piix
May 15 08:52:56.749977 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 08:52:56.749995 kernel: GPT:17805311 != 20971519
May 15 08:52:56.750008 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 08:52:56.750021 kernel: GPT:17805311 != 20971519
May 15 08:52:56.750033 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 08:52:56.750045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 08:52:56.750057 kernel: scsi host1: ata_piix
May 15 08:52:56.750204 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 15 08:52:56.750218 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 15 08:52:56.932452 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454)
May 15 08:52:56.934077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
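The GPT warnings above are the usual sign of a disk image grown after it was built: GPT keeps a backup header at the last LBA, and virtio_blk reports 20971520 sectors, so the backup should sit at LBA 20971519, not at the recorded 17805311. The arithmetic behind the kernel message can be checked directly (values taken from the log lines above):

```python
# The kernel prints "GPT:17805311 != 20971519" because the alternate-header
# LBA recorded in the primary GPT header no longer matches the last LBA of
# the (resized) disk. Both numbers come from the log above.
total_sectors = 20971520    # from the virtio_blk line: 512-byte logical blocks
alt_header_lba = 17805311   # alternate-header LBA stored in the primary header

expected_alt_lba = total_sectors - 1  # GPT backup header belongs at the last LBA
print(f"GPT:{alt_header_lba} != {expected_alt_lba}")
# → GPT:17805311 != 20971519
```

GNU Parted (which the kernel message itself recommends) or `sgdisk -e` can relocate the backup header to the end of the disk; on Flatcar this is normally handled automatically when the root partition is grown on first boot.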
May 15 08:52:56.941913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 15 08:52:56.946120 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 15 08:52:56.947536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 15 08:52:56.949730 systemd[1]: Starting disk-uuid.service...
May 15 08:52:56.954995 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 15 08:52:56.970116 disk-uuid[471]: Primary Header is updated.
May 15 08:52:56.970116 disk-uuid[471]: Secondary Entries is updated.
May 15 08:52:56.970116 disk-uuid[471]: Secondary Header is updated.
May 15 08:52:56.979445 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 08:52:56.984440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 08:52:58.007479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 08:52:58.008238 disk-uuid[472]: The operation has completed successfully.
May 15 08:52:58.058036 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 08:52:58.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.058278 systemd[1]: Finished disk-uuid.service.
May 15 08:52:58.078148 systemd[1]: Starting verity-setup.service...
May 15 08:52:58.116808 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 15 08:52:58.219965 systemd[1]: Found device dev-mapper-usr.device.
May 15 08:52:58.222330 systemd[1]: Finished verity-setup.service.
May 15 08:52:58.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.225499 systemd[1]: Mounting sysusr-usr.mount...
May 15 08:52:58.362484 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 15 08:52:58.364104 systemd[1]: Mounted sysusr-usr.mount.
May 15 08:52:58.365466 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 15 08:52:58.367112 systemd[1]: Starting ignition-setup.service...
May 15 08:52:58.371069 systemd[1]: Starting parse-ip-for-networkd.service...
May 15 08:52:58.387637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 08:52:58.387696 kernel: BTRFS info (device vda6): using free space tree
May 15 08:52:58.387715 kernel: BTRFS info (device vda6): has skinny extents
May 15 08:52:58.405866 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 08:52:58.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.421323 systemd[1]: Finished ignition-setup.service.
May 15 08:52:58.422843 systemd[1]: Starting ignition-fetch-offline.service...
May 15 08:52:58.500949 systemd[1]: Finished parse-ip-for-networkd.service.
May 15 08:52:58.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.501000 audit: BPF prog-id=9 op=LOAD
May 15 08:52:58.502982 systemd[1]: Starting systemd-networkd.service...
May 15 08:52:58.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.527175 systemd-networkd[642]: lo: Link UP
May 15 08:52:58.527186 systemd-networkd[642]: lo: Gained carrier
May 15 08:52:58.527639 systemd-networkd[642]: Enumeration completed
May 15 08:52:58.527845 systemd[1]: Started systemd-networkd.service.
May 15 08:52:58.527847 systemd-networkd[642]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 08:52:58.529035 systemd[1]: Reached target network.target.
May 15 08:52:58.529192 systemd-networkd[642]: eth0: Link UP
May 15 08:52:58.529196 systemd-networkd[642]: eth0: Gained carrier
May 15 08:52:58.535324 systemd[1]: Starting iscsiuio.service...
May 15 08:52:58.542076 systemd[1]: Started iscsiuio.service.
May 15 08:52:58.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.544568 systemd[1]: Starting iscsid.service...
May 15 08:52:58.548357 iscsid[652]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 15 08:52:58.548357 iscsid[652]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 15 08:52:58.548357 iscsid[652]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 15 08:52:58.548357 iscsid[652]: If using hardware iscsi like qla4xxx this message can be ignored.
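The iscsid warning above asks for an `/etc/iscsi/initiatorname.iscsi` containing one `InitiatorName=` line in IQN format (`iqn.yyyy-mm.<reversed-domain>[:identifier]`). A minimal sketch of building and checking such a line, reproducing the example from the log itself (the helper name is illustrative, not an open-iscsi API):

```python
# Sketch: construct an InitiatorName line in the IQN format iscsid expects.
# The date/domain/identifier values mirror the example printed in the log:
#   InitiatorName=iqn.2001-04.com.redhat:fc6
import re

def make_initiatorname(year_month: str, reversed_domain: str, identifier: str = "") -> str:
    iqn = f"iqn.{year_month}.{reversed_domain}"
    if identifier:
        iqn += f":{identifier}"
    return f"InitiatorName={iqn}"

line = make_initiatorname("2001-04", "com.redhat", "fc6")
# Loose shape check: iqn.YYYY-MM.<domain>[:identifier]
assert re.fullmatch(r"InitiatorName=iqn\.\d{4}-\d{2}\.[\w.-]+(:.+)?", line)
print(line)
# → InitiatorName=iqn.2001-04.com.redhat:fc6
```

Writing that single line to `/etc/iscsi/initiatorname.iscsi` silences the warning; as the log notes, it matters only for software iSCSI (iscsi_tcp, ib_iser) or partial offload, not for hardware HBAs like qla4xxx.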
May 15 08:52:58.548357 iscsid[652]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 15 08:52:58.548357 iscsid[652]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 15 08:52:58.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.550620 systemd[1]: Started iscsid.service.
May 15 08:52:58.551854 systemd-networkd[642]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 15 08:52:58.555079 systemd[1]: Starting dracut-initqueue.service...
May 15 08:52:58.570523 systemd[1]: Finished dracut-initqueue.service.
May 15 08:52:58.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.571670 systemd[1]: Reached target remote-fs-pre.target.
May 15 08:52:58.572123 systemd[1]: Reached target remote-cryptsetup.target.
May 15 08:52:58.572628 systemd[1]: Reached target remote-fs.target.
May 15 08:52:58.573971 systemd[1]: Starting dracut-pre-mount.service...
May 15 08:52:58.584084 systemd[1]: Finished dracut-pre-mount.service.
May 15 08:52:58.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.716113 ignition[565]: Ignition 2.14.0
May 15 08:52:58.718020 ignition[565]: Stage: fetch-offline
May 15 08:52:58.719392 ignition[565]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:58.720661 ignition[565]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:58.723722 ignition[565]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:58.724089 ignition[565]: parsed url from cmdline: ""
May 15 08:52:58.724102 ignition[565]: no config URL provided
May 15 08:52:58.724119 ignition[565]: reading system config file "/usr/lib/ignition/user.ign"
May 15 08:52:58.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:58.727131 systemd[1]: Finished ignition-fetch-offline.service.
May 15 08:52:58.724144 ignition[565]: no config at "/usr/lib/ignition/user.ign"
May 15 08:52:58.731402 systemd[1]: Starting ignition-fetch.service...
May 15 08:52:58.724162 ignition[565]: failed to fetch config: resource requires networking
May 15 08:52:58.724931 ignition[565]: Ignition finished successfully
May 15 08:52:58.749190 ignition[666]: Ignition 2.14.0
May 15 08:52:58.749206 ignition[666]: Stage: fetch
May 15 08:52:58.749382 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:58.749413 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:58.750982 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:58.751135 ignition[666]: parsed url from cmdline: ""
May 15 08:52:58.751141 ignition[666]: no config URL provided
May 15 08:52:58.751151 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
May 15 08:52:58.751165 ignition[666]: no config at "/usr/lib/ignition/user.ign"
May 15 08:52:58.756999 ignition[666]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 15 08:52:58.757057 ignition[666]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 15 08:52:58.757184 ignition[666]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 15 08:52:59.047697 ignition[666]: GET result: OK
May 15 08:52:59.047882 ignition[666]: parsing config with SHA512: 15a96f40f2cb7999cf22f00b4faaa5ab35d5bab5a16c048a5f973d9923c473de93fe8ccc91919df64c0ebd4fe0b39067121bd0cfa551c64250d2e2adf4e411a1
May 15 08:52:59.066931 unknown[666]: fetched base config from "system"
May 15 08:52:59.066963 unknown[666]: fetched base config from "system"
May 15 08:52:59.068159 ignition[666]: fetch: fetch complete
May 15 08:52:59.066978 unknown[666]: fetched user config from "openstack"
May 15 08:52:59.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.068172 ignition[666]: fetch: fetch passed
May 15 08:52:59.071322 systemd[1]: Finished ignition-fetch.service.
May 15 08:52:59.068257 ignition[666]: Ignition finished successfully
May 15 08:52:59.074631 systemd[1]: Starting ignition-kargs.service...
May 15 08:52:59.084621 ignition[672]: Ignition 2.14.0
May 15 08:52:59.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.088085 systemd[1]: Finished ignition-kargs.service.
May 15 08:52:59.084628 ignition[672]: Stage: kargs
May 15 08:52:59.090638 systemd[1]: Starting ignition-disks.service...
May 15 08:52:59.084739 ignition[672]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:59.084760 ignition[672]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:59.085667 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:59.087028 ignition[672]: kargs: kargs passed
May 15 08:52:59.087096 ignition[672]: Ignition finished successfully
May 15 08:52:59.108344 ignition[677]: Ignition 2.14.0
May 15 08:52:59.108369 ignition[677]: Stage: disks
May 15 08:52:59.108661 ignition[677]: reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:59.108702 ignition[677]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:59.110885 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:59.113635 ignition[677]: disks: disks passed
May 15 08:52:59.113739 ignition[677]: Ignition finished successfully
May 15 08:52:59.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.114823 systemd[1]: Finished ignition-disks.service.
May 15 08:52:59.115955 systemd[1]: Reached target initrd-root-device.target.
May 15 08:52:59.117516 systemd[1]: Reached target local-fs-pre.target.
May 15 08:52:59.119163 systemd[1]: Reached target local-fs.target.
May 15 08:52:59.120854 systemd[1]: Reached target sysinit.target.
May 15 08:52:59.122963 systemd[1]: Reached target basic.target.
May 15 08:52:59.127007 systemd[1]: Starting systemd-fsck-root.service...
May 15 08:52:59.156814 systemd-fsck[684]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks
May 15 08:52:59.169399 systemd[1]: Finished systemd-fsck-root.service.
May 15 08:52:59.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.170778 systemd[1]: Mounting sysroot.mount...
May 15 08:52:59.193462 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 15 08:52:59.194330 systemd[1]: Mounted sysroot.mount.
May 15 08:52:59.194981 systemd[1]: Reached target initrd-root-fs.target.
May 15 08:52:59.198292 systemd[1]: Mounting sysroot-usr.mount...
May 15 08:52:59.199145 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 15 08:52:59.199859 systemd[1]: Starting flatcar-openstack-hostname.service...
May 15 08:52:59.200387 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 08:52:59.200418 systemd[1]: Reached target ignition-diskful.target.
May 15 08:52:59.204981 systemd[1]: Mounted sysroot-usr.mount.
May 15 08:52:59.208717 systemd[1]: Starting initrd-setup-root.service...
May 15 08:52:59.221841 initrd-setup-root[695]: cut: /sysroot/etc/passwd: No such file or directory
May 15 08:52:59.245052 initrd-setup-root[703]: cut: /sysroot/etc/group: No such file or directory
May 15 08:52:59.253778 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 08:52:59.267210 initrd-setup-root[712]: cut: /sysroot/etc/shadow: No such file or directory
May 15 08:52:59.275485 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (709)
May 15 08:52:59.284342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 08:52:59.284407 kernel: BTRFS info (device vda6): using free space tree
May 15 08:52:59.284477 kernel: BTRFS info (device vda6): has skinny extents
May 15 08:52:59.290091 initrd-setup-root[722]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 08:52:59.311195 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 08:52:59.375823 systemd[1]: Finished initrd-setup-root.service.
May 15 08:52:59.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.378728 systemd[1]: Starting ignition-mount.service...
May 15 08:52:59.381405 systemd[1]: Starting sysroot-boot.service...
May 15 08:52:59.417796 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
May 15 08:52:59.418019 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
May 15 08:52:59.441243 systemd[1]: Finished sysroot-boot.service.
May 15 08:52:59.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.451864 ignition[759]: INFO : Ignition 2.14.0
May 15 08:52:59.451864 ignition[759]: INFO : Stage: mount
May 15 08:52:59.453114 ignition[759]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:59.453114 ignition[759]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:59.453114 ignition[759]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:59.456392 ignition[759]: INFO : mount: mount passed
May 15 08:52:59.456392 ignition[759]: INFO : Ignition finished successfully
May 15 08:52:59.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.454924 systemd[1]: Finished ignition-mount.service.
May 15 08:52:59.468858 coreos-metadata[690]: May 15 08:52:59.468 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 15 08:52:59.488250 coreos-metadata[690]: May 15 08:52:59.488 INFO Fetch successful
May 15 08:52:59.489011 coreos-metadata[690]: May 15 08:52:59.488 INFO wrote hostname ci-3510-3-7-n-fb2247adc4.novalocal to /sysroot/etc/hostname
May 15 08:52:59.493249 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 15 08:52:59.493347 systemd[1]: Finished flatcar-openstack-hostname.service.
May 15 08:52:59.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:52:59.495465 systemd[1]: Starting ignition-files.service...
May 15 08:52:59.503008 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 15 08:52:59.513483 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (767)
May 15 08:52:59.517585 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 15 08:52:59.517613 kernel: BTRFS info (device vda6): using free space tree
May 15 08:52:59.517625 kernel: BTRFS info (device vda6): has skinny extents
May 15 08:52:59.527327 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 15 08:52:59.548481 ignition[786]: INFO : Ignition 2.14.0
May 15 08:52:59.548481 ignition[786]: INFO : Stage: files
May 15 08:52:59.549617 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:52:59.549617 ignition[786]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:52:59.551362 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:52:59.555369 ignition[786]: DEBUG : files: compiled without relabeling support, skipping
May 15 08:52:59.556741 ignition[786]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 08:52:59.556741 ignition[786]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 08:52:59.561920 ignition[786]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 08:52:59.562852 ignition[786]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 08:52:59.564113 unknown[786]: wrote ssh authorized keys file for user: core
May 15 08:52:59.564812 ignition[786]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 08:52:59.565561 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 08:52:59.568239 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 15 08:52:59.642830 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 08:52:59.821081 systemd-networkd[642]: eth0: Gained IPv6LL
May 15 08:53:00.442359 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 15 08:53:00.446535 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 08:53:00.447411 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 15 08:53:01.223360 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 08:53:01.844658 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 08:53:01.844658 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 08:53:01.849016 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
May 15 08:53:02.407768 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 08:53:04.801038 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
May 15 08:53:04.802643 ignition[786]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
May 15 08:53:04.803493 ignition[786]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 15 08:53:04.804301 ignition[786]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
May 15 08:53:04.806337 ignition[786]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 08:53:04.808355 ignition[786]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 08:53:04.808355 ignition[786]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
May 15 08:53:04.808355 ignition[786]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 15 08:53:04.808355 ignition[786]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 15 08:53:04.808355 ignition[786]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 15 08:53:04.808355 ignition[786]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 15 08:53:04.832753 kernel: kauditd_printk_skb: 27 callbacks suppressed
May 15 08:53:04.832782 kernel: audit: type=1130 audit(1747299184.817:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.815877 systemd[1]: Finished ignition-files.service.
May 15 08:53:04.839011 ignition[786]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 08:53:04.839011 ignition[786]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 08:53:04.839011 ignition[786]: INFO : files: files passed
May 15 08:53:04.839011 ignition[786]: INFO : Ignition finished successfully
May 15 08:53:04.860673 kernel: audit: type=1130 audit(1747299184.839:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.860704 kernel: audit: type=1130 audit(1747299184.847:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.860717 kernel: audit: type=1131 audit(1747299184.847:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.819084 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 15 08:53:04.831151 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 15 08:53:04.863646 initrd-setup-root-after-ignition[811]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 08:53:04.832901 systemd[1]: Starting ignition-quench.service...
May 15 08:53:04.838765 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 15 08:53:04.840662 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 08:53:04.840858 systemd[1]: Finished ignition-quench.service.
May 15 08:53:04.848109 systemd[1]: Reached target ignition-complete.target.
May 15 08:53:04.861494 systemd[1]: Starting initrd-parse-etc.service...
May 15 08:53:04.890638 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 08:53:04.892512 systemd[1]: Finished initrd-parse-etc.service.
May 15 08:53:04.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.900662 systemd[1]: Reached target initrd-fs.target.
May 15 08:53:04.918922 kernel: audit: type=1130 audit(1747299184.894:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.918983 kernel: audit: type=1131 audit(1747299184.900:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.917119 systemd[1]: Reached target initrd.target.
May 15 08:53:04.917732 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 15 08:53:04.918584 systemd[1]: Starting dracut-pre-pivot.service...
May 15 08:53:04.934931 systemd[1]: Finished dracut-pre-pivot.service.
May 15 08:53:04.941599 kernel: audit: type=1130 audit(1747299184.935:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.936278 systemd[1]: Starting initrd-cleanup.service...
May 15 08:53:04.951407 systemd[1]: Stopped target nss-lookup.target.
May 15 08:53:04.952611 systemd[1]: Stopped target remote-cryptsetup.target.
May 15 08:53:04.953810 systemd[1]: Stopped target timers.target.
May 15 08:53:04.954988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 08:53:04.955732 systemd[1]: Stopped dracut-pre-pivot.service.
May 15 08:53:04.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.957032 systemd[1]: Stopped target initrd.target.
May 15 08:53:04.961814 kernel: audit: type=1131 audit(1747299184.956:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.962377 systemd[1]: Stopped target basic.target.
May 15 08:53:04.962961 systemd[1]: Stopped target ignition-complete.target.
May 15 08:53:04.963945 systemd[1]: Stopped target ignition-diskful.target.
May 15 08:53:04.964988 systemd[1]: Stopped target initrd-root-device.target.
May 15 08:53:04.966044 systemd[1]: Stopped target remote-fs.target.
May 15 08:53:04.967116 systemd[1]: Stopped target remote-fs-pre.target.
May 15 08:53:04.968113 systemd[1]: Stopped target sysinit.target.
May 15 08:53:04.969095 systemd[1]: Stopped target local-fs.target.
May 15 08:53:04.970027 systemd[1]: Stopped target local-fs-pre.target.
May 15 08:53:04.971090 systemd[1]: Stopped target swap.target.
May 15 08:53:04.972058 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 08:53:04.978507 kernel: audit: type=1131 audit(1747299184.972:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.972181 systemd[1]: Stopped dracut-pre-mount.service.
May 15 08:53:04.973162 systemd[1]: Stopped target cryptsetup.target.
May 15 08:53:04.985241 kernel: audit: type=1131 audit(1747299184.979:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.978975 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 08:53:04.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.979077 systemd[1]: Stopped dracut-initqueue.service.
May 15 08:53:04.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.980138 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 08:53:04.980288 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 15 08:53:04.985886 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 08:53:04.986025 systemd[1]: Stopped ignition-files.service.
May 15 08:53:04.987835 systemd[1]: Stopping ignition-mount.service...
May 15 08:53:04.995324 iscsid[652]: iscsid shutting down.
May 15 08:53:04.997111 systemd[1]: Stopping iscsid.service...
May 15 08:53:04.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.003878 ignition[824]: INFO : Ignition 2.14.0
May 15 08:53:05.003878 ignition[824]: INFO : Stage: umount
May 15 08:53:05.003878 ignition[824]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 15 08:53:05.003878 ignition[824]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
May 15 08:53:05.003878 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 15 08:53:05.003878 ignition[824]: INFO : umount: umount passed
May 15 08:53:05.003878 ignition[824]: INFO : Ignition finished successfully
May 15 08:53:05.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:04.998906 systemd[1]: Stopping sysroot-boot.service...
May 15 08:53:04.999385 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 08:53:04.999598 systemd[1]: Stopped systemd-udev-trigger.service.
May 15 08:53:05.000326 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 08:53:05.000511 systemd[1]: Stopped dracut-pre-trigger.service.
May 15 08:53:05.003242 systemd[1]: iscsid.service: Deactivated successfully.
May 15 08:53:05.003346 systemd[1]: Stopped iscsid.service.
May 15 08:53:05.005084 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 08:53:05.005164 systemd[1]: Finished initrd-cleanup.service.
May 15 08:53:05.005968 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 08:53:05.006048 systemd[1]: Stopped ignition-mount.service.
May 15 08:53:05.011977 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 08:53:05.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.012042 systemd[1]: Stopped ignition-disks.service.
May 15 08:53:05.012562 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 08:53:05.012601 systemd[1]: Stopped ignition-kargs.service.
May 15 08:53:05.013091 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 15 08:53:05.013127 systemd[1]: Stopped ignition-fetch.service.
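As an aside on reading the umount-stage Ignition entries above: Ignition logs "parsing config with SHA512: ..." before acting on a config file. A minimal sketch (not Ignition's actual implementation; `config_digest` is a hypothetical helper name) of how that digest can be reproduced for a local file such as `/usr/lib/ignition/base.d/base.ign`:

```python
import hashlib

def config_digest(path: str) -> str:
    """Return the hex SHA-512 digest of a config file's raw bytes,
    matching the digest format Ignition prints in its DEBUG log line."""
    with open(path, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()
```

Comparing this helper's output against the logged digest is one way to confirm which config file a given boot actually parsed.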
May 15 08:53:05.013646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 08:53:05.013685 systemd[1]: Stopped ignition-fetch-offline.service.
May 15 08:53:05.014239 systemd[1]: Stopped target paths.target.
May 15 08:53:05.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.014840 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 08:53:05.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.022609 systemd[1]: Stopped systemd-ask-password-console.path.
May 15 08:53:05.023404 systemd[1]: Stopped target slices.target.
May 15 08:53:05.024545 systemd[1]: Stopped target sockets.target.
May 15 08:53:05.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.025630 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 08:53:05.025670 systemd[1]: Closed iscsid.socket.
May 15 08:53:05.026683 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 08:53:05.026725 systemd[1]: Stopped ignition-setup.service.
May 15 08:53:05.027709 systemd[1]: Stopping iscsiuio.service...
May 15 08:53:05.037002 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 08:53:05.037494 systemd[1]: iscsiuio.service: Deactivated successfully.
May 15 08:53:05.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.037591 systemd[1]: Stopped iscsiuio.service.
May 15 08:53:05.038381 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 08:53:05.038479 systemd[1]: Stopped sysroot-boot.service.
May 15 08:53:05.039238 systemd[1]: Stopped target network.target.
May 15 08:53:05.040333 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 08:53:05.040367 systemd[1]: Closed iscsiuio.socket.
May 15 08:53:05.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.041251 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 08:53:05.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.041291 systemd[1]: Stopped initrd-setup-root.service.
May 15 08:53:05.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.042458 systemd[1]: Stopping systemd-networkd.service...
May 15 08:53:05.043354 systemd[1]: Stopping systemd-resolved.service...
May 15 08:53:05.045482 systemd-networkd[642]: eth0: DHCPv6 lease lost
May 15 08:53:05.057000 audit: BPF prog-id=9 op=UNLOAD
May 15 08:53:05.046572 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 08:53:05.046658 systemd[1]: Stopped systemd-networkd.service.
May 15 08:53:05.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.048311 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 08:53:05.048344 systemd[1]: Closed systemd-networkd.socket.
May 15 08:53:05.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.049854 systemd[1]: Stopping network-cleanup.service...
May 15 08:53:05.053041 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 08:53:05.063000 audit: BPF prog-id=6 op=UNLOAD
May 15 08:53:05.053098 systemd[1]: Stopped parse-ip-for-networkd.service.
May 15 08:53:05.054209 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 08:53:05.054278 systemd[1]: Stopped systemd-sysctl.service.
May 15 08:53:05.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.055371 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 08:53:05.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.055410 systemd[1]: Stopped systemd-modules-load.service.
May 15 08:53:05.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.056291 systemd[1]: Stopping systemd-udevd.service...
May 15 08:53:05.058284 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 15 08:53:05.058804 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 08:53:05.058897 systemd[1]: Stopped systemd-resolved.service.
May 15 08:53:05.061279 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 08:53:05.061399 systemd[1]: Stopped systemd-udevd.service.
May 15 08:53:05.063403 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 08:53:05.063521 systemd[1]: Closed systemd-udevd-control.socket.
May 15 08:53:05.065718 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 08:53:05.065750 systemd[1]: Closed systemd-udevd-kernel.socket.
May 15 08:53:05.066822 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 08:53:05.066864 systemd[1]: Stopped dracut-pre-udev.service.
May 15 08:53:05.067866 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 08:53:05.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.067906 systemd[1]: Stopped dracut-cmdline.service.
May 15 08:53:05.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.068826 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 08:53:05.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.068866 systemd[1]: Stopped dracut-cmdline-ask.service.
May 15 08:53:05.070782 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 15 08:53:05.077131 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 08:53:05.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.077200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 15 08:53:05.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:05.078376 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 08:53:05.078417 systemd[1]: Stopped kmod-static-nodes.service.
May 15 08:53:05.079106 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 08:53:05.079143 systemd[1]: Stopped systemd-vconsole-setup.service.
May 15 08:53:05.080950 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 15 08:53:05.081413 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 08:53:05.081547 systemd[1]: Stopped network-cleanup.service.
May 15 08:53:05.082407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 08:53:05.082509 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 15 08:53:05.083471 systemd[1]: Reached target initrd-switch-root.target.
May 15 08:53:05.084955 systemd[1]: Starting initrd-switch-root.service...
May 15 08:53:05.104960 systemd[1]: Switching root.
May 15 08:53:05.127072 systemd-journald[186]: Journal stopped
May 15 08:53:09.716373 systemd-journald[186]: Received SIGTERM from PID 1 (n/a).
May 15 08:53:09.717184 kernel: SELinux: Class mctp_socket not defined in policy.
May 15 08:53:09.717218 kernel: SELinux: Class anon_inode not defined in policy.
May 15 08:53:09.717232 kernel: SELinux: the above unknown classes and permissions will be allowed
May 15 08:53:09.717247 kernel: SELinux: policy capability network_peer_controls=1
May 15 08:53:09.717259 kernel: SELinux: policy capability open_perms=1
May 15 08:53:09.717270 kernel: SELinux: policy capability extended_socket_class=1
May 15 08:53:09.717281 kernel: SELinux: policy capability always_check_network=0
May 15 08:53:09.717293 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 08:53:09.717304 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 08:53:09.717314 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 08:53:09.717325 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 08:53:09.717340 systemd[1]: Successfully loaded SELinux policy in 103.760ms.
May 15 08:53:09.717362 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.111ms.
May 15 08:53:09.717376 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 15 08:53:09.717389 systemd[1]: Detected virtualization kvm.
May 15 08:53:09.717400 systemd[1]: Detected architecture x86-64.
May 15 08:53:09.717412 systemd[1]: Detected first boot.
May 15 08:53:09.717446 systemd[1]: Hostname set to .
May 15 08:53:09.723602 systemd[1]: Initializing machine ID from VM UUID.
May 15 08:53:09.723624 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 15 08:53:09.723624 systemd[1]: Populated /etc with preset unit settings.
May 15 08:53:09.723650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 08:53:09.723668 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 08:53:09.723683 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 08:53:09.723697 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 08:53:09.723711 systemd[1]: Stopped initrd-switch-root.service.
May 15 08:53:09.723724 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 08:53:09.723736 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 15 08:53:09.723749 systemd[1]: Created slice system-addon\x2drun.slice.
May 15 08:53:09.723762 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 15 08:53:09.723774 systemd[1]: Created slice system-getty.slice.
May 15 08:53:09.723786 systemd[1]: Created slice system-modprobe.slice.
May 15 08:53:09.723798 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 15 08:53:09.723810 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 15 08:53:09.723825 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 15 08:53:09.723842 systemd[1]: Created slice user.slice.
May 15 08:53:09.723855 systemd[1]: Started systemd-ask-password-console.path.
May 15 08:53:09.723868 systemd[1]: Started systemd-ask-password-wall.path.
May 15 08:53:09.723880 systemd[1]: Set up automount boot.automount.
May 15 08:53:09.723894 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 15 08:53:09.723908 systemd[1]: Stopped target initrd-switch-root.target.
May 15 08:53:09.723921 systemd[1]: Stopped target initrd-fs.target.
May 15 08:53:09.723933 systemd[1]: Stopped target initrd-root-fs.target.
May 15 08:53:09.723946 systemd[1]: Reached target integritysetup.target.
May 15 08:53:09.723958 systemd[1]: Reached target remote-cryptsetup.target.
May 15 08:53:09.723970 systemd[1]: Reached target remote-fs.target.
May 15 08:53:09.723982 systemd[1]: Reached target slices.target.
May 15 08:53:09.723994 systemd[1]: Reached target swap.target.
May 15 08:53:09.724007 systemd[1]: Reached target torcx.target.
May 15 08:53:09.724020 systemd[1]: Reached target veritysetup.target.
May 15 08:53:09.724036 systemd[1]: Listening on systemd-coredump.socket.
May 15 08:53:09.724050 systemd[1]: Listening on systemd-initctl.socket.
May 15 08:53:09.724062 systemd[1]: Listening on systemd-networkd.socket.
May 15 08:53:09.724075 systemd[1]: Listening on systemd-udevd-control.socket.
May 15 08:53:09.724087 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 15 08:53:09.724100 systemd[1]: Listening on systemd-userdbd.socket.
May 15 08:53:09.724111 systemd[1]: Mounting dev-hugepages.mount...
May 15 08:53:09.724123 systemd[1]: Mounting dev-mqueue.mount...
May 15 08:53:09.724136 systemd[1]: Mounting media.mount...
May 15 08:53:09.724152 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 08:53:09.724164 systemd[1]: Mounting sys-kernel-debug.mount...
May 15 08:53:09.724176 systemd[1]: Mounting sys-kernel-tracing.mount...
May 15 08:53:09.724188 systemd[1]: Mounting tmp.mount...
May 15 08:53:09.724201 systemd[1]: Starting flatcar-tmpfiles.service...
May 15 08:53:09.724217 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 15 08:53:09.724229 systemd[1]: Starting kmod-static-nodes.service...
May 15 08:53:09.724242 systemd[1]: Starting modprobe@configfs.service...
May 15 08:53:09.724253 systemd[1]: Starting modprobe@dm_mod.service...
May 15 08:53:09.724267 systemd[1]: Starting modprobe@drm.service...
May 15 08:53:09.724278 systemd[1]: Starting modprobe@efi_pstore.service...
May 15 08:53:09.724290 systemd[1]: Starting modprobe@fuse.service...
May 15 08:53:09.724301 systemd[1]: Starting modprobe@loop.service...
May 15 08:53:09.724313 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 08:53:09.724324 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 08:53:09.724336 systemd[1]: Stopped systemd-fsck-root.service.
May 15 08:53:09.724348 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 08:53:09.724360 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 08:53:09.724373 systemd[1]: Stopped systemd-journald.service.
May 15 08:53:09.724385 systemd[1]: Starting systemd-journald.service...
May 15 08:53:09.724396 kernel: loop: module loaded
May 15 08:53:09.724407 systemd[1]: Starting systemd-modules-load.service...
May 15 08:53:09.724438 systemd[1]: Starting systemd-network-generator.service...
May 15 08:53:09.724454 systemd[1]: Starting systemd-remount-fs.service...
May 15 08:53:09.724466 systemd[1]: Starting systemd-udev-trigger.service...
May 15 08:53:09.724477 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 08:53:09.724489 systemd[1]: Stopped verity-setup.service.
May 15 08:53:09.724503 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 08:53:09.724515 systemd[1]: Mounted dev-hugepages.mount.
May 15 08:53:09.724526 systemd[1]: Mounted dev-mqueue.mount.
May 15 08:53:09.724537 systemd[1]: Mounted media.mount.
May 15 08:53:09.724549 systemd[1]: Mounted sys-kernel-debug.mount.
May 15 08:53:09.724561 systemd[1]: Mounted sys-kernel-tracing.mount.
May 15 08:53:09.724571 kernel: fuse: init (API version 7.34)
May 15 08:53:09.724582 systemd[1]: Mounted tmp.mount.
May 15 08:53:09.724593 systemd[1]: Finished kmod-static-nodes.service.
May 15 08:53:09.724607 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 08:53:09.724618 systemd[1]: Finished modprobe@configfs.service.
May 15 08:53:09.724630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 08:53:09.724642 systemd[1]: Finished modprobe@dm_mod.service. May 15 08:53:09.724658 systemd-journald[934]: Journal started May 15 08:53:09.724720 systemd-journald[934]: Runtime Journal (/run/log/journal/399bd93f0b7547a6becc633f22d1b20f) is 8.0M, max 78.4M, 70.4M free. May 15 08:53:05.440000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 08:53:05.535000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 08:53:05.535000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 08:53:05.536000 audit: BPF prog-id=10 op=LOAD May 15 08:53:05.536000 audit: BPF prog-id=10 op=UNLOAD May 15 08:53:05.536000 audit: BPF prog-id=11 op=LOAD May 15 08:53:05.536000 audit: BPF prog-id=11 op=UNLOAD May 15 08:53:05.693000 audit[856]: AVC avc: denied { associate } for pid=856 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 08:53:05.693000 audit[856]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178c2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=839 pid=856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 08:53:05.693000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 08:53:05.696000 audit[856]: AVC avc: denied { associate } for pid=856 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 08:53:05.696000 audit[856]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000117999 a2=1ed a3=0 items=2 ppid=839 pid=856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 08:53:05.696000 audit: CWD cwd="/" May 15 08:53:05.696000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:05.696000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:05.696000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 08:53:09.489000 audit: BPF prog-id=12 op=LOAD May 15 08:53:09.489000 audit: BPF prog-id=3 op=UNLOAD May 15 08:53:09.489000 audit: BPF prog-id=13 op=LOAD May 15 08:53:09.489000 audit: BPF prog-id=14 op=LOAD May 15 08:53:09.489000 audit: BPF prog-id=4 op=UNLOAD May 15 08:53:09.489000 audit: BPF prog-id=5 op=UNLOAD May 15 08:53:09.490000 
audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.728536 systemd[1]: Started systemd-journald.service. May 15 08:53:09.502000 audit: BPF prog-id=12 op=UNLOAD May 15 08:53:09.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:09.659000 audit: BPF prog-id=15 op=LOAD May 15 08:53:09.659000 audit: BPF prog-id=16 op=LOAD May 15 08:53:09.660000 audit: BPF prog-id=17 op=LOAD May 15 08:53:09.660000 audit: BPF prog-id=13 op=UNLOAD May 15 08:53:09.660000 audit: BPF prog-id=14 op=UNLOAD May 15 08:53:09.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.714000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 08:53:09.714000 audit[934]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff0aeee440 a2=4000 a3=7fff0aeee4dc items=0 ppid=1 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 08:53:09.714000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 08:53:09.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:09.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:05.690625 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 08:53:09.487541 systemd[1]: Queued start job for default target multi-user.target. May 15 08:53:05.691492 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 08:53:09.487556 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 08:53:05.691515 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 08:53:09.490893 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 08:53:09.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:05.691554 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 08:53:09.727851 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 15 08:53:05.691567 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 08:53:09.727973 systemd[1]: Finished modprobe@drm.service. May 15 08:53:05.691601 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 08:53:09.728628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 08:53:05.691616 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 08:53:09.728741 systemd[1]: Finished modprobe@efi_pstore.service. May 15 08:53:05.691854 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 08:53:09.729394 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 08:53:09.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:05.691896 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 08:53:09.729543 systemd[1]: Finished modprobe@fuse.service. 
May 15 08:53:05.691912 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 08:53:09.730254 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 08:53:05.692927 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 08:53:09.730370 systemd[1]: Finished modprobe@loop.service. May 15 08:53:05.692966 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 08:53:09.731224 systemd[1]: Finished systemd-modules-load.service. May 15 08:53:05.692987 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 15 08:53:05.693005 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 08:53:09.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:05.693024 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 15 08:53:05.693041 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 08:53:09.042754 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:09Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 08:53:09.732172 systemd[1]: Finished systemd-network-generator.service. May 15 08:53:09.043341 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:09Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 08:53:09.043523 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:09Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 08:53:09.043745 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:09Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 08:53:09.043819 /usr/lib/systemd/system-generators/torcx-generator[856]: 
time="2025-05-15T08:53:09Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 08:53:09.043912 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-05-15T08:53:09Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 08:53:09.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.733156 systemd[1]: Finished systemd-remount-fs.service. May 15 08:53:09.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.736115 systemd[1]: Reached target network-pre.target. May 15 08:53:09.739570 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 08:53:09.741055 systemd[1]: Mounting sys-kernel-config.mount... May 15 08:53:09.744815 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 08:53:09.747237 systemd[1]: Starting systemd-hwdb-update.service... May 15 08:53:09.748740 systemd[1]: Starting systemd-journal-flush.service... May 15 08:53:09.749276 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 08:53:09.750293 systemd[1]: Starting systemd-random-seed.service... May 15 08:53:09.750940 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 08:53:09.751962 systemd[1]: Starting systemd-sysctl.service... 
May 15 08:53:09.753858 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 08:53:09.756992 systemd[1]: Mounted sys-kernel-config.mount. May 15 08:53:09.765117 systemd-journald[934]: Time spent on flushing to /var/log/journal/399bd93f0b7547a6becc633f22d1b20f is 35.832ms for 1100 entries. May 15 08:53:09.765117 systemd-journald[934]: System Journal (/var/log/journal/399bd93f0b7547a6becc633f22d1b20f) is 8.0M, max 584.8M, 576.8M free. May 15 08:53:09.824691 systemd-journald[934]: Received client request to flush runtime journal. May 15 08:53:09.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.778310 systemd[1]: Finished systemd-random-seed.service. May 15 08:53:09.778965 systemd[1]: Reached target first-boot-complete.target. May 15 08:53:09.830409 udevadm[965]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 08:53:09.795819 systemd[1]: Finished systemd-sysctl.service. May 15 08:53:09.797993 systemd[1]: Finished flatcar-tmpfiles.service. May 15 08:53:09.799668 systemd[1]: Starting systemd-sysusers.service... May 15 08:53:09.815994 systemd[1]: Finished systemd-udev-trigger.service. May 15 08:53:09.817739 systemd[1]: Starting systemd-udev-settle.service... May 15 08:53:09.828849 systemd[1]: Finished systemd-journal-flush.service. May 15 08:53:09.837465 kernel: kauditd_printk_skb: 93 callbacks suppressed May 15 08:53:09.837510 kernel: audit: type=1130 audit(1747299189.828:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.856530 systemd[1]: Finished systemd-sysusers.service. May 15 08:53:09.858211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 08:53:09.864962 kernel: audit: type=1130 audit(1747299189.856:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:09.911711 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 15 08:53:09.918562 kernel: audit: type=1130 audit(1747299189.911:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.544659 systemd[1]: Finished systemd-hwdb-update.service. May 15 08:53:10.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.563184 kernel: audit: type=1130 audit(1747299190.545:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.563280 kernel: audit: type=1334 audit(1747299190.558:136): prog-id=18 op=LOAD May 15 08:53:10.558000 audit: BPF prog-id=18 op=LOAD May 15 08:53:10.560302 systemd[1]: Starting systemd-udevd.service... May 15 08:53:10.558000 audit: BPF prog-id=19 op=LOAD May 15 08:53:10.569492 kernel: audit: type=1334 audit(1747299190.558:137): prog-id=19 op=LOAD May 15 08:53:10.569614 kernel: audit: type=1334 audit(1747299190.558:138): prog-id=7 op=UNLOAD May 15 08:53:10.558000 audit: BPF prog-id=7 op=UNLOAD May 15 08:53:10.558000 audit: BPF prog-id=8 op=UNLOAD May 15 08:53:10.576156 kernel: audit: type=1334 audit(1747299190.558:139): prog-id=8 op=UNLOAD May 15 08:53:10.607319 systemd-udevd[969]: Using default interface naming scheme 'v252'. May 15 08:53:10.647457 systemd[1]: Started systemd-udevd.service. May 15 08:53:10.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:10.664459 kernel: audit: type=1130 audit(1747299190.651:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.672199 systemd[1]: Starting systemd-networkd.service... May 15 08:53:10.669000 audit: BPF prog-id=20 op=LOAD May 15 08:53:10.680653 kernel: audit: type=1334 audit(1747299190.669:141): prog-id=20 op=LOAD May 15 08:53:10.687071 systemd[1]: Starting systemd-userdbd.service... May 15 08:53:10.684000 audit: BPF prog-id=21 op=LOAD May 15 08:53:10.684000 audit: BPF prog-id=22 op=LOAD May 15 08:53:10.684000 audit: BPF prog-id=23 op=LOAD May 15 08:53:10.751705 systemd[1]: Started systemd-userdbd.service. May 15 08:53:10.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.755237 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. May 15 08:53:10.776235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 15 08:53:10.814000 audit[982]: AVC avc: denied { confidentiality } for pid=982 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 08:53:10.814000 audit[982]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5642b0a58110 a1=338ac a2=7fea9de43bc5 a3=5 items=110 ppid=969 pid=982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 08:53:10.814000 audit: CWD cwd="/" May 15 08:53:10.814000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=1 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=2 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=3 name=(null) inode=13598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=4 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=5 name=(null) inode=13599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=6 name=(null) inode=13597 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=7 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=8 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=9 name=(null) inode=13601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=10 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=11 name=(null) inode=13602 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=12 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=13 name=(null) inode=13603 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=14 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=15 name=(null) inode=13604 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=16 name=(null) inode=13600 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=17 name=(null) inode=13605 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=18 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=19 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=20 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=21 name=(null) inode=13607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=22 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=23 name=(null) inode=13608 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=24 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=25 name=(null) inode=13609 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=26 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=27 name=(null) inode=13610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=28 name=(null) inode=13606 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=29 name=(null) inode=13611 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=30 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=31 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=32 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=33 name=(null) inode=13613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=34 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=35 name=(null) inode=13614 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=36 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=37 name=(null) inode=13615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=38 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=39 name=(null) inode=13616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=40 name=(null) inode=13612 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=41 name=(null) inode=13617 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=42 name=(null) inode=13597 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=43 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=44 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=45 name=(null) inode=13619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=46 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=47 name=(null) inode=13620 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=48 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=49 name=(null) inode=13621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=50 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=51 name=(null) inode=13622 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
08:53:10.814000 audit: PATH item=52 name=(null) inode=13618 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=53 name=(null) inode=13623 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=55 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=56 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=57 name=(null) inode=13625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=58 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=59 name=(null) inode=13626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=60 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=61 
name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=62 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=63 name=(null) inode=13628 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=64 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=65 name=(null) inode=13629 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=66 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=67 name=(null) inode=13630 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=68 name=(null) inode=13627 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=69 name=(null) inode=13631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=70 name=(null) inode=13627 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=71 name=(null) inode=13632 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=72 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=73 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=74 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=75 name=(null) inode=13634 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=76 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=77 name=(null) inode=13635 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=78 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=79 name=(null) inode=13636 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=80 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=81 name=(null) inode=13637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=82 name=(null) inode=13633 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=83 name=(null) inode=13638 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=84 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=85 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=86 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=87 name=(null) inode=13640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=88 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=89 name=(null) inode=13641 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=90 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=91 name=(null) inode=13642 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=92 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=93 name=(null) inode=13643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=94 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=95 name=(null) inode=13644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=96 name=(null) inode=13624 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=97 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=98 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=99 name=(null) inode=13646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=100 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=101 name=(null) inode=13647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=102 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=103 name=(null) inode=13648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=104 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=105 name=(null) inode=13649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=106 name=(null) inode=13645 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
08:53:10.814000 audit: PATH item=107 name=(null) inode=13650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PATH item=109 name=(null) inode=13651 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 08:53:10.814000 audit: PROCTITLE proctitle="(udev-worker)" May 15 08:53:10.859456 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 08:53:10.859542 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 May 15 08:53:10.867455 kernel: ACPI: button: Power Button [PWRF] May 15 08:53:10.881479 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 08:53:10.946450 kernel: mousedev: PS/2 mouse device common for all mice May 15 08:53:10.966245 systemd[1]: Finished systemd-udev-settle.service. May 15 08:53:10.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:10.969953 systemd[1]: Starting lvm2-activation-early.service... May 15 08:53:11.177790 systemd-networkd[990]: lo: Link UP May 15 08:53:11.177816 systemd-networkd[990]: lo: Gained carrier May 15 08:53:11.178829 systemd-networkd[990]: Enumeration completed May 15 08:53:11.179038 systemd[1]: Started systemd-networkd.service. 
May 15 08:53:11.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:11.181810 systemd-networkd[990]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 08:53:11.205543 systemd-networkd[990]: eth0: Link UP May 15 08:53:11.205783 systemd-networkd[990]: eth0: Gained carrier May 15 08:53:11.225909 systemd-networkd[990]: eth0: DHCPv4 address 172.24.4.191/24, gateway 172.24.4.1 acquired from 172.24.4.1 May 15 08:53:11.302240 lvm[1003]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 08:53:11.352906 systemd[1]: Finished lvm2-activation-early.service. May 15 08:53:11.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:11.354356 systemd[1]: Reached target cryptsetup.target. May 15 08:53:11.357572 systemd[1]: Starting lvm2-activation.service... May 15 08:53:11.369392 lvm[1004]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 08:53:11.411311 systemd[1]: Finished lvm2-activation.service. May 15 08:53:11.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:11.412692 systemd[1]: Reached target local-fs-pre.target. May 15 08:53:11.413839 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 08:53:11.413899 systemd[1]: Reached target local-fs.target. May 15 08:53:11.415025 systemd[1]: Reached target machines.target. 
May 15 08:53:11.418537 systemd[1]: Starting ldconfig.service... May 15 08:53:11.420882 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 08:53:11.420974 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 08:53:11.423208 systemd[1]: Starting systemd-boot-update.service... May 15 08:53:11.427296 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 08:53:11.431109 systemd[1]: Starting systemd-machine-id-commit.service... May 15 08:53:11.440632 systemd[1]: Starting systemd-sysext.service... May 15 08:53:11.456756 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1006 (bootctl) May 15 08:53:11.459089 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 08:53:11.488653 systemd[1]: Unmounting usr-share-oem.mount... May 15 08:53:11.508221 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 08:53:11.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:11.530624 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 08:53:11.530990 systemd[1]: Unmounted usr-share-oem.mount. May 15 08:53:11.572536 kernel: loop0: detected capacity change from 0 to 218376 May 15 08:53:12.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.238518 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 08:53:12.239801 systemd[1]: Finished systemd-machine-id-commit.service. 
May 15 08:53:12.275815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 08:53:12.318592 kernel: loop1: detected capacity change from 0 to 218376 May 15 08:53:12.368917 (sd-sysext)[1021]: Using extensions 'kubernetes'. May 15 08:53:12.369941 (sd-sysext)[1021]: Merged extensions into '/usr'. May 15 08:53:12.408869 systemd-fsck[1017]: fsck.fat 4.2 (2021-01-31) May 15 08:53:12.408869 systemd-fsck[1017]: /dev/vda1: 790 files, 120690/258078 clusters May 15 08:53:12.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.415742 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 08:53:12.418584 systemd[1]: Mounting boot.mount... May 15 08:53:12.419095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 08:53:12.424859 systemd[1]: Mounting usr-share-oem.mount... May 15 08:53:12.426468 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 08:53:12.428021 systemd[1]: Starting modprobe@dm_mod.service... May 15 08:53:12.431535 systemd[1]: Starting modprobe@efi_pstore.service... May 15 08:53:12.434169 systemd[1]: Starting modprobe@loop.service... May 15 08:53:12.435287 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 08:53:12.435467 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 08:53:12.435725 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 08:53:12.439004 systemd[1]: Mounted usr-share-oem.mount. 
May 15 08:53:12.439824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 08:53:12.439958 systemd[1]: Finished modprobe@dm_mod.service. May 15 08:53:12.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.441369 systemd[1]: Finished systemd-sysext.service. May 15 08:53:12.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.442044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 08:53:12.442178 systemd[1]: Finished modprobe@efi_pstore.service. May 15 08:53:12.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.442949 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 08:53:12.443072 systemd[1]: Finished modprobe@loop.service. May 15 08:53:12.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 08:53:12.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.445802 systemd[1]: Starting ensure-sysext.service... May 15 08:53:12.446310 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 08:53:12.446368 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 08:53:12.447378 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 08:53:12.456537 systemd[1]: Reloading. May 15 08:53:12.503353 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 08:53:12.508073 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 08:53:12.510646 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 08:53:12.551032 /usr/lib/systemd/system-generators/torcx-generator[1048]: time="2025-05-15T08:53:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 08:53:12.558154 /usr/lib/systemd/system-generators/torcx-generator[1048]: time="2025-05-15T08:53:12Z" level=info msg="torcx already run" May 15 08:53:12.687361 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 08:53:12.687948 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 15 08:53:12.711703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 08:53:12.748634 systemd-networkd[990]: eth0: Gained IPv6LL May 15 08:53:12.774000 audit: BPF prog-id=24 op=LOAD May 15 08:53:12.774000 audit: BPF prog-id=20 op=UNLOAD May 15 08:53:12.775000 audit: BPF prog-id=25 op=LOAD May 15 08:53:12.775000 audit: BPF prog-id=26 op=LOAD May 15 08:53:12.775000 audit: BPF prog-id=18 op=UNLOAD May 15 08:53:12.775000 audit: BPF prog-id=19 op=UNLOAD May 15 08:53:12.777000 audit: BPF prog-id=27 op=LOAD May 15 08:53:12.777000 audit: BPF prog-id=21 op=UNLOAD May 15 08:53:12.777000 audit: BPF prog-id=28 op=LOAD May 15 08:53:12.777000 audit: BPF prog-id=29 op=LOAD May 15 08:53:12.777000 audit: BPF prog-id=22 op=UNLOAD May 15 08:53:12.777000 audit: BPF prog-id=23 op=UNLOAD May 15 08:53:12.778000 audit: BPF prog-id=30 op=LOAD May 15 08:53:12.778000 audit: BPF prog-id=15 op=UNLOAD May 15 08:53:12.778000 audit: BPF prog-id=31 op=LOAD May 15 08:53:12.778000 audit: BPF prog-id=32 op=LOAD May 15 08:53:12.778000 audit: BPF prog-id=16 op=UNLOAD May 15 08:53:12.778000 audit: BPF prog-id=17 op=UNLOAD May 15 08:53:12.788843 systemd[1]: Mounted boot.mount. May 15 08:53:12.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.814237 systemd[1]: Finished systemd-boot-update.service. May 15 08:53:12.827030 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 08:53:12.827271 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 08:53:12.828741 systemd[1]: Starting modprobe@dm_mod.service... 
May 15 08:53:12.831185 systemd[1]: Starting modprobe@efi_pstore.service... May 15 08:53:12.834942 systemd[1]: Starting modprobe@loop.service... May 15 08:53:12.835542 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 08:53:12.835669 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 08:53:12.835800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 08:53:12.836749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 08:53:12.836878 systemd[1]: Finished modprobe@dm_mod.service. May 15 08:53:12.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.838611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 08:53:12.838728 systemd[1]: Finished modprobe@efi_pstore.service. May 15 08:53:12.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.840057 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 15 08:53:12.840178 systemd[1]: Finished modprobe@loop.service. May 15 08:53:12.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.841784 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 08:53:12.841885 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 08:53:12.849693 systemd[1]: Finished ensure-sysext.service. May 15 08:53:12.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 08:53:12.851820 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 08:53:12.852210 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 08:53:12.853773 systemd[1]: Starting modprobe@dm_mod.service... May 15 08:53:12.855807 systemd[1]: Starting modprobe@drm.service... May 15 08:53:12.858497 systemd[1]: Starting modprobe@efi_pstore.service... May 15 08:53:12.862515 systemd[1]: Starting modprobe@loop.service... May 15 08:53:12.863267 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 08:53:12.863556 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 15 08:53:12.865728 systemd[1]: Starting systemd-networkd-wait-online.service...
May 15 08:53:12.866487 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 15 08:53:12.867163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 08:53:12.867531 systemd[1]: Finished modprobe@dm_mod.service.
May 15 08:53:12.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.868667 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 08:53:12.868807 systemd[1]: Finished modprobe@drm.service.
May 15 08:53:12.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.870036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 08:53:12.870238 systemd[1]: Finished modprobe@efi_pstore.service.
May 15 08:53:12.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.871643 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 08:53:12.877762 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 08:53:12.877905 systemd[1]: Finished modprobe@loop.service.
May 15 08:53:12.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.878588 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 15 08:53:12.881510 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 08:53:12.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.925954 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 15 08:53:12.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.928320 systemd[1]: Starting audit-rules.service...
May 15 08:53:12.930007 systemd[1]: Starting clean-ca-certificates.service...
May 15 08:53:12.931768 systemd[1]: Starting systemd-journal-catalog-update.service...
May 15 08:53:12.932000 audit: BPF prog-id=33 op=LOAD
May 15 08:53:12.934414 systemd[1]: Starting systemd-resolved.service...
May 15 08:53:12.937000 audit: BPF prog-id=34 op=LOAD
May 15 08:53:12.939045 systemd[1]: Starting systemd-timesyncd.service...
May 15 08:53:12.940628 systemd[1]: Starting systemd-update-utmp.service...
May 15 08:53:12.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.949628 systemd[1]: Finished clean-ca-certificates.service.
May 15 08:53:12.950314 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 08:53:12.952000 audit[1110]: SYSTEM_BOOT pid=1110 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.956247 systemd[1]: Finished systemd-update-utmp.service.
May 15 08:53:12.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 08:53:12.995166 systemd[1]: Finished systemd-journal-catalog-update.service.
May 15 08:53:13.020000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 15 08:53:13.020000 audit[1124]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb671fbe0 a2=420 a3=0 items=0 ppid=1104 pid=1124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 08:53:13.020000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 15 08:53:13.020739 augenrules[1124]: No rules
May 15 08:53:13.021876 systemd[1]: Finished audit-rules.service.
May 15 08:53:13.031585 ldconfig[1005]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 08:53:13.042486 systemd-resolved[1108]: Positive Trust Anchors:
May 15 08:53:13.042706 systemd-resolved[1108]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 08:53:13.042744 systemd-resolved[1108]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 08:53:13.052142 systemd[1]: Finished ldconfig.service.
May 15 08:53:13.054157 systemd[1]: Starting systemd-update-done.service...
May 15 08:53:13.054788 systemd[1]: Started systemd-timesyncd.service.
May 15 08:53:13.055472 systemd[1]: Reached target time-set.target.
May 15 08:53:13.060188 systemd-resolved[1108]: Using system hostname 'ci-3510-3-7-n-fb2247adc4.novalocal'.
May 15 08:53:13.061974 systemd[1]: Started systemd-resolved.service.
May 15 08:53:13.062661 systemd[1]: Reached target network.target.
May 15 08:53:13.063183 systemd[1]: Reached target network-online.target.
May 15 08:53:13.063679 systemd[1]: Reached target nss-lookup.target.
May 15 08:53:13.064546 systemd[1]: Finished systemd-update-done.service.
May 15 08:53:13.065073 systemd[1]: Reached target sysinit.target.
May 15 08:53:13.065613 systemd[1]: Started motdgen.path.
May 15 08:53:13.066061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 08:53:13.066768 systemd[1]: Started logrotate.timer.
May 15 08:53:13.067281 systemd[1]: Started mdadm.timer.
May 15 08:53:13.067741 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 08:53:13.068201 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 08:53:13.068233 systemd[1]: Reached target paths.target.
May 15 08:53:13.068856 systemd[1]: Reached target timers.target.
May 15 08:53:13.069675 systemd[1]: Listening on dbus.socket.
May 15 08:53:13.071227 systemd[1]: Starting docker.socket...
May 15 08:53:13.075373 systemd[1]: Listening on sshd.socket.
May 15 08:53:13.077358 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 08:53:13.078664 systemd[1]: Listening on docker.socket.
May 15 08:53:13.079917 systemd[1]: Reached target sockets.target.
May 15 08:53:13.080874 systemd[1]: Reached target basic.target.
May 15 08:53:13.081876 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 08:53:13.081940 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 08:53:13.083901 systemd[1]: Starting containerd.service...
May 15 08:53:13.086549 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 15 08:53:13.089098 systemd[1]: Starting dbus.service...
May 15 08:53:13.090884 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 08:53:13.092732 systemd[1]: Starting extend-filesystems.service...
May 15 08:53:13.093470 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 08:53:13.098594 systemd[1]: Starting kubelet.service...
May 15 08:53:13.100704 systemd[1]: Starting motdgen.service...
May 15 08:53:13.102824 systemd[1]: Starting prepare-helm.service...
May 15 08:53:13.106659 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 08:53:13.113590 systemd[1]: Starting sshd-keygen.service...
May 15 08:53:13.117603 systemd[1]: Starting systemd-logind.service...
May 15 08:53:13.118593 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 08:53:13.118666 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 08:53:13.119156 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 08:53:13.121652 systemd[1]: Starting update-engine.service...
May 15 08:53:13.123628 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 08:53:13.134250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 08:53:13.134488 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 08:53:13.159510 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 08:53:13.162494 jq[1138]: false
May 15 08:53:13.159691 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 08:53:13.163650 jq[1150]: true
May 15 08:53:13.174294 systemd[1]: motdgen.service: Deactivated successfully.
May 15 08:53:13.174582 systemd[1]: Finished motdgen.service.
May 15 08:53:13.189368 jq[1165]: true
May 15 08:53:13.204759 extend-filesystems[1139]: Found loop1
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda1
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda2
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda3
May 15 08:53:13.205769 extend-filesystems[1139]: Found usr
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda4
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda6
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda7
May 15 08:53:13.205769 extend-filesystems[1139]: Found vda9
May 15 08:53:13.205769 extend-filesystems[1139]: Checking size of /dev/vda9
May 15 08:53:13.225638 env[1162]: time="2025-05-15T08:53:13.225561857Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 08:53:13.262908 env[1162]: time="2025-05-15T08:53:13.262858367Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 08:53:13.263604 env[1162]: time="2025-05-15T08:53:13.263583326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.265026 env[1162]: time="2025-05-15T08:53:13.264996657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 08:53:13.265105 env[1162]: time="2025-05-15T08:53:13.265088730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.265442 env[1162]: time="2025-05-15T08:53:13.265375006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 08:53:13.265543 env[1162]: time="2025-05-15T08:53:13.265525399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.266539 env[1162]: time="2025-05-15T08:53:13.266520234Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 08:53:13.266605 env[1162]: time="2025-05-15T08:53:13.266590376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.266747 env[1162]: time="2025-05-15T08:53:13.266728645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.269824 env[1162]: time="2025-05-15T08:53:13.269790718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 08:53:13.270168 env[1162]: time="2025-05-15T08:53:13.270142949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 08:53:13.270242 env[1162]: time="2025-05-15T08:53:13.270227046Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 08:53:13.270357 env[1162]: time="2025-05-15T08:53:13.270336802Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 08:53:13.270479 env[1162]: time="2025-05-15T08:53:13.270462728Z" level=info msg="metadata content store policy set" policy=shared
May 15 08:53:13.270791 systemd-timesyncd[1109]: Contacted time server 45.61.187.39:123 (0.flatcar.pool.ntp.org).
May 15 08:53:13.271159 systemd-timesyncd[1109]: Initial clock synchronization to Thu 2025-05-15 08:53:13.376871 UTC.
May 15 08:53:13.298617 tar[1153]: linux-amd64/LICENSE
May 15 08:53:13.299453 tar[1153]: linux-amd64/helm
May 15 08:53:13.299285 systemd-logind[1148]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 08:53:13.299306 systemd-logind[1148]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 08:53:13.299852 systemd-logind[1148]: New seat seat0.
May 15 08:53:13.331202 extend-filesystems[1139]: Resized partition /dev/vda9
May 15 08:53:13.381231 extend-filesystems[1193]: resize2fs 1.46.5 (30-Dec-2021)
May 15 08:53:13.495477 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 15 08:53:13.509788 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 15 08:53:13.532783 dbus-daemon[1135]: [system] SELinux support is enabled
May 15 08:53:13.533059 systemd[1]: Started dbus.service.
May 15 08:53:13.574586 extend-filesystems[1193]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 08:53:13.574586 extend-filesystems[1193]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 08:53:13.574586 extend-filesystems[1193]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 15 08:53:13.537024 dbus-daemon[1135]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 15 08:53:13.536304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574605909Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574668787Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574688704Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574756952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574880564Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574906202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574924867Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574948622Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574966625Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.574985661Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.575007652Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.575025125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.575211745Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 08:53:13.602505 env[1162]: time="2025-05-15T08:53:13.575310470Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 08:53:13.603236 update_engine[1149]: I0515 08:53:13.576001 1149 main.cc:92] Flatcar Update Engine starting
May 15 08:53:13.603655 extend-filesystems[1139]: Resized filesystem in /dev/vda9
May 15 08:53:13.536332 systemd[1]: Reached target system-config.target.
May 15 08:53:13.609628 bash[1184]: Updated "/home/core/.ssh/authorized_keys"
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575707204Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575738333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575754373Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575854551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575877684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.575894496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576007678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576026844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576042223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576078661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576100953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576120670Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576265702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576284768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 08:53:13.609756 env[1162]: time="2025-05-15T08:53:13.576302060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 08:53:13.536967 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 08:53:13.611361 env[1162]: time="2025-05-15T08:53:13.576324362Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 08:53:13.611361 env[1162]: time="2025-05-15T08:53:13.576351202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 08:53:13.611361 env[1162]: time="2025-05-15T08:53:13.576372783Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 08:53:13.611361 env[1162]: time="2025-05-15T08:53:13.576402849Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 08:53:13.611361 env[1162]: time="2025-05-15T08:53:13.576470847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 08:53:13.536990 systemd[1]: Reached target user-config.target.
May 15 08:53:13.537648 systemd[1]: Started systemd-logind.service.
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.576713151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.576786038Z" level=info msg="Connect containerd service"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.576821374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.577565119Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.577864510Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.578224005Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.578278297Z" level=info msg="containerd successfully booted in 0.353651s"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.579957255Z" level=info msg="Start subscribing containerd event"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.580006247Z" level=info msg="Start recovering state"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.580068634Z" level=info msg="Start event monitor"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.580089854Z" level=info msg="Start snapshots syncer"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.580101215Z" level=info msg="Start cni network conf syncer for default"
May 15 08:53:13.611629 env[1162]: time="2025-05-15T08:53:13.580110162Z" level=info msg="Start streaming server"
May 15 08:53:13.571888 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 08:53:13.572202 systemd[1]: Finished extend-filesystems.service.
May 15 08:53:13.579507 systemd[1]: Started containerd.service.
May 15 08:53:13.600720 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 08:53:13.609212 systemd[1]: Started update-engine.service.
May 15 08:53:13.615841 systemd[1]: Started locksmithd.service.
May 15 08:53:13.619963 update_engine[1149]: I0515 08:53:13.619910 1149 update_check_scheduler.cc:74] Next update check in 11m32s
May 15 08:53:14.007579 tar[1153]: linux-amd64/README.md
May 15 08:53:14.012696 systemd[1]: Finished prepare-helm.service.
May 15 08:53:14.224501 sshd_keygen[1158]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 08:53:14.250610 locksmithd[1198]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 08:53:14.269001 systemd[1]: Finished sshd-keygen.service.
May 15 08:53:14.271206 systemd[1]: Starting issuegen.service...
May 15 08:53:14.277989 systemd[1]: issuegen.service: Deactivated successfully.
May 15 08:53:14.278162 systemd[1]: Finished issuegen.service.
May 15 08:53:14.280335 systemd[1]: Starting systemd-user-sessions.service...
May 15 08:53:14.288064 systemd[1]: Finished systemd-user-sessions.service.
May 15 08:53:14.290347 systemd[1]: Started getty@tty1.service.
May 15 08:53:14.292095 systemd[1]: Started serial-getty@ttyS0.service.
May 15 08:53:14.292904 systemd[1]: Reached target getty.target.
May 15 08:53:15.451057 systemd[1]: Started kubelet.service.
May 15 08:53:16.851282 kubelet[1221]: E0515 08:53:16.851217 1221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 08:53:16.855149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 08:53:16.855546 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 08:53:16.856150 systemd[1]: kubelet.service: Consumed 2.015s CPU time.
May 15 08:53:20.230134 coreos-metadata[1134]: May 15 08:53:20.229 WARN failed to locate config-drive, using the metadata service API instead
May 15 08:53:20.326576 coreos-metadata[1134]: May 15 08:53:20.326 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 15 08:53:20.647033 coreos-metadata[1134]: May 15 08:53:20.646 INFO Fetch successful
May 15 08:53:20.647375 coreos-metadata[1134]: May 15 08:53:20.647 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 15 08:53:20.661108 coreos-metadata[1134]: May 15 08:53:20.660 INFO Fetch successful
May 15 08:53:20.666845 unknown[1134]: wrote ssh authorized keys file for user: core
May 15 08:53:20.699898 update-ssh-keys[1230]: Updated "/home/core/.ssh/authorized_keys"
May 15 08:53:20.701548 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
May 15 08:53:20.703354 systemd[1]: Reached target multi-user.target.
May 15 08:53:20.706394 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 08:53:20.722539 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 08:53:20.723078 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 08:53:20.726671 systemd[1]: Startup finished in 962ms (kernel) + 9.684s (initrd) + 15.423s (userspace) = 26.070s.
May 15 08:53:23.078865 systemd[1]: Created slice system-sshd.slice.
May 15 08:53:23.082204 systemd[1]: Started sshd@0-172.24.4.191:22-172.24.4.1:56744.service.
May 15 08:53:24.076690 sshd[1233]: Accepted publickey for core from 172.24.4.1 port 56744 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40
May 15 08:53:24.081512 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 08:53:24.113621 systemd-logind[1148]: New session 1 of user core.
May 15 08:53:24.118500 systemd[1]: Created slice user-500.slice.
May 15 08:53:24.121508 systemd[1]: Starting user-runtime-dir@500.service...
May 15 08:53:24.145085 systemd[1]: Finished user-runtime-dir@500.service.
May 15 08:53:24.149594 systemd[1]: Starting user@500.service...
May 15 08:53:24.157963 (systemd)[1236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 08:53:24.286486 systemd[1236]: Queued start job for default target default.target.
May 15 08:53:24.287397 systemd[1236]: Reached target paths.target.
May 15 08:53:24.287442 systemd[1236]: Reached target sockets.target.
May 15 08:53:24.287459 systemd[1236]: Reached target timers.target.
May 15 08:53:24.287487 systemd[1236]: Reached target basic.target.
May 15 08:53:24.287589 systemd[1]: Started user@500.service.
May 15 08:53:24.288495 systemd[1]: Started session-1.scope.
May 15 08:53:24.289533 systemd[1236]: Reached target default.target.
May 15 08:53:24.289909 systemd[1236]: Startup finished in 119ms.
May 15 08:53:24.771769 systemd[1]: Started sshd@1-172.24.4.191:22-172.24.4.1:44150.service.
May 15 08:53:25.964588 sshd[1245]: Accepted publickey for core from 172.24.4.1 port 44150 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40
May 15 08:53:25.968405 sshd[1245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 08:53:25.979548 systemd-logind[1148]: New session 2 of user core.
May 15 08:53:25.980150 systemd[1]: Started session-2.scope.
May 15 08:53:26.701042 sshd[1245]: pam_unix(sshd:session): session closed for user core
May 15 08:53:26.707875 systemd[1]: Started sshd@2-172.24.4.191:22-172.24.4.1:44154.service.
May 15 08:53:26.710762 systemd[1]: sshd@1-172.24.4.191:22-172.24.4.1:44150.service: Deactivated successfully.
May 15 08:53:26.712386 systemd[1]: session-2.scope: Deactivated successfully.
May 15 08:53:26.715864 systemd-logind[1148]: Session 2 logged out. Waiting for processes to exit.
May 15 08:53:26.718701 systemd-logind[1148]: Removed session 2.
May 15 08:53:27.040945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 08:53:27.041528 systemd[1]: Stopped kubelet.service. May 15 08:53:27.041609 systemd[1]: kubelet.service: Consumed 2.015s CPU time. May 15 08:53:27.044325 systemd[1]: Starting kubelet.service... May 15 08:53:27.279595 systemd[1]: Started kubelet.service. May 15 08:53:27.472822 kubelet[1257]: E0515 08:53:27.471883 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 08:53:27.478895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 08:53:27.479179 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 08:53:28.263134 sshd[1250]: Accepted publickey for core from 172.24.4.1 port 44154 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:53:28.266006 sshd[1250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:53:28.276738 systemd-logind[1148]: New session 3 of user core. May 15 08:53:28.277624 systemd[1]: Started session-3.scope. May 15 08:53:29.093952 sshd[1250]: pam_unix(sshd:session): session closed for user core May 15 08:53:29.100204 systemd[1]: Started sshd@3-172.24.4.191:22-172.24.4.1:44168.service. May 15 08:53:29.101402 systemd[1]: sshd@2-172.24.4.191:22-172.24.4.1:44154.service: Deactivated successfully. May 15 08:53:29.104110 systemd[1]: session-3.scope: Deactivated successfully. May 15 08:53:29.107126 systemd-logind[1148]: Session 3 logged out. Waiting for processes to exit. May 15 08:53:29.110137 systemd-logind[1148]: Removed session 3. 
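The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected at this stage: the unit keeps auto-restarting until `kubeadm init` or `kubeadm join` generates that file. As a rough sketch of what kubeadm later writes there (field values below are illustrative assumptions, not read from this host; writing the file by hand is not the usual fix):

```python
# Illustrative sketch only: kubeadm normally generates this file during
# "kubeadm init" / "kubeadm join"; the values here are assumptions.
import os
import tempfile

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches CgroupDriver:"systemd" seen later in this log
staticPodPath: /etc/kubernetes/manifests
"""

# Write to a temp path for demonstration, not to /var/lib/kubelet/config.yaml.
path = os.path.join(tempfile.mkdtemp(), "config.yaml")
with open(path, "w") as f:
    f.write(KUBELET_CONFIG)
print(path)
```

The `cgroupDriver: systemd` and `staticPodPath` values match the `CgroupDriver:"systemd"` and `/etc/kubernetes/manifests` settings the kubelet itself reports further down in this log.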
May 15 08:53:30.302629 sshd[1265]: Accepted publickey for core from 172.24.4.1 port 44168 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:53:30.305337 sshd[1265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:53:30.314573 systemd-logind[1148]: New session 4 of user core. May 15 08:53:30.316837 systemd[1]: Started session-4.scope. May 15 08:53:30.931162 sshd[1265]: pam_unix(sshd:session): session closed for user core May 15 08:53:30.937899 systemd[1]: Started sshd@4-172.24.4.191:22-172.24.4.1:44184.service. May 15 08:53:30.942139 systemd[1]: sshd@3-172.24.4.191:22-172.24.4.1:44168.service: Deactivated successfully. May 15 08:53:30.943585 systemd[1]: session-4.scope: Deactivated successfully. May 15 08:53:30.945943 systemd-logind[1148]: Session 4 logged out. Waiting for processes to exit. May 15 08:53:30.948499 systemd-logind[1148]: Removed session 4. May 15 08:53:32.423074 sshd[1271]: Accepted publickey for core from 172.24.4.1 port 44184 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:53:32.426809 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:53:32.437938 systemd-logind[1148]: New session 5 of user core. May 15 08:53:32.439163 systemd[1]: Started session-5.scope. May 15 08:53:32.970292 sudo[1275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 08:53:32.971557 sudo[1275]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 08:53:33.056771 systemd[1]: Starting docker.service... 
May 15 08:53:33.127467 env[1285]: time="2025-05-15T08:53:33.127378040Z" level=info msg="Starting up" May 15 08:53:33.129788 env[1285]: time="2025-05-15T08:53:33.129756431Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 08:53:33.129788 env[1285]: time="2025-05-15T08:53:33.129777693Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 08:53:33.129788 env[1285]: time="2025-05-15T08:53:33.129804031Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 08:53:33.129788 env[1285]: time="2025-05-15T08:53:33.129817198Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 08:53:33.135168 env[1285]: time="2025-05-15T08:53:33.135111841Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 15 08:53:33.135372 env[1285]: time="2025-05-15T08:53:33.135331172Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 15 08:53:33.135622 env[1285]: time="2025-05-15T08:53:33.135579227Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 15 08:53:33.135794 env[1285]: time="2025-05-15T08:53:33.135759073Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 15 08:53:33.149538 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3162925919-merged.mount: Deactivated successfully. May 15 08:53:33.191056 env[1285]: time="2025-05-15T08:53:33.191006081Z" level=info msg="Loading containers: start." May 15 08:53:33.368532 kernel: Initializing XFRM netlink socket May 15 08:53:33.414367 env[1285]: time="2025-05-15T08:53:33.414292812Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" May 15 08:53:33.515127 systemd-networkd[990]: docker0: Link UP May 15 08:53:33.531603 env[1285]: time="2025-05-15T08:53:33.531572840Z" level=info msg="Loading containers: done." May 15 08:53:33.544011 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2079218912-merged.mount: Deactivated successfully. May 15 08:53:33.552648 env[1285]: time="2025-05-15T08:53:33.552577211Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 08:53:33.552807 env[1285]: time="2025-05-15T08:53:33.552773003Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 15 08:53:33.552908 env[1285]: time="2025-05-15T08:53:33.552878642Z" level=info msg="Daemon has completed initialization" May 15 08:53:33.579147 systemd[1]: Started docker.service. May 15 08:53:33.591573 env[1285]: time="2025-05-15T08:53:33.591498420Z" level=info msg="API listen on /run/docker.sock" May 15 08:53:35.262567 env[1162]: time="2025-05-15T08:53:35.262492293Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 08:53:36.054560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370979892.mount: Deactivated successfully. May 15 08:53:37.540932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 08:53:37.541381 systemd[1]: Stopped kubelet.service. May 15 08:53:37.544364 systemd[1]: Starting kubelet.service... May 15 08:53:37.685876 systemd[1]: Started kubelet.service. 
May 15 08:53:37.969482 kubelet[1412]: E0515 08:53:37.968703 1412 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 08:53:37.973052 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 08:53:37.973337 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 08:53:38.488628 env[1162]: time="2025-05-15T08:53:38.488486488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:38.494497 env[1162]: time="2025-05-15T08:53:38.492718217Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:38.500905 env[1162]: time="2025-05-15T08:53:38.500802151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:38.509341 env[1162]: time="2025-05-15T08:53:38.509223078Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:38.515453 env[1162]: time="2025-05-15T08:53:38.512968305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 15 08:53:38.516633 env[1162]: time="2025-05-15T08:53:38.516599585Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 08:53:41.110911 env[1162]: time="2025-05-15T08:53:41.110735043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:41.115667 env[1162]: time="2025-05-15T08:53:41.115621748Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:41.120319 env[1162]: time="2025-05-15T08:53:41.120270810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:41.124270 env[1162]: time="2025-05-15T08:53:41.124221642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:41.126400 env[1162]: time="2025-05-15T08:53:41.126351453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 15 08:53:41.128124 env[1162]: time="2025-05-15T08:53:41.128085890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 08:53:43.218337 env[1162]: time="2025-05-15T08:53:43.218239361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:43.223970 env[1162]: time="2025-05-15T08:53:43.221409409Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:43.228500 env[1162]: time="2025-05-15T08:53:43.226834199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:43.231368 env[1162]: time="2025-05-15T08:53:43.231329498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:43.232418 env[1162]: time="2025-05-15T08:53:43.232354854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 15 08:53:43.233306 env[1162]: time="2025-05-15T08:53:43.233247474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 08:53:44.753416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427044108.mount: Deactivated successfully. 
May 15 08:53:45.800611 env[1162]: time="2025-05-15T08:53:45.800523619Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:45.805536 env[1162]: time="2025-05-15T08:53:45.805479976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:45.810468 env[1162]: time="2025-05-15T08:53:45.810387602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:45.815039 env[1162]: time="2025-05-15T08:53:45.814987359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:45.817939 env[1162]: time="2025-05-15T08:53:45.816727898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 15 08:53:45.819496 env[1162]: time="2025-05-15T08:53:45.819389382Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 08:53:46.436385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003020788.mount: Deactivated successfully. May 15 08:53:48.041181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 15 08:53:48.041691 systemd[1]: Stopped kubelet.service. May 15 08:53:48.044857 systemd[1]: Starting kubelet.service... May 15 08:53:48.316239 systemd[1]: Started kubelet.service. 
May 15 08:53:48.436151 kubelet[1422]: E0515 08:53:48.436049 1422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 08:53:48.439513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 08:53:48.439659 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 08:53:48.618949 env[1162]: time="2025-05-15T08:53:48.617813999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:48.623909 env[1162]: time="2025-05-15T08:53:48.623811247Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:48.629955 env[1162]: time="2025-05-15T08:53:48.629854837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:48.635925 env[1162]: time="2025-05-15T08:53:48.635819359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:48.639219 env[1162]: time="2025-05-15T08:53:48.639081830Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 15 08:53:48.640675 env[1162]: time="2025-05-15T08:53:48.640597610Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\"" May 15 08:53:49.272367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2153711621.mount: Deactivated successfully. May 15 08:53:49.328228 env[1162]: time="2025-05-15T08:53:49.328046082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:49.336714 env[1162]: time="2025-05-15T08:53:49.336618459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:49.363751 env[1162]: time="2025-05-15T08:53:49.363660829Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:49.377677 env[1162]: time="2025-05-15T08:53:49.377583524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:49.379578 env[1162]: time="2025-05-15T08:53:49.379504891Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 15 08:53:49.381156 env[1162]: time="2025-05-15T08:53:49.381093286Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 08:53:50.580082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472510570.mount: Deactivated successfully. 
May 15 08:53:55.830012 env[1162]: time="2025-05-15T08:53:55.829929061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:55.833903 env[1162]: time="2025-05-15T08:53:55.833850835Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:55.837449 env[1162]: time="2025-05-15T08:53:55.837376855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:55.840922 env[1162]: time="2025-05-15T08:53:55.840877344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 08:53:55.843787 env[1162]: time="2025-05-15T08:53:55.843722911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 15 08:53:58.541017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 15 08:53:58.541462 systemd[1]: Stopped kubelet.service. May 15 08:53:58.546915 systemd[1]: Starting kubelet.service... May 15 08:53:58.682634 update_engine[1149]: I0515 08:53:58.682539 1149 update_attempter.cc:509] Updating boot flags... May 15 08:53:58.955343 systemd[1]: Started kubelet.service. 
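The four "Scheduled restart job" entries land at 08:53:27.040, 08:53:37.540, 08:53:48.041 and 08:53:58.541, i.e. roughly 10.5 s apart, consistent with a `RestartSec=` of about 10 s plus scheduling latency (an inference from the spacing; the unit file itself is not shown in this log):

```python
# Intervals between the kubelet restart attempts recorded in this log.
from datetime import datetime

restarts = [
    "08:53:27.040945",  # restart counter is at 1
    "08:53:37.540932",  # restart counter is at 2
    "08:53:48.041181",  # restart counter is at 3
    "08:53:58.541017",  # restart counter is at 4
]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
intervals = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(intervals)  # each interval is ~10.5s
```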
May 15 08:53:59.102473 kubelet[1463]: E0515 08:53:59.099830 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 08:53:59.103930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 08:53:59.104058 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 08:54:00.360686 systemd[1]: Stopped kubelet.service. May 15 08:54:00.371544 systemd[1]: Starting kubelet.service... May 15 08:54:00.410466 systemd[1]: Reloading. May 15 08:54:00.542518 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2025-05-15T08:54:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 08:54:00.542905 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2025-05-15T08:54:00Z" level=info msg="torcx already run" May 15 08:54:00.850126 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 08:54:00.850147 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 08:54:00.874395 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 08:54:00.975223 systemd[1]: Started kubelet.service. May 15 08:54:00.981920 systemd[1]: Stopping kubelet.service... 
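For reference, the pull sequence between 08:53:35 and 08:53:55 covers the full set of control-plane images a v1.32 cluster needs, collected here from the "returns image reference" lines above (tags copied from the log, in pull order):

```python
# Images pulled in this log, in order, with the tag each PullImage requested.
pulled = [
    "registry.k8s.io/kube-apiserver:v1.32.4",
    "registry.k8s.io/kube-controller-manager:v1.32.4",
    "registry.k8s.io/kube-scheduler:v1.32.4",
    "registry.k8s.io/kube-proxy:v1.32.4",
    "registry.k8s.io/coredns/coredns:v1.11.3",
    "registry.k8s.io/pause:3.10",
    "registry.k8s.io/etcd:3.5.16-0",
]
print(len(pulled), "images")
```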
May 15 08:54:00.983075 systemd[1]: kubelet.service: Deactivated successfully. May 15 08:54:00.983236 systemd[1]: Stopped kubelet.service. May 15 08:54:00.984835 systemd[1]: Starting kubelet.service... May 15 08:54:01.102361 systemd[1]: Started kubelet.service. May 15 08:54:01.166569 kubelet[1556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 08:54:01.166569 kubelet[1556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 08:54:01.166569 kubelet[1556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 08:54:01.167307 kubelet[1556]: I0515 08:54:01.166638 1556 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 08:54:01.559110 kubelet[1556]: I0515 08:54:01.559019 1556 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 08:54:01.560327 kubelet[1556]: I0515 08:54:01.560281 1556 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 08:54:01.561741 kubelet[1556]: I0515 08:54:01.561705 1556 server.go:954] "Client rotation is on, will bootstrap in background" May 15 08:54:02.346502 kubelet[1556]: E0515 08:54:02.346351 1556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError" May 15 08:54:02.356569 kubelet[1556]: I0515 08:54:02.356511 1556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 08:54:02.393266 kubelet[1556]: E0515 08:54:02.393219 1556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 08:54:02.393494 kubelet[1556]: I0515 08:54:02.393480 1556 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 08:54:02.398585 kubelet[1556]: I0515 08:54:02.398567 1556 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 08:54:02.398973 kubelet[1556]: I0515 08:54:02.398941 1556 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 08:54:02.399315 kubelet[1556]: I0515 08:54:02.399032 1556 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-fb2247adc4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 08:54:02.399502 kubelet[1556]: I0515 08:54:02.399488 1556 topology_manager.go:138] "Creating topology 
manager with none policy" May 15 08:54:02.399568 kubelet[1556]: I0515 08:54:02.399559 1556 container_manager_linux.go:304] "Creating device plugin manager" May 15 08:54:02.399790 kubelet[1556]: I0515 08:54:02.399777 1556 state_mem.go:36] "Initialized new in-memory state store" May 15 08:54:02.497262 kubelet[1556]: I0515 08:54:02.495320 1556 kubelet.go:446] "Attempting to sync node with API server" May 15 08:54:02.497262 kubelet[1556]: I0515 08:54:02.495416 1556 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 08:54:02.497262 kubelet[1556]: I0515 08:54:02.496106 1556 kubelet.go:352] "Adding apiserver pod source" May 15 08:54:02.497262 kubelet[1556]: I0515 08:54:02.496198 1556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 08:54:02.529180 kubelet[1556]: W0515 08:54:02.528931 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused May 15 08:54:02.530511 kubelet[1556]: E0515 08:54:02.530420 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError" May 15 08:54:02.530925 kubelet[1556]: W0515 08:54:02.530854 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-fb2247adc4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused May 15 08:54:02.531139 kubelet[1556]: E0515 08:54:02.531093 1556 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-fb2247adc4.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError" May 15 08:54:02.532352 kubelet[1556]: I0515 08:54:02.532314 1556 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 08:54:02.534278 kubelet[1556]: I0515 08:54:02.534243 1556 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 08:54:02.601793 kubelet[1556]: W0515 08:54:02.598830 1556 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 08:54:02.732553 kubelet[1556]: I0515 08:54:02.732500 1556 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 08:54:02.732998 kubelet[1556]: I0515 08:54:02.732966 1556 server.go:1287] "Started kubelet" May 15 08:54:03.133932 kubelet[1556]: E0515 08:54:03.130618 1556 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.191:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.191:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-n-fb2247adc4.novalocal.183fa76757662bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-n-fb2247adc4.novalocal,UID:ci-3510-3-7-n-fb2247adc4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-n-fb2247adc4.novalocal,},FirstTimestamp:2025-05-15 08:54:02.732825515 +0000 UTC m=+1.620207745,LastTimestamp:2025-05-15 08:54:02.732825515 +0000 UTC m=+1.620207745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-n-fb2247adc4.novalocal,}"
May 15 08:54:03.135203 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
May 15 08:54:03.135342 kubelet[1556]: I0515 08:54:03.135126 1556 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 15 08:54:03.138644 kubelet[1556]: I0515 08:54:03.138563 1556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 08:54:03.139845 kubelet[1556]: I0515 08:54:03.139804 1556 server.go:490] "Adding debug handlers to kubelet server"
May 15 08:54:03.144879 kubelet[1556]: I0515 08:54:03.144683 1556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 08:54:03.145646 kubelet[1556]: I0515 08:54:03.145611 1556 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 08:54:03.147114 kubelet[1556]: I0515 08:54:03.147072 1556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 08:54:03.153860 kubelet[1556]: E0515 08:54:03.153786 1556 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 08:54:03.154348 kubelet[1556]: E0515 08:54:03.154247 1556 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found"
May 15 08:54:03.154532 kubelet[1556]: I0515 08:54:03.154397 1556 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 15 08:54:03.155109 kubelet[1556]: I0515 08:54:03.155052 1556 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 08:54:03.155235 kubelet[1556]: I0515 08:54:03.155198 1556 reconciler.go:26] "Reconciler: start to sync state"
May 15 08:54:03.157105 kubelet[1556]: I0515 08:54:03.156926 1556 factory.go:221] Registration of the systemd container factory successfully
May 15 08:54:03.157241 kubelet[1556]: I0515 08:54:03.157129 1556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 08:54:03.158515 kubelet[1556]: W0515 08:54:03.158331 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:03.158515 kubelet[1556]: E0515 08:54:03.158490 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:03.160265 kubelet[1556]: E0515 08:54:03.160194 1556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-fb2247adc4.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="200ms"
May 15 08:54:03.172985 kubelet[1556]: I0515 08:54:03.172930 1556 factory.go:221] Registration of the containerd container factory successfully
May 15 08:54:03.238148 kubelet[1556]: I0515 08:54:03.238067 1556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 08:54:03.239370 kubelet[1556]: I0515 08:54:03.239346 1556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 08:54:03.239454 kubelet[1556]: I0515 08:54:03.239400 1556 status_manager.go:227] "Starting to sync pod status with apiserver"
May 15 08:54:03.239546 kubelet[1556]: I0515 08:54:03.239475 1556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 15 08:54:03.239546 kubelet[1556]: I0515 08:54:03.239515 1556 kubelet.go:2388] "Starting kubelet main sync loop"
May 15 08:54:03.239614 kubelet[1556]: E0515 08:54:03.239573 1556 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 08:54:03.242002 kubelet[1556]: W0515 08:54:03.241977 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:03.242194 kubelet[1556]: E0515 08:54:03.242167 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:03.243283 kubelet[1556]: I0515 08:54:03.243263 1556 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 15 08:54:03.243283 kubelet[1556]: I0515 08:54:03.243282 1556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 15 08:54:03.243381 kubelet[1556]: I0515 08:54:03.243307 1556 state_mem.go:36] "Initialized new in-memory state store"
May 15 08:54:03.253218 kubelet[1556]: I0515 08:54:03.253179 1556 policy_none.go:49] "None policy: Start"
May 15 08:54:03.253289 kubelet[1556]: I0515 08:54:03.253226 1556 memory_manager.go:186] "Starting memorymanager" policy="None"
May 15 08:54:03.253289 kubelet[1556]: I0515 08:54:03.253254 1556 state_mem.go:35] "Initializing new in-memory state store"
May 15 08:54:03.254723 kubelet[1556]: E0515 08:54:03.254703 1556 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found"
May 15 08:54:03.266874 systemd[1]: Created slice kubepods.slice.
May 15 08:54:03.271379 systemd[1]: Created slice kubepods-burstable.slice.
May 15 08:54:03.274649 systemd[1]: Created slice kubepods-besteffort.slice.
May 15 08:54:03.280278 kubelet[1556]: I0515 08:54:03.280257 1556 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 08:54:03.280583 kubelet[1556]: I0515 08:54:03.280567 1556 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 08:54:03.280741 kubelet[1556]: I0515 08:54:03.280680 1556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 08:54:03.281903 kubelet[1556]: I0515 08:54:03.281888 1556 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 08:54:03.284204 kubelet[1556]: E0515 08:54:03.284173 1556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 15 08:54:03.284344 kubelet[1556]: E0515 08:54:03.284327 1556 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found"
May 15 08:54:03.357492 kubelet[1556]: I0515 08:54:03.357330 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.358481 kubelet[1556]: I0515 08:54:03.357546 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.358481 kubelet[1556]: I0515 08:54:03.357674 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.358481 kubelet[1556]: I0515 08:54:03.357756 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/683a8adee74c61723aa9810df73966ef-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"683a8adee74c61723aa9810df73966ef\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.358481 kubelet[1556]: I0515 08:54:03.357829 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.358481 kubelet[1556]: I0515 08:54:03.357909 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.359195 kubelet[1556]: I0515 08:54:03.358021 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.359195 kubelet[1556]: I0515 08:54:03.358081 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.359195 kubelet[1556]: I0515 08:54:03.358156 1556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.361305 kubelet[1556]: E0515 08:54:03.361230 1556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-fb2247adc4.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="400ms"
May 15 08:54:03.364936 systemd[1]: Created slice kubepods-burstable-pod683a8adee74c61723aa9810df73966ef.slice.
May 15 08:54:03.387276 kubelet[1556]: E0515 08:54:03.387090 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.389924 kubelet[1556]: I0515 08:54:03.389872 1556 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.391876 kubelet[1556]: E0515 08:54:03.391823 1556 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.399337 systemd[1]: Created slice kubepods-burstable-podded5dc67e33568765f17180634d3668a.slice.
May 15 08:54:03.415416 kubelet[1556]: E0515 08:54:03.415256 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.419147 systemd[1]: Created slice kubepods-burstable-pod40f83d04d779dff0882600b005113919.slice.
May 15 08:54:03.427109 kubelet[1556]: E0515 08:54:03.427064 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.432336 kubelet[1556]: W0515 08:54:03.432238 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-fb2247adc4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:03.432531 kubelet[1556]: E0515 08:54:03.432375 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.191:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-n-fb2247adc4.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:03.596340 kubelet[1556]: I0515 08:54:03.596295 1556 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.597309 kubelet[1556]: E0515 08:54:03.597224 1556 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:03.690777 env[1162]: time="2025-05-15T08:54:03.689882106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:683a8adee74c61723aa9810df73966ef,Namespace:kube-system,Attempt:0,}"
May 15 08:54:03.722941 env[1162]: time="2025-05-15T08:54:03.722791284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:ded5dc67e33568765f17180634d3668a,Namespace:kube-system,Attempt:0,}"
May 15 08:54:03.729065 env[1162]: time="2025-05-15T08:54:03.728999637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:40f83d04d779dff0882600b005113919,Namespace:kube-system,Attempt:0,}"
May 15 08:54:03.767028 kubelet[1556]: E0515 08:54:03.766909 1556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-fb2247adc4.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="800ms"
May 15 08:54:03.892404 kubelet[1556]: W0515 08:54:03.892280 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:03.892676 kubelet[1556]: E0515 08:54:03.892421 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.191:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:04.001651 kubelet[1556]: I0515 08:54:04.001461 1556 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:04.002684 kubelet[1556]: E0515 08:54:04.002610 1556 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:04.467365 kubelet[1556]: E0515 08:54:04.467300 1556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.191:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:04.489752 kubelet[1556]: W0515 08:54:04.489647 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:04.490063 kubelet[1556]: E0515 08:54:04.490016 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.191:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:04.515604 kubelet[1556]: W0515 08:54:04.515380 1556 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.191:6443: connect: connection refused
May 15 08:54:04.515604 kubelet[1556]: E0515 08:54:04.515540 1556 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.191:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.191:6443: connect: connection refused" logger="UnhandledError"
May 15 08:54:04.531616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101729020.mount: Deactivated successfully.
May 15 08:54:04.543040 env[1162]: time="2025-05-15T08:54:04.542886425Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.548134 env[1162]: time="2025-05-15T08:54:04.548062782Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.550365 env[1162]: time="2025-05-15T08:54:04.550249631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.554723 env[1162]: time="2025-05-15T08:54:04.554635863Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.557714 env[1162]: time="2025-05-15T08:54:04.557631152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.559956 env[1162]: time="2025-05-15T08:54:04.559817069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.561978 env[1162]: time="2025-05-15T08:54:04.561910317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.568847 kubelet[1556]: E0515 08:54:04.568745 1556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.191:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-n-fb2247adc4.novalocal?timeout=10s\": dial tcp 172.24.4.191:6443: connect: connection refused" interval="1.6s"
May 15 08:54:04.569087 env[1162]: time="2025-05-15T08:54:04.569007340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.579166 env[1162]: time="2025-05-15T08:54:04.578999919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.591677 env[1162]: time="2025-05-15T08:54:04.591542217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.594014 env[1162]: time="2025-05-15T08:54:04.593935114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.607003 env[1162]: time="2025-05-15T08:54:04.606797649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:04.638715 env[1162]: time="2025-05-15T08:54:04.638551222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:04.638715 env[1162]: time="2025-05-15T08:54:04.638650132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:04.639897 env[1162]: time="2025-05-15T08:54:04.639751057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:04.650316 env[1162]: time="2025-05-15T08:54:04.649955444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/101bba20235111ed8a9bf7a354c58a2ce79ce1b0772fdf2bc7f73291abd87b53 pid=1596 runtime=io.containerd.runc.v2
May 15 08:54:04.681088 env[1162]: time="2025-05-15T08:54:04.679056900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:04.681088 env[1162]: time="2025-05-15T08:54:04.679110784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:04.681088 env[1162]: time="2025-05-15T08:54:04.679124329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:04.681088 env[1162]: time="2025-05-15T08:54:04.679255001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4f209c9e7a7cab88b7cad6deba9653206d36e42900208c28a6c3df15c25716 pid=1620 runtime=io.containerd.runc.v2
May 15 08:54:04.683654 systemd[1]: Started cri-containerd-101bba20235111ed8a9bf7a354c58a2ce79ce1b0772fdf2bc7f73291abd87b53.scope.
May 15 08:54:04.715356 env[1162]: time="2025-05-15T08:54:04.713872239Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:04.715356 env[1162]: time="2025-05-15T08:54:04.713925972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:04.715356 env[1162]: time="2025-05-15T08:54:04.713940130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:04.715356 env[1162]: time="2025-05-15T08:54:04.714086272Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d1d7b668e103346b2dc5d4c486d0f10e55e2eda5f14469bd2540f7eef1585ea pid=1641 runtime=io.containerd.runc.v2
May 15 08:54:04.723035 systemd[1]: Started cri-containerd-ef4f209c9e7a7cab88b7cad6deba9653206d36e42900208c28a6c3df15c25716.scope.
May 15 08:54:04.759009 env[1162]: time="2025-05-15T08:54:04.758958023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:ded5dc67e33568765f17180634d3668a,Namespace:kube-system,Attempt:0,} returns sandbox id \"101bba20235111ed8a9bf7a354c58a2ce79ce1b0772fdf2bc7f73291abd87b53\""
May 15 08:54:04.773688 env[1162]: time="2025-05-15T08:54:04.773608990Z" level=info msg="CreateContainer within sandbox \"101bba20235111ed8a9bf7a354c58a2ce79ce1b0772fdf2bc7f73291abd87b53\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 08:54:04.792702 systemd[1]: Started cri-containerd-4d1d7b668e103346b2dc5d4c486d0f10e55e2eda5f14469bd2540f7eef1585ea.scope.
May 15 08:54:04.806283 kubelet[1556]: I0515 08:54:04.806037 1556 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:04.806722 kubelet[1556]: E0515 08:54:04.806684 1556 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://172.24.4.191:6443/api/v1/nodes\": dial tcp 172.24.4.191:6443: connect: connection refused" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:04.829123 env[1162]: time="2025-05-15T08:54:04.829059861Z" level=info msg="CreateContainer within sandbox \"101bba20235111ed8a9bf7a354c58a2ce79ce1b0772fdf2bc7f73291abd87b53\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7733709cbdef9a9feb0a5cb433af8e1b7f1ce55a0205a990e3cfc2a27891daeb\""
May 15 08:54:04.830256 env[1162]: time="2025-05-15T08:54:04.830207065Z" level=info msg="StartContainer for \"7733709cbdef9a9feb0a5cb433af8e1b7f1ce55a0205a990e3cfc2a27891daeb\""
May 15 08:54:04.835580 env[1162]: time="2025-05-15T08:54:04.835542738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:40f83d04d779dff0882600b005113919,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef4f209c9e7a7cab88b7cad6deba9653206d36e42900208c28a6c3df15c25716\""
May 15 08:54:04.841065 env[1162]: time="2025-05-15T08:54:04.841020627Z" level=info msg="CreateContainer within sandbox \"ef4f209c9e7a7cab88b7cad6deba9653206d36e42900208c28a6c3df15c25716\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 08:54:04.857802 env[1162]: time="2025-05-15T08:54:04.857741026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal,Uid:683a8adee74c61723aa9810df73966ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d1d7b668e103346b2dc5d4c486d0f10e55e2eda5f14469bd2540f7eef1585ea\""
May 15 08:54:04.861103 env[1162]: time="2025-05-15T08:54:04.861015043Z" level=info msg="CreateContainer within sandbox \"4d1d7b668e103346b2dc5d4c486d0f10e55e2eda5f14469bd2540f7eef1585ea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 08:54:04.873350 systemd[1]: Started cri-containerd-7733709cbdef9a9feb0a5cb433af8e1b7f1ce55a0205a990e3cfc2a27891daeb.scope.
May 15 08:54:04.879787 env[1162]: time="2025-05-15T08:54:04.879740139Z" level=info msg="CreateContainer within sandbox \"ef4f209c9e7a7cab88b7cad6deba9653206d36e42900208c28a6c3df15c25716\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ae8a57c7375cd9e849b6a09753d7cb1369dd99292109897d38dcba3a1a4d1933\""
May 15 08:54:04.880557 env[1162]: time="2025-05-15T08:54:04.880533991Z" level=info msg="StartContainer for \"ae8a57c7375cd9e849b6a09753d7cb1369dd99292109897d38dcba3a1a4d1933\""
May 15 08:54:04.908932 env[1162]: time="2025-05-15T08:54:04.908031772Z" level=info msg="CreateContainer within sandbox \"4d1d7b668e103346b2dc5d4c486d0f10e55e2eda5f14469bd2540f7eef1585ea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d38b04b15f90c008cfe0981e4bdd97a619930d4ae7ffbcbbdc2f69c760384e56\""
May 15 08:54:04.908590 systemd[1]: Started cri-containerd-ae8a57c7375cd9e849b6a09753d7cb1369dd99292109897d38dcba3a1a4d1933.scope.
May 15 08:54:04.914961 env[1162]: time="2025-05-15T08:54:04.914906195Z" level=info msg="StartContainer for \"d38b04b15f90c008cfe0981e4bdd97a619930d4ae7ffbcbbdc2f69c760384e56\""
May 15 08:54:04.946087 systemd[1]: Started cri-containerd-d38b04b15f90c008cfe0981e4bdd97a619930d4ae7ffbcbbdc2f69c760384e56.scope.
May 15 08:54:04.980142 env[1162]: time="2025-05-15T08:54:04.979995473Z" level=info msg="StartContainer for \"7733709cbdef9a9feb0a5cb433af8e1b7f1ce55a0205a990e3cfc2a27891daeb\" returns successfully"
May 15 08:54:05.046457 env[1162]: time="2025-05-15T08:54:05.046371424Z" level=info msg="StartContainer for \"ae8a57c7375cd9e849b6a09753d7cb1369dd99292109897d38dcba3a1a4d1933\" returns successfully"
May 15 08:54:05.062114 env[1162]: time="2025-05-15T08:54:05.062063541Z" level=info msg="StartContainer for \"d38b04b15f90c008cfe0981e4bdd97a619930d4ae7ffbcbbdc2f69c760384e56\" returns successfully"
May 15 08:54:05.264072 kubelet[1556]: E0515 08:54:05.260903 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:05.267951 kubelet[1556]: E0515 08:54:05.267905 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:05.270439 kubelet[1556]: E0515 08:54:05.268486 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:06.272632 kubelet[1556]: E0515 08:54:06.272568 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:06.274854 kubelet[1556]: E0515 08:54:06.274613 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:06.409371 kubelet[1556]: I0515 08:54:06.409326 1556 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.274307 kubelet[1556]: E0515 08:54:07.274256 1556 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.623136 kubelet[1556]: E0515 08:54:07.623069 1556 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.706328 kubelet[1556]: I0515 08:54:07.706280 1556 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.760395 kubelet[1556]: I0515 08:54:07.760340 1556 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.774449 kubelet[1556]: E0515 08:54:07.774361 1556 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.774449 kubelet[1556]: I0515 08:54:07.774418 1556 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.781765 kubelet[1556]: E0515 08:54:07.781709 1556 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.781940 kubelet[1556]: I0515 08:54:07.781771 1556 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:07.787112 kubelet[1556]: E0515 08:54:07.787076 1556 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:08.527992 kubelet[1556]: I0515 08:54:08.527925 1556 apiserver.go:52] "Watching apiserver"
May 15 08:54:08.556346 kubelet[1556]: I0515 08:54:08.556269 1556 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 08:54:12.839566 systemd[1]: Reloading.
May 15 08:54:13.008201 /usr/lib/systemd/system-generators/torcx-generator[1848]: time="2025-05-15T08:54:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 15 08:54:13.009416 /usr/lib/systemd/system-generators/torcx-generator[1848]: time="2025-05-15T08:54:13Z" level=info msg="torcx already run"
May 15 08:54:13.044082 kubelet[1556]: I0515 08:54:13.043145 1556 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal"
May 15 08:54:13.054868 kubelet[1556]: W0515 08:54:13.054819 1556 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 15 08:54:13.161679 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 08:54:13.161707 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 08:54:13.187115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 08:54:13.327603 systemd[1]: Stopping kubelet.service...
May 15 08:54:13.353538 systemd[1]: kubelet.service: Deactivated successfully.
May 15 08:54:13.353845 systemd[1]: Stopped kubelet.service.
May 15 08:54:13.353979 systemd[1]: kubelet.service: Consumed 1.763s CPU time.
May 15 08:54:13.356090 systemd[1]: Starting kubelet.service...
May 15 08:54:13.714753 systemd[1]: Started kubelet.service.
May 15 08:54:14.000794 kubelet[1899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 08:54:14.001278 kubelet[1899]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 15 08:54:14.001368 kubelet[1899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 08:54:14.001671 kubelet[1899]: I0515 08:54:14.001619 1899 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 08:54:14.013926 kubelet[1899]: I0515 08:54:14.013869 1899 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 08:54:14.014176 kubelet[1899]: I0515 08:54:14.014161 1899 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 08:54:14.014670 kubelet[1899]: I0515 08:54:14.014652 1899 server.go:954] "Client rotation is on, will bootstrap in background" May 15 08:54:14.017237 kubelet[1899]: I0515 08:54:14.017216 1899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 08:54:14.022964 kubelet[1899]: I0515 08:54:14.022910 1899 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 08:54:14.102144 sudo[1914]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 08:54:14.104712 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 15 08:54:14.117338 kubelet[1899]: E0515 08:54:14.117251 1899 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 08:54:14.118687 kubelet[1899]: I0515 08:54:14.118662 1899 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 08:54:14.143067 kubelet[1899]: I0515 08:54:14.142998 1899 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 08:54:14.148687 kubelet[1899]: I0515 08:54:14.146528 1899 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 08:54:14.149278 kubelet[1899]: I0515 08:54:14.148875 1899 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-n-fb2247adc4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 08:54:14.149876 kubelet[1899]: I0515 08:54:14.149842 1899 topology_manager.go:138] "Creating topology 
manager with none policy" May 15 08:54:14.150041 kubelet[1899]: I0515 08:54:14.150024 1899 container_manager_linux.go:304] "Creating device plugin manager" May 15 08:54:14.150508 kubelet[1899]: I0515 08:54:14.150416 1899 state_mem.go:36] "Initialized new in-memory state store" May 15 08:54:14.151143 kubelet[1899]: I0515 08:54:14.151124 1899 kubelet.go:446] "Attempting to sync node with API server" May 15 08:54:14.152547 kubelet[1899]: I0515 08:54:14.152517 1899 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 08:54:14.152729 kubelet[1899]: I0515 08:54:14.152714 1899 kubelet.go:352] "Adding apiserver pod source" May 15 08:54:14.152888 kubelet[1899]: I0515 08:54:14.152869 1899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 08:54:14.172117 kubelet[1899]: I0515 08:54:14.171964 1899 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 08:54:14.186825 kubelet[1899]: I0515 08:54:14.186766 1899 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 08:54:14.189198 kubelet[1899]: I0515 08:54:14.189163 1899 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 08:54:14.189585 kubelet[1899]: I0515 08:54:14.189567 1899 server.go:1287] "Started kubelet" May 15 08:54:14.199536 kubelet[1899]: I0515 08:54:14.199504 1899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 08:54:14.212779 kubelet[1899]: I0515 08:54:14.212723 1899 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 08:54:14.218354 kubelet[1899]: I0515 08:54:14.218324 1899 server.go:490] "Adding debug handlers to kubelet server" May 15 08:54:14.222078 kubelet[1899]: I0515 08:54:14.221911 1899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 08:54:14.239508 kubelet[1899]: I0515 08:54:14.239482 1899 server.go:243] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 08:54:14.239828 kubelet[1899]: I0515 08:54:14.236259 1899 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 08:54:14.240377 kubelet[1899]: I0515 08:54:14.225944 1899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 08:54:14.240629 kubelet[1899]: I0515 08:54:14.236309 1899 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 08:54:14.240916 kubelet[1899]: I0515 08:54:14.240901 1899 reconciler.go:26] "Reconciler: start to sync state" May 15 08:54:14.250496 kubelet[1899]: E0515 08:54:14.237634 1899 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-3510-3-7-n-fb2247adc4.novalocal\" not found" May 15 08:54:14.257210 kubelet[1899]: I0515 08:54:14.255705 1899 factory.go:221] Registration of the systemd container factory successfully May 15 08:54:14.257546 kubelet[1899]: I0515 08:54:14.257522 1899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 08:54:14.260901 kubelet[1899]: I0515 08:54:14.260880 1899 factory.go:221] Registration of the containerd container factory successfully May 15 08:54:14.262680 kubelet[1899]: E0515 08:54:14.262659 1899 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 08:54:14.301994 kubelet[1899]: I0515 08:54:14.301925 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 08:54:14.326880 kubelet[1899]: I0515 08:54:14.326840 1899 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 08:54:14.327164 kubelet[1899]: I0515 08:54:14.327150 1899 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 08:54:14.327352 kubelet[1899]: I0515 08:54:14.327321 1899 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 08:54:14.327487 kubelet[1899]: I0515 08:54:14.327475 1899 kubelet.go:2388] "Starting kubelet main sync loop" May 15 08:54:14.327705 kubelet[1899]: E0515 08:54:14.327652 1899 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 08:54:14.382950 kubelet[1899]: I0515 08:54:14.382924 1899 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 08:54:14.383135 kubelet[1899]: I0515 08:54:14.383121 1899 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 08:54:14.383278 kubelet[1899]: I0515 08:54:14.383265 1899 state_mem.go:36] "Initialized new in-memory state store" May 15 08:54:14.383650 kubelet[1899]: I0515 08:54:14.383633 1899 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 08:54:14.383808 kubelet[1899]: I0515 08:54:14.383753 1899 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 08:54:14.383966 kubelet[1899]: I0515 08:54:14.383943 1899 policy_none.go:49] "None policy: Start" May 15 08:54:14.384165 kubelet[1899]: I0515 08:54:14.384151 1899 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 08:54:14.384841 kubelet[1899]: I0515 08:54:14.384826 1899 state_mem.go:35] "Initializing new in-memory state store" May 15 08:54:14.385174 kubelet[1899]: I0515 08:54:14.385159 1899 state_mem.go:75] "Updated machine memory state" May 15 08:54:14.391486 kubelet[1899]: I0515 08:54:14.391465 1899 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 08:54:14.391952 kubelet[1899]: I0515 
08:54:14.391936 1899 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 08:54:14.392140 kubelet[1899]: I0515 08:54:14.392084 1899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 08:54:14.393879 kubelet[1899]: I0515 08:54:14.393861 1899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 08:54:14.403668 kubelet[1899]: E0515 08:54:14.403641 1899 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 08:54:14.429214 kubelet[1899]: I0515 08:54:14.429176 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.430920 kubelet[1899]: I0515 08:54:14.430885 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.431780 kubelet[1899]: I0515 08:54:14.431747 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.469921 kubelet[1899]: I0515 08:54:14.469853 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/683a8adee74c61723aa9810df73966ef-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"683a8adee74c61723aa9810df73966ef\") " pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.470254 kubelet[1899]: I0515 08:54:14.470229 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.470446 kubelet[1899]: I0515 08:54:14.470407 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.470599 kubelet[1899]: I0515 08:54:14.470581 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.470733 kubelet[1899]: I0515 08:54:14.470715 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.470892 kubelet[1899]: I0515 08:54:14.470876 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.471059 kubelet[1899]: I0515 08:54:14.471036 1899 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ded5dc67e33568765f17180634d3668a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"ded5dc67e33568765f17180634d3668a\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.471218 kubelet[1899]: I0515 08:54:14.471202 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.471371 kubelet[1899]: I0515 08:54:14.471355 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40f83d04d779dff0882600b005113919-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal\" (UID: \"40f83d04d779dff0882600b005113919\") " pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.520113 kubelet[1899]: W0515 08:54:14.520009 1899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 08:54:14.522760 kubelet[1899]: I0515 08:54:14.522740 1899 kubelet_node_status.go:76] "Attempting to register node" node="ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.527257 kubelet[1899]: W0515 08:54:14.527228 1899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 08:54:14.607742 kubelet[1899]: W0515 08:54:14.607685 1899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 08:54:14.608306 kubelet[1899]: E0515 08:54:14.608199 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.633242 kubelet[1899]: I0515 08:54:14.633188 1899 kubelet_node_status.go:125] "Node was previously registered" node="ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.633949 kubelet[1899]: I0515 08:54:14.633895 1899 kubelet_node_status.go:79] "Successfully registered node" node="ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:14.989464 sudo[1914]: pam_unix(sudo:session): session closed for user root May 15 08:54:15.187552 kubelet[1899]: I0515 08:54:15.187488 1899 apiserver.go:52] "Watching apiserver" May 15 08:54:15.241273 kubelet[1899]: I0515 08:54:15.241049 1899 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 08:54:15.367330 kubelet[1899]: I0515 08:54:15.367274 1899 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:15.383919 kubelet[1899]: W0515 08:54:15.383894 1899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 15 08:54:15.384129 kubelet[1899]: E0515 08:54:15.384109 1899 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal" May 15 08:54:15.417344 kubelet[1899]: I0515 08:54:15.417157 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-n-fb2247adc4.novalocal" podStartSLOduration=1.417085711 podStartE2EDuration="1.417085711s" 
podCreationTimestamp="2025-05-15 08:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 08:54:15.397372663 +0000 UTC m=+1.673593959" watchObservedRunningTime="2025-05-15 08:54:15.417085711 +0000 UTC m=+1.693306987" May 15 08:54:15.438729 kubelet[1899]: I0515 08:54:15.438661 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-n-fb2247adc4.novalocal" podStartSLOduration=1.438643581 podStartE2EDuration="1.438643581s" podCreationTimestamp="2025-05-15 08:54:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 08:54:15.417884346 +0000 UTC m=+1.694105632" watchObservedRunningTime="2025-05-15 08:54:15.438643581 +0000 UTC m=+1.714864857" May 15 08:54:16.986270 kubelet[1899]: I0515 08:54:16.986206 1899 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 08:54:16.988823 env[1162]: time="2025-05-15T08:54:16.988679064Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 08:54:16.989807 kubelet[1899]: I0515 08:54:16.989785 1899 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 08:54:17.318111 sudo[1275]: pam_unix(sudo:session): session closed for user root May 15 08:54:17.572487 sshd[1271]: pam_unix(sshd:session): session closed for user core May 15 08:54:17.590844 systemd[1]: sshd@4-172.24.4.191:22-172.24.4.1:44184.service: Deactivated successfully. May 15 08:54:17.600305 systemd[1]: session-5.scope: Deactivated successfully. May 15 08:54:17.601363 systemd[1]: session-5.scope: Consumed 7.668s CPU time. May 15 08:54:17.603835 systemd-logind[1148]: Session 5 logged out. Waiting for processes to exit. 
May 15 08:54:17.612116 systemd-logind[1148]: Removed session 5. May 15 08:54:17.913171 systemd[1]: Created slice kubepods-besteffort-podb412f1c5_6e45_4611_951c_d875f3dba59f.slice. May 15 08:54:17.948668 systemd[1]: Created slice kubepods-burstable-podcde9f3c0_d1b4_4ca7_932c_40cb7f5c515e.slice. May 15 08:54:17.960965 kubelet[1899]: W0515 08:54:17.960922 1899 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-n-fb2247adc4.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object May 15 08:54:17.961164 kubelet[1899]: E0515 08:54:17.961004 1899 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-7-n-fb2247adc4.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object" logger="UnhandledError" May 15 08:54:17.961580 kubelet[1899]: W0515 08:54:17.961554 1899 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-n-fb2247adc4.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object May 15 08:54:17.961670 kubelet[1899]: E0515 08:54:17.961579 1899 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-7-n-fb2247adc4.novalocal\" cannot list resource \"secrets\" in API group \"\" 
in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object" logger="UnhandledError" May 15 08:54:17.961670 kubelet[1899]: W0515 08:54:17.961653 1899 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-n-fb2247adc4.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object May 15 08:54:17.961780 kubelet[1899]: E0515 08:54:17.961691 1899 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-7-n-fb2247adc4.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-n-fb2247adc4.novalocal' and this object" logger="UnhandledError" May 15 08:54:18.001842 kubelet[1899]: I0515 08:54:18.001779 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-etc-cni-netd\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.001842 kubelet[1899]: I0515 08:54:18.001830 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-lib-modules\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.001842 kubelet[1899]: I0515 08:54:18.001850 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-clustermesh-secrets\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001870 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-xtables-lock\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001889 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-bpf-maps\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001908 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqd6x\" (UniqueName: \"kubernetes.io/projected/b412f1c5-6e45-4611-951c-d875f3dba59f-kube-api-access-fqd6x\") pod \"kube-proxy-2nxmx\" (UID: \"b412f1c5-6e45-4611-951c-d875f3dba59f\") " pod="kube-system/kube-proxy-2nxmx" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001929 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-run\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001954 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cni-path\") pod \"cilium-kzb9c\" (UID: 
\"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002371 kubelet[1899]: I0515 08:54:18.001972 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b412f1c5-6e45-4611-951c-d875f3dba59f-lib-modules\") pod \"kube-proxy-2nxmx\" (UID: \"b412f1c5-6e45-4611-951c-d875f3dba59f\") " pod="kube-system/kube-proxy-2nxmx" May 15 08:54:18.002694 kubelet[1899]: I0515 08:54:18.001989 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-config-path\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002694 kubelet[1899]: I0515 08:54:18.002006 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-kernel\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002694 kubelet[1899]: I0515 08:54:18.002023 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-cgroup\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002694 kubelet[1899]: I0515 08:54:18.002040 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-net\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c" May 15 08:54:18.002694 
kubelet[1899]: I0515 08:54:18.002064 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hubble-tls\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c"
May 15 08:54:18.002953 kubelet[1899]: I0515 08:54:18.002083 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-kube-api-access-xbjls\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c"
May 15 08:54:18.002953 kubelet[1899]: I0515 08:54:18.002101 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hostproc\") pod \"cilium-kzb9c\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") " pod="kube-system/cilium-kzb9c"
May 15 08:54:18.002953 kubelet[1899]: I0515 08:54:18.002120 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b412f1c5-6e45-4611-951c-d875f3dba59f-xtables-lock\") pod \"kube-proxy-2nxmx\" (UID: \"b412f1c5-6e45-4611-951c-d875f3dba59f\") " pod="kube-system/kube-proxy-2nxmx"
May 15 08:54:18.002953 kubelet[1899]: I0515 08:54:18.002144 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b412f1c5-6e45-4611-951c-d875f3dba59f-kube-proxy\") pod \"kube-proxy-2nxmx\" (UID: \"b412f1c5-6e45-4611-951c-d875f3dba59f\") " pod="kube-system/kube-proxy-2nxmx"
May 15 08:54:18.096635 kubelet[1899]: E0515 08:54:18.096332 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-xbjls lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-kzb9c" podUID="cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"
May 15 08:54:18.116801 systemd[1]: Created slice kubepods-besteffort-pod17a8ce7e_f446_434e_8cef_f7795095c515.slice.
May 15 08:54:18.131979 kubelet[1899]: I0515 08:54:18.131939 1899 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
May 15 08:54:18.204883 kubelet[1899]: I0515 08:54:18.204661 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17a8ce7e-f446-434e-8cef-f7795095c515-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pw84b\" (UID: \"17a8ce7e-f446-434e-8cef-f7795095c515\") " pod="kube-system/cilium-operator-6c4d7847fc-pw84b"
May 15 08:54:18.205351 kubelet[1899]: I0515 08:54:18.205307 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxkx\" (UniqueName: \"kubernetes.io/projected/17a8ce7e-f446-434e-8cef-f7795095c515-kube-api-access-fxxkx\") pod \"cilium-operator-6c4d7847fc-pw84b\" (UID: \"17a8ce7e-f446-434e-8cef-f7795095c515\") " pod="kube-system/cilium-operator-6c4d7847fc-pw84b"
May 15 08:54:18.229469 env[1162]: time="2025-05-15T08:54:18.229356249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nxmx,Uid:b412f1c5-6e45-4611-951c-d875f3dba59f,Namespace:kube-system,Attempt:0,}"
May 15 08:54:18.429863 env[1162]: time="2025-05-15T08:54:18.429754947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:18.430106 env[1162]: time="2025-05-15T08:54:18.430076840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:18.430216 env[1162]: time="2025-05-15T08:54:18.430189085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:18.430551 env[1162]: time="2025-05-15T08:54:18.430521729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bf52332f1c9218c5713a6e5f91bd90721c184747dd9f2ec07294449dea0bb99 pid=1977 runtime=io.containerd.runc.v2
May 15 08:54:18.460678 systemd[1]: Started cri-containerd-7bf52332f1c9218c5713a6e5f91bd90721c184747dd9f2ec07294449dea0bb99.scope.
May 15 08:54:18.498139 env[1162]: time="2025-05-15T08:54:18.498049195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nxmx,Uid:b412f1c5-6e45-4611-951c-d875f3dba59f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf52332f1c9218c5713a6e5f91bd90721c184747dd9f2ec07294449dea0bb99\""
May 15 08:54:18.503856 env[1162]: time="2025-05-15T08:54:18.503811751Z" level=info msg="CreateContainer within sandbox \"7bf52332f1c9218c5713a6e5f91bd90721c184747dd9f2ec07294449dea0bb99\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 08:54:18.508307 kubelet[1899]: I0515 08:54:18.508268 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-net\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508511 kubelet[1899]: I0515 08:54:18.508341 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbjls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-kube-api-access-xbjls\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508511 kubelet[1899]: I0515 08:54:18.508457 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cni-path\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508609 kubelet[1899]: I0515 08:54:18.508509 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hostproc\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508609 kubelet[1899]: I0515 08:54:18.508582 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-bpf-maps\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508609 kubelet[1899]: I0515 08:54:18.508605 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-lib-modules\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508769 kubelet[1899]: I0515 08:54:18.508644 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-etc-cni-netd\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508769 kubelet[1899]: I0515 08:54:18.508668 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-kernel\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508769 kubelet[1899]: I0515 08:54:18.508696 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-cgroup\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508769 kubelet[1899]: I0515 08:54:18.508742 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-xtables-lock\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.508769 kubelet[1899]: I0515 08:54:18.508760 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-run\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.509003 kubelet[1899]: I0515 08:54:18.508977 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.509066 kubelet[1899]: I0515 08:54:18.509032 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510455 kubelet[1899]: I0515 08:54:18.509455 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510455 kubelet[1899]: I0515 08:54:18.509492 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cni-path" (OuterVolumeSpecName: "cni-path") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510455 kubelet[1899]: I0515 08:54:18.509509 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hostproc" (OuterVolumeSpecName: "hostproc") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510455 kubelet[1899]: I0515 08:54:18.509587 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510455 kubelet[1899]: I0515 08:54:18.509743 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510893 kubelet[1899]: I0515 08:54:18.509767 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510893 kubelet[1899]: I0515 08:54:18.509782 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.510893 kubelet[1899]: I0515 08:54:18.509784 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 15 08:54:18.512027 kubelet[1899]: I0515 08:54:18.511969 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-kube-api-access-xbjls" (OuterVolumeSpecName: "kube-api-access-xbjls") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "kube-api-access-xbjls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 08:54:18.537346 env[1162]: time="2025-05-15T08:54:18.537279115Z" level=info msg="CreateContainer within sandbox \"7bf52332f1c9218c5713a6e5f91bd90721c184747dd9f2ec07294449dea0bb99\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84a8856475fc47a339d23b92200b12438c0082e061023734358131740bb3c4d1\""
May 15 08:54:18.539061 env[1162]: time="2025-05-15T08:54:18.538444997Z" level=info msg="StartContainer for \"84a8856475fc47a339d23b92200b12438c0082e061023734358131740bb3c4d1\""
May 15 08:54:18.559173 systemd[1]: Started cri-containerd-84a8856475fc47a339d23b92200b12438c0082e061023734358131740bb3c4d1.scope.
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609704 1899 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-etc-cni-netd\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609744 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609759 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-cgroup\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609792 1899 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-xtables-lock\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609805 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-run\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609824 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-host-proc-sys-net\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.609929 kubelet[1899]: I0515 08:54:18.609838 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xbjls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-kube-api-access-xbjls\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.610383 kubelet[1899]: I0515 08:54:18.609873 1899 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cni-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.610383 kubelet[1899]: I0515 08:54:18.609886 1899 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hostproc\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.610383 kubelet[1899]: I0515 08:54:18.609896 1899 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-bpf-maps\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.610383 kubelet[1899]: I0515 08:54:18.609906 1899 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-lib-modules\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:18.619193 env[1162]: time="2025-05-15T08:54:18.619085052Z" level=info msg="StartContainer for \"84a8856475fc47a339d23b92200b12438c0082e061023734358131740bb3c4d1\" returns successfully"
May 15 08:54:18.912027 kubelet[1899]: I0515 08:54:18.911993 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-clustermesh-secrets\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:18.918920 kubelet[1899]: I0515 08:54:18.918891 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 15 08:54:19.012684 kubelet[1899]: I0515 08:54:19.012639 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hubble-tls\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:19.013392 kubelet[1899]: I0515 08:54:19.013373 1899 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-clustermesh-secrets\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:19.015897 kubelet[1899]: I0515 08:54:19.015866 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 15 08:54:19.021045 env[1162]: time="2025-05-15T08:54:19.020581801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pw84b,Uid:17a8ce7e-f446-434e-8cef-f7795095c515,Namespace:kube-system,Attempt:0,}"
May 15 08:54:19.049372 env[1162]: time="2025-05-15T08:54:19.049234317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:19.049372 env[1162]: time="2025-05-15T08:54:19.049285385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:19.049798 env[1162]: time="2025-05-15T08:54:19.049300193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:19.050146 env[1162]: time="2025-05-15T08:54:19.050084167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b pid=2150 runtime=io.containerd.runc.v2
May 15 08:54:19.068350 systemd[1]: Started cri-containerd-787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b.scope.
May 15 08:54:19.113724 kubelet[1899]: I0515 08:54:19.113677 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-config-path\") pod \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\" (UID: \"cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e\") "
May 15 08:54:19.113937 kubelet[1899]: I0515 08:54:19.113736 1899 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-hubble-tls\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:19.115695 kubelet[1899]: I0515 08:54:19.115666 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" (UID: "cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 15 08:54:19.134143 env[1162]: time="2025-05-15T08:54:19.132893527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pw84b,Uid:17a8ce7e-f446-434e-8cef-f7795095c515,Namespace:kube-system,Attempt:0,} returns sandbox id \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\""
May 15 08:54:19.134902 env[1162]: time="2025-05-15T08:54:19.134874682Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 08:54:19.140860 systemd[1]: var-lib-kubelet-pods-cde9f3c0\x2dd1b4\x2d4ca7\x2d932c\x2d40cb7f5c515e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbjls.mount: Deactivated successfully.
May 15 08:54:19.215345 kubelet[1899]: I0515 08:54:19.214545 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e-cilium-config-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\""
May 15 08:54:19.414624 systemd[1]: Removed slice kubepods-burstable-podcde9f3c0_d1b4_4ca7_932c_40cb7f5c515e.slice.
May 15 08:54:19.426606 kubelet[1899]: I0515 08:54:19.421616 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2nxmx" podStartSLOduration=2.421552275 podStartE2EDuration="2.421552275s" podCreationTimestamp="2025-05-15 08:54:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 08:54:19.42062058 +0000 UTC m=+5.696841886" watchObservedRunningTime="2025-05-15 08:54:19.421552275 +0000 UTC m=+5.697773591"
May 15 08:54:19.503846 systemd[1]: Created slice kubepods-burstable-pod7258f9b5_5376_4051_9abf_ffb49980a2b6.slice.
May 15 08:54:19.620884 kubelet[1899]: I0515 08:54:19.620788 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-config-path\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.621395 kubelet[1899]: I0515 08:54:19.621349 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-bpf-maps\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.621908 kubelet[1899]: I0515 08:54:19.621856 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-net\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.622554 kubelet[1899]: I0515 08:54:19.622485 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-run\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.623133 kubelet[1899]: I0515 08:54:19.623089 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-lib-modules\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.624285 kubelet[1899]: I0515 08:54:19.624241 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-cgroup\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.624791 kubelet[1899]: I0515 08:54:19.624708 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-hubble-tls\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.625753 kubelet[1899]: I0515 08:54:19.624998 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-hostproc\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.627594 kubelet[1899]: I0515 08:54:19.627514 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-xtables-lock\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.627930 kubelet[1899]: I0515 08:54:19.627887 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-etc-cni-netd\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.628208 kubelet[1899]: I0515 08:54:19.628164 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cni-path\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.628718 kubelet[1899]: I0515 08:54:19.628673 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-kernel\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.629068 kubelet[1899]: I0515 08:54:19.629023 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7258f9b5-5376-4051-9abf-ffb49980a2b6-clustermesh-secrets\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.629556 kubelet[1899]: I0515 08:54:19.629484 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5k54\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-kube-api-access-z5k54\") pod \"cilium-xl7nl\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " pod="kube-system/cilium-xl7nl"
May 15 08:54:19.813519 env[1162]: time="2025-05-15T08:54:19.813376991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xl7nl,Uid:7258f9b5-5376-4051-9abf-ffb49980a2b6,Namespace:kube-system,Attempt:0,}"
May 15 08:54:19.873971 env[1162]: time="2025-05-15T08:54:19.873812549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:54:19.874554 env[1162]: time="2025-05-15T08:54:19.873912810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:54:19.874554 env[1162]: time="2025-05-15T08:54:19.873947367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:54:19.874554 env[1162]: time="2025-05-15T08:54:19.874297133Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663 pid=2232 runtime=io.containerd.runc.v2
May 15 08:54:19.899903 systemd[1]: Started cri-containerd-942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663.scope.
May 15 08:54:19.931512 env[1162]: time="2025-05-15T08:54:19.931453252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xl7nl,Uid:7258f9b5-5376-4051-9abf-ffb49980a2b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\""
May 15 08:54:20.339834 kubelet[1899]: I0515 08:54:20.339762 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e" path="/var/lib/kubelet/pods/cde9f3c0-d1b4-4ca7-932c-40cb7f5c515e/volumes"
May 15 08:54:20.758782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117781035.mount: Deactivated successfully.
May 15 08:54:21.880922 env[1162]: time="2025-05-15T08:54:21.880819667Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:21.885126 env[1162]: time="2025-05-15T08:54:21.885051413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:21.891807 env[1162]: time="2025-05-15T08:54:21.891732222Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 15 08:54:21.899872 env[1162]: time="2025-05-15T08:54:21.899817263Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 08:54:21.901568 env[1162]: time="2025-05-15T08:54:21.901512420Z" level=info msg="CreateContainer within sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 08:54:21.901852 env[1162]: time="2025-05-15T08:54:21.901788626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:21.935487 env[1162]: time="2025-05-15T08:54:21.935329167Z" level=info msg="CreateContainer within sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\""
May 15 08:54:21.938720 env[1162]: time="2025-05-15T08:54:21.937571244Z" level=info msg="StartContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\""
May 15 08:54:21.982743 systemd[1]: Started cri-containerd-73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4.scope.
May 15 08:54:21.995732 systemd[1]: run-containerd-runc-k8s.io-73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4-runc.jCnrxT.mount: Deactivated successfully.
May 15 08:54:22.035701 env[1162]: time="2025-05-15T08:54:22.035653947Z" level=info msg="StartContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" returns successfully"
May 15 08:54:22.443716 kubelet[1899]: I0515 08:54:22.443650 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pw84b" podStartSLOduration=1.682978539 podStartE2EDuration="4.443630763s" podCreationTimestamp="2025-05-15 08:54:18 +0000 UTC" firstStartedPulling="2025-05-15 08:54:19.134237387 +0000 UTC m=+5.410458654" lastFinishedPulling="2025-05-15 08:54:21.894889562 +0000 UTC m=+8.171110878" observedRunningTime="2025-05-15 08:54:22.442606935 +0000 UTC m=+8.718828211" watchObservedRunningTime="2025-05-15 08:54:22.443630763 +0000 UTC m=+8.719852029"
May 15 08:54:30.065942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244920793.mount: Deactivated successfully.
May 15 08:54:34.567008 env[1162]: time="2025-05-15T08:54:34.566493002Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:34.577169 env[1162]: time="2025-05-15T08:54:34.577118171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:34.580540 env[1162]: time="2025-05-15T08:54:34.580412068Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 08:54:34.581618 env[1162]: time="2025-05-15T08:54:34.581579212Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 15 08:54:34.592126 env[1162]: time="2025-05-15T08:54:34.592012306Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 08:54:34.620877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539346110.mount: Deactivated successfully.
May 15 08:54:34.626826 env[1162]: time="2025-05-15T08:54:34.626788878Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\"" May 15 08:54:34.628882 env[1162]: time="2025-05-15T08:54:34.627725904Z" level=info msg="StartContainer for \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\"" May 15 08:54:34.669712 systemd[1]: Started cri-containerd-f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5.scope. May 15 08:54:34.736379 env[1162]: time="2025-05-15T08:54:34.736306326Z" level=info msg="StartContainer for \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\" returns successfully" May 15 08:54:34.741103 systemd[1]: cri-containerd-f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5.scope: Deactivated successfully. May 15 08:54:35.613764 systemd[1]: run-containerd-runc-k8s.io-f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5-runc.tBsXwZ.mount: Deactivated successfully. May 15 08:54:35.613973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5-rootfs.mount: Deactivated successfully. 
May 15 08:54:36.276372 env[1162]: time="2025-05-15T08:54:36.276033341Z" level=info msg="shim disconnected" id=f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5 May 15 08:54:36.277519 env[1162]: time="2025-05-15T08:54:36.277408679Z" level=warning msg="cleaning up after shim disconnected" id=f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5 namespace=k8s.io May 15 08:54:36.277785 env[1162]: time="2025-05-15T08:54:36.277742011Z" level=info msg="cleaning up dead shim" May 15 08:54:36.295175 env[1162]: time="2025-05-15T08:54:36.295058869Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2356 runtime=io.containerd.runc.v2\n" May 15 08:54:36.499302 env[1162]: time="2025-05-15T08:54:36.499216959Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 08:54:36.550535 env[1162]: time="2025-05-15T08:54:36.550379028Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\"" May 15 08:54:36.552507 env[1162]: time="2025-05-15T08:54:36.551639268Z" level=info msg="StartContainer for \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\"" May 15 08:54:36.599183 systemd[1]: Started cri-containerd-845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d.scope. May 15 08:54:36.648918 env[1162]: time="2025-05-15T08:54:36.648850764Z" level=info msg="StartContainer for \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\" returns successfully" May 15 08:54:36.663836 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 08:54:36.664345 systemd[1]: Stopped systemd-sysctl.service. 
May 15 08:54:36.668007 systemd[1]: Stopping systemd-sysctl.service... May 15 08:54:36.670757 systemd[1]: Starting systemd-sysctl.service... May 15 08:54:36.681925 systemd[1]: cri-containerd-845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d.scope: Deactivated successfully. May 15 08:54:36.688546 systemd[1]: Finished systemd-sysctl.service. May 15 08:54:36.712179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d-rootfs.mount: Deactivated successfully. May 15 08:54:36.716176 env[1162]: time="2025-05-15T08:54:36.716130869Z" level=info msg="shim disconnected" id=845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d May 15 08:54:36.716363 env[1162]: time="2025-05-15T08:54:36.716340737Z" level=warning msg="cleaning up after shim disconnected" id=845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d namespace=k8s.io May 15 08:54:36.716542 env[1162]: time="2025-05-15T08:54:36.716522482Z" level=info msg="cleaning up dead shim" May 15 08:54:36.725281 env[1162]: time="2025-05-15T08:54:36.725238405Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2421 runtime=io.containerd.runc.v2\n" May 15 08:54:37.507486 env[1162]: time="2025-05-15T08:54:37.506950879Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 08:54:37.583505 env[1162]: time="2025-05-15T08:54:37.583348515Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\"" May 15 08:54:37.585239 env[1162]: time="2025-05-15T08:54:37.584318755Z" level=info msg="StartContainer for 
\"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\"" May 15 08:54:37.607328 systemd[1]: Started cri-containerd-7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7.scope. May 15 08:54:37.610714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3353455730.mount: Deactivated successfully. May 15 08:54:37.652869 env[1162]: time="2025-05-15T08:54:37.652804474Z" level=info msg="StartContainer for \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\" returns successfully" May 15 08:54:37.652882 systemd[1]: cri-containerd-7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7.scope: Deactivated successfully. May 15 08:54:37.679075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7-rootfs.mount: Deactivated successfully. May 15 08:54:37.689501 env[1162]: time="2025-05-15T08:54:37.689403676Z" level=info msg="shim disconnected" id=7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7 May 15 08:54:37.689685 env[1162]: time="2025-05-15T08:54:37.689490511Z" level=warning msg="cleaning up after shim disconnected" id=7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7 namespace=k8s.io May 15 08:54:37.689685 env[1162]: time="2025-05-15T08:54:37.689533491Z" level=info msg="cleaning up dead shim" May 15 08:54:37.699278 env[1162]: time="2025-05-15T08:54:37.699226484Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2478 runtime=io.containerd.runc.v2\n" May 15 08:54:38.528759 env[1162]: time="2025-05-15T08:54:38.528668742Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 08:54:38.573904 env[1162]: time="2025-05-15T08:54:38.573728507Z" level=info msg="CreateContainer within sandbox 
\"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\"" May 15 08:54:38.575309 env[1162]: time="2025-05-15T08:54:38.575229412Z" level=info msg="StartContainer for \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\"" May 15 08:54:38.617608 systemd[1]: Started cri-containerd-029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0.scope. May 15 08:54:38.661224 systemd[1]: cri-containerd-029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0.scope: Deactivated successfully. May 15 08:54:38.663154 env[1162]: time="2025-05-15T08:54:38.662971716Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7258f9b5_5376_4051_9abf_ffb49980a2b6.slice/cri-containerd-029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0.scope/memory.events\": no such file or directory" May 15 08:54:38.668740 env[1162]: time="2025-05-15T08:54:38.668673392Z" level=info msg="StartContainer for \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\" returns successfully" May 15 08:54:38.699035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0-rootfs.mount: Deactivated successfully. 
May 15 08:54:38.709309 env[1162]: time="2025-05-15T08:54:38.709262794Z" level=info msg="shim disconnected" id=029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0 May 15 08:54:38.709562 env[1162]: time="2025-05-15T08:54:38.709310033Z" level=warning msg="cleaning up after shim disconnected" id=029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0 namespace=k8s.io May 15 08:54:38.709562 env[1162]: time="2025-05-15T08:54:38.709323008Z" level=info msg="cleaning up dead shim" May 15 08:54:38.718456 env[1162]: time="2025-05-15T08:54:38.718394570Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:54:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2534 runtime=io.containerd.runc.v2\n" May 15 08:54:39.543592 env[1162]: time="2025-05-15T08:54:39.543298646Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 08:54:39.592607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279383780.mount: Deactivated successfully. May 15 08:54:39.596315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178854626.mount: Deactivated successfully. May 15 08:54:39.607112 env[1162]: time="2025-05-15T08:54:39.607050778Z" level=info msg="CreateContainer within sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\"" May 15 08:54:39.610766 env[1162]: time="2025-05-15T08:54:39.610719740Z" level=info msg="StartContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\"" May 15 08:54:39.647348 systemd[1]: run-containerd-runc-k8s.io-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578-runc.GTL6r0.mount: Deactivated successfully. 
May 15 08:54:39.653502 systemd[1]: Started cri-containerd-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578.scope. May 15 08:54:39.730139 env[1162]: time="2025-05-15T08:54:39.730030589Z" level=info msg="StartContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" returns successfully" May 15 08:54:39.877585 kubelet[1899]: I0515 08:54:39.876590 1899 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 08:54:39.919283 systemd[1]: Created slice kubepods-burstable-pod14cce420_6c64_4822_9df6_66fa026165a5.slice. May 15 08:54:39.928766 systemd[1]: Created slice kubepods-burstable-pod02554a12_b24c_4b65_ac17_cdd34e01cbfe.slice. May 15 08:54:40.010363 kubelet[1899]: I0515 08:54:40.010267 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14cce420-6c64-4822-9df6-66fa026165a5-config-volume\") pod \"coredns-668d6bf9bc-9t9px\" (UID: \"14cce420-6c64-4822-9df6-66fa026165a5\") " pod="kube-system/coredns-668d6bf9bc-9t9px" May 15 08:54:40.010674 kubelet[1899]: I0515 08:54:40.010633 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75s2c\" (UniqueName: \"kubernetes.io/projected/02554a12-b24c-4b65-ac17-cdd34e01cbfe-kube-api-access-75s2c\") pod \"coredns-668d6bf9bc-6962h\" (UID: \"02554a12-b24c-4b65-ac17-cdd34e01cbfe\") " pod="kube-system/coredns-668d6bf9bc-6962h" May 15 08:54:40.010934 kubelet[1899]: I0515 08:54:40.010884 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nczqk\" (UniqueName: \"kubernetes.io/projected/14cce420-6c64-4822-9df6-66fa026165a5-kube-api-access-nczqk\") pod \"coredns-668d6bf9bc-9t9px\" (UID: \"14cce420-6c64-4822-9df6-66fa026165a5\") " pod="kube-system/coredns-668d6bf9bc-9t9px" May 15 08:54:40.011178 kubelet[1899]: I0515 08:54:40.011137 1899 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02554a12-b24c-4b65-ac17-cdd34e01cbfe-config-volume\") pod \"coredns-668d6bf9bc-6962h\" (UID: \"02554a12-b24c-4b65-ac17-cdd34e01cbfe\") " pod="kube-system/coredns-668d6bf9bc-6962h" May 15 08:54:40.223970 env[1162]: time="2025-05-15T08:54:40.223389967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9t9px,Uid:14cce420-6c64-4822-9df6-66fa026165a5,Namespace:kube-system,Attempt:0,}" May 15 08:54:40.236484 env[1162]: time="2025-05-15T08:54:40.236407702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6962h,Uid:02554a12-b24c-4b65-ac17-cdd34e01cbfe,Namespace:kube-system,Attempt:0,}" May 15 08:54:40.636497 systemd[1]: run-containerd-runc-k8s.io-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578-runc.k4Np88.mount: Deactivated successfully. May 15 08:54:41.722918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 15 08:54:41.723478 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 08:54:41.723747 systemd-networkd[990]: cilium_host: Link UP May 15 08:54:41.724146 systemd-networkd[990]: cilium_net: Link UP May 15 08:54:41.726213 systemd-networkd[990]: cilium_net: Gained carrier May 15 08:54:41.727417 systemd-networkd[990]: cilium_host: Gained carrier May 15 08:54:41.837723 systemd-networkd[990]: cilium_vxlan: Link UP May 15 08:54:41.837732 systemd-networkd[990]: cilium_vxlan: Gained carrier May 15 08:54:41.884614 systemd-networkd[990]: cilium_net: Gained IPv6LL May 15 08:54:42.160510 kernel: NET: Registered PF_ALG protocol family May 15 08:54:42.476716 systemd-networkd[990]: cilium_host: Gained IPv6LL May 15 08:54:43.111184 systemd-networkd[990]: lxc_health: Link UP May 15 08:54:43.125625 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 08:54:43.126640 systemd-networkd[990]: 
lxc_health: Gained carrier May 15 08:54:43.304254 systemd-networkd[990]: lxc4df592abbc15: Link UP May 15 08:54:43.316549 kernel: eth0: renamed from tmp2f196 May 15 08:54:43.323381 systemd-networkd[990]: lxc2be4168a6d61: Link UP May 15 08:54:43.331500 kernel: eth0: renamed from tmpb5b10 May 15 08:54:43.337499 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4df592abbc15: link becomes ready May 15 08:54:43.338234 systemd-networkd[990]: lxc4df592abbc15: Gained carrier May 15 08:54:43.345648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2be4168a6d61: link becomes ready May 15 08:54:43.345940 systemd-networkd[990]: lxc2be4168a6d61: Gained carrier May 15 08:54:43.581312 systemd-networkd[990]: cilium_vxlan: Gained IPv6LL May 15 08:54:43.851770 kubelet[1899]: I0515 08:54:43.851614 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xl7nl" podStartSLOduration=10.199242646 podStartE2EDuration="24.851575699s" podCreationTimestamp="2025-05-15 08:54:19 +0000 UTC" firstStartedPulling="2025-05-15 08:54:19.933002573 +0000 UTC m=+6.209223839" lastFinishedPulling="2025-05-15 08:54:34.585335616 +0000 UTC m=+20.861556892" observedRunningTime="2025-05-15 08:54:40.57097455 +0000 UTC m=+26.847195826" watchObservedRunningTime="2025-05-15 08:54:43.851575699 +0000 UTC m=+30.127796965" May 15 08:54:44.215609 systemd-networkd[990]: lxc_health: Gained IPv6LL May 15 08:54:44.653017 systemd-networkd[990]: lxc4df592abbc15: Gained IPv6LL May 15 08:54:44.716688 systemd-networkd[990]: lxc2be4168a6d61: Gained IPv6LL May 15 08:54:48.044413 env[1162]: time="2025-05-15T08:54:48.044249840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 08:54:48.044413 env[1162]: time="2025-05-15T08:54:48.044381039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 08:54:48.046043 env[1162]: time="2025-05-15T08:54:48.044413981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 08:54:48.046043 env[1162]: time="2025-05-15T08:54:48.044873972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7 pid=3071 runtime=io.containerd.runc.v2 May 15 08:54:48.085673 systemd[1]: run-containerd-runc-k8s.io-b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7-runc.A0P4tE.mount: Deactivated successfully. May 15 08:54:48.096015 systemd[1]: Started cri-containerd-b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7.scope. May 15 08:54:48.116157 env[1162]: time="2025-05-15T08:54:48.116040478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 08:54:48.116537 env[1162]: time="2025-05-15T08:54:48.116098057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 08:54:48.116721 env[1162]: time="2025-05-15T08:54:48.116509225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 08:54:48.117111 env[1162]: time="2025-05-15T08:54:48.117034379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f196256deaa2dda5b9fa767ba0a2d57541cf89e1c81bc3e8dc022aa24ce7dfb pid=3099 runtime=io.containerd.runc.v2 May 15 08:54:48.145641 systemd[1]: Started cri-containerd-2f196256deaa2dda5b9fa767ba0a2d57541cf89e1c81bc3e8dc022aa24ce7dfb.scope. 
May 15 08:54:48.199638 env[1162]: time="2025-05-15T08:54:48.199485137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6962h,Uid:02554a12-b24c-4b65-ac17-cdd34e01cbfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7\"" May 15 08:54:48.209089 env[1162]: time="2025-05-15T08:54:48.209031009Z" level=info msg="CreateContainer within sandbox \"b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 08:54:48.238971 env[1162]: time="2025-05-15T08:54:48.238911776Z" level=info msg="CreateContainer within sandbox \"b5b10525de059003204830a9574220d1d48aebe9a1d0a5bdb0314648ea09a1e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e0c3f736631009a4f5504d9dec88323867c7794c6f0346ccd70195a695703bdd\"" May 15 08:54:48.240092 env[1162]: time="2025-05-15T08:54:48.239943569Z" level=info msg="StartContainer for \"e0c3f736631009a4f5504d9dec88323867c7794c6f0346ccd70195a695703bdd\"" May 15 08:54:48.261388 env[1162]: time="2025-05-15T08:54:48.261325217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9t9px,Uid:14cce420-6c64-4822-9df6-66fa026165a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f196256deaa2dda5b9fa767ba0a2d57541cf89e1c81bc3e8dc022aa24ce7dfb\"" May 15 08:54:48.269035 env[1162]: time="2025-05-15T08:54:48.268972955Z" level=info msg="CreateContainer within sandbox \"2f196256deaa2dda5b9fa767ba0a2d57541cf89e1c81bc3e8dc022aa24ce7dfb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 08:54:48.281290 systemd[1]: Started cri-containerd-e0c3f736631009a4f5504d9dec88323867c7794c6f0346ccd70195a695703bdd.scope. 
May 15 08:54:48.317898 env[1162]: time="2025-05-15T08:54:48.317734866Z" level=info msg="CreateContainer within sandbox \"2f196256deaa2dda5b9fa767ba0a2d57541cf89e1c81bc3e8dc022aa24ce7dfb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"583c3bf3c07be390868bdee841c7e3c17c79da3199df0e1afb98e398e95a605f\"" May 15 08:54:48.319455 env[1162]: time="2025-05-15T08:54:48.319259973Z" level=info msg="StartContainer for \"583c3bf3c07be390868bdee841c7e3c17c79da3199df0e1afb98e398e95a605f\"" May 15 08:54:48.366303 systemd[1]: Started cri-containerd-583c3bf3c07be390868bdee841c7e3c17c79da3199df0e1afb98e398e95a605f.scope. May 15 08:54:48.399341 env[1162]: time="2025-05-15T08:54:48.397702925Z" level=info msg="StartContainer for \"e0c3f736631009a4f5504d9dec88323867c7794c6f0346ccd70195a695703bdd\" returns successfully" May 15 08:54:48.432345 env[1162]: time="2025-05-15T08:54:48.432252881Z" level=info msg="StartContainer for \"583c3bf3c07be390868bdee841c7e3c17c79da3199df0e1afb98e398e95a605f\" returns successfully" May 15 08:54:48.619136 kubelet[1899]: I0515 08:54:48.618935 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6962h" podStartSLOduration=30.618902668 podStartE2EDuration="30.618902668s" podCreationTimestamp="2025-05-15 08:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 08:54:48.592721635 +0000 UTC m=+34.868942931" watchObservedRunningTime="2025-05-15 08:54:48.618902668 +0000 UTC m=+34.895123964" May 15 08:54:48.640637 kubelet[1899]: I0515 08:54:48.640548 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9t9px" podStartSLOduration=30.640505034 podStartE2EDuration="30.640505034s" podCreationTimestamp="2025-05-15 08:54:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 08:54:48.618791949 +0000 UTC m=+34.895013225" watchObservedRunningTime="2025-05-15 08:54:48.640505034 +0000 UTC m=+34.916726310" May 15 08:58:35.205860 systemd[1]: Started sshd@5-172.24.4.191:22-172.24.4.1:54152.service. May 15 08:58:36.770516 sshd[3255]: Accepted publickey for core from 172.24.4.1 port 54152 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:58:36.774869 sshd[3255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:58:36.792305 systemd-logind[1148]: New session 6 of user core. May 15 08:58:36.793873 systemd[1]: Started session-6.scope. May 15 08:58:37.674927 sshd[3255]: pam_unix(sshd:session): session closed for user core May 15 08:58:37.680637 systemd[1]: sshd@5-172.24.4.191:22-172.24.4.1:54152.service: Deactivated successfully. May 15 08:58:37.683057 systemd[1]: session-6.scope: Deactivated successfully. May 15 08:58:37.684695 systemd-logind[1148]: Session 6 logged out. Waiting for processes to exit. May 15 08:58:37.690705 systemd-logind[1148]: Removed session 6. May 15 08:58:42.686114 systemd[1]: Started sshd@6-172.24.4.191:22-172.24.4.1:54156.service. May 15 08:58:43.985177 sshd[3267]: Accepted publickey for core from 172.24.4.1 port 54156 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:58:43.987993 sshd[3267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:58:44.000512 systemd-logind[1148]: New session 7 of user core. May 15 08:58:44.001169 systemd[1]: Started session-7.scope. May 15 08:58:44.759148 sshd[3267]: pam_unix(sshd:session): session closed for user core May 15 08:58:44.764865 systemd[1]: sshd@6-172.24.4.191:22-172.24.4.1:54156.service: Deactivated successfully. May 15 08:58:44.765695 systemd[1]: session-7.scope: Deactivated successfully. May 15 08:58:44.766907 systemd-logind[1148]: Session 7 logged out. Waiting for processes to exit. 
May 15 08:58:44.769365 systemd-logind[1148]: Removed session 7. May 15 08:58:49.768705 systemd[1]: Started sshd@7-172.24.4.191:22-172.24.4.1:57986.service. May 15 08:58:51.099769 sshd[3283]: Accepted publickey for core from 172.24.4.1 port 57986 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:58:51.102165 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:58:51.115023 systemd-logind[1148]: New session 8 of user core. May 15 08:58:51.117146 systemd[1]: Started session-8.scope. May 15 08:58:51.930759 sshd[3283]: pam_unix(sshd:session): session closed for user core May 15 08:58:51.937156 systemd[1]: sshd@7-172.24.4.191:22-172.24.4.1:57986.service: Deactivated successfully. May 15 08:58:51.939361 systemd[1]: session-8.scope: Deactivated successfully. May 15 08:58:51.942568 systemd-logind[1148]: Session 8 logged out. Waiting for processes to exit. May 15 08:58:51.944821 systemd-logind[1148]: Removed session 8. May 15 08:58:56.943547 systemd[1]: Started sshd@8-172.24.4.191:22-172.24.4.1:45564.service. May 15 08:58:58.232292 sshd[3295]: Accepted publickey for core from 172.24.4.1 port 45564 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:58:58.236666 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:58:58.249042 systemd-logind[1148]: New session 9 of user core. May 15 08:58:58.250083 systemd[1]: Started session-9.scope. May 15 08:58:59.105510 sshd[3295]: pam_unix(sshd:session): session closed for user core May 15 08:58:59.117604 systemd[1]: Started sshd@9-172.24.4.191:22-172.24.4.1:45572.service. May 15 08:58:59.119070 systemd[1]: sshd@8-172.24.4.191:22-172.24.4.1:45564.service: Deactivated successfully. May 15 08:58:59.120941 systemd[1]: session-9.scope: Deactivated successfully. May 15 08:58:59.123579 systemd-logind[1148]: Session 9 logged out. Waiting for processes to exit. May 15 08:58:59.126767 systemd-logind[1148]: Removed session 9. 
May 15 08:59:00.350501 sshd[3306]: Accepted publickey for core from 172.24.4.1 port 45572 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:00.353697 sshd[3306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:00.371051 systemd-logind[1148]: New session 10 of user core. May 15 08:59:00.371913 systemd[1]: Started session-10.scope. May 15 08:59:01.127581 sshd[3306]: pam_unix(sshd:session): session closed for user core May 15 08:59:01.137239 systemd[1]: sshd@9-172.24.4.191:22-172.24.4.1:45572.service: Deactivated successfully. May 15 08:59:01.140062 systemd[1]: session-10.scope: Deactivated successfully. May 15 08:59:01.143038 systemd-logind[1148]: Session 10 logged out. Waiting for processes to exit. May 15 08:59:01.150589 systemd[1]: Started sshd@10-172.24.4.191:22-172.24.4.1:45584.service. May 15 08:59:01.154731 systemd-logind[1148]: Removed session 10. May 15 08:59:02.334008 sshd[3316]: Accepted publickey for core from 172.24.4.1 port 45584 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:02.337311 sshd[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:02.348492 systemd-logind[1148]: New session 11 of user core. May 15 08:59:02.349912 systemd[1]: Started session-11.scope. May 15 08:59:03.111208 sshd[3316]: pam_unix(sshd:session): session closed for user core May 15 08:59:03.121787 systemd[1]: sshd@10-172.24.4.191:22-172.24.4.1:45584.service: Deactivated successfully. May 15 08:59:03.124973 systemd[1]: session-11.scope: Deactivated successfully. May 15 08:59:03.126715 systemd-logind[1148]: Session 11 logged out. Waiting for processes to exit. May 15 08:59:03.131030 systemd-logind[1148]: Removed session 11. May 15 08:59:08.127390 systemd[1]: Started sshd@11-172.24.4.191:22-172.24.4.1:39336.service. 
May 15 08:59:09.467066 sshd[3328]: Accepted publickey for core from 172.24.4.1 port 39336 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:09.470815 sshd[3328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:09.490733 systemd-logind[1148]: New session 12 of user core. May 15 08:59:09.491325 systemd[1]: Started session-12.scope. May 15 08:59:10.250235 sshd[3328]: pam_unix(sshd:session): session closed for user core May 15 08:59:10.256039 systemd[1]: sshd@11-172.24.4.191:22-172.24.4.1:39336.service: Deactivated successfully. May 15 08:59:10.257884 systemd[1]: session-12.scope: Deactivated successfully. May 15 08:59:10.260083 systemd-logind[1148]: Session 12 logged out. Waiting for processes to exit. May 15 08:59:10.263150 systemd-logind[1148]: Removed session 12. May 15 08:59:15.263680 systemd[1]: Started sshd@12-172.24.4.191:22-172.24.4.1:35836.service. May 15 08:59:16.591657 sshd[3342]: Accepted publickey for core from 172.24.4.1 port 35836 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:16.594668 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:16.605687 systemd-logind[1148]: New session 13 of user core. May 15 08:59:16.607540 systemd[1]: Started session-13.scope. May 15 08:59:17.367009 sshd[3342]: pam_unix(sshd:session): session closed for user core May 15 08:59:17.376741 systemd[1]: sshd@12-172.24.4.191:22-172.24.4.1:35836.service: Deactivated successfully. May 15 08:59:17.378746 systemd[1]: session-13.scope: Deactivated successfully. May 15 08:59:17.382987 systemd-logind[1148]: Session 13 logged out. Waiting for processes to exit. May 15 08:59:17.392078 systemd[1]: Started sshd@13-172.24.4.191:22-172.24.4.1:35844.service. May 15 08:59:17.395246 systemd-logind[1148]: Removed session 13. 
May 15 08:59:18.625669 sshd[3354]: Accepted publickey for core from 172.24.4.1 port 35844 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:18.629116 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:18.640667 systemd-logind[1148]: New session 14 of user core. May 15 08:59:18.642739 systemd[1]: Started session-14.scope. May 15 08:59:19.585863 sshd[3354]: pam_unix(sshd:session): session closed for user core May 15 08:59:19.594481 systemd[1]: sshd@13-172.24.4.191:22-172.24.4.1:35844.service: Deactivated successfully. May 15 08:59:19.596604 systemd[1]: session-14.scope: Deactivated successfully. May 15 08:59:19.598585 systemd-logind[1148]: Session 14 logged out. Waiting for processes to exit. May 15 08:59:19.603756 systemd[1]: Started sshd@14-172.24.4.191:22-172.24.4.1:35848.service. May 15 08:59:19.607209 systemd-logind[1148]: Removed session 14. May 15 08:59:20.850112 sshd[3365]: Accepted publickey for core from 172.24.4.1 port 35848 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:20.853234 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:20.865254 systemd-logind[1148]: New session 15 of user core. May 15 08:59:20.865976 systemd[1]: Started session-15.scope. May 15 08:59:22.891173 sshd[3365]: pam_unix(sshd:session): session closed for user core May 15 08:59:22.903919 systemd[1]: Started sshd@15-172.24.4.191:22-172.24.4.1:35858.service. May 15 08:59:22.914552 systemd[1]: sshd@14-172.24.4.191:22-172.24.4.1:35848.service: Deactivated successfully. May 15 08:59:22.916279 systemd[1]: session-15.scope: Deactivated successfully. May 15 08:59:22.919260 systemd-logind[1148]: Session 15 logged out. Waiting for processes to exit. May 15 08:59:22.922315 systemd-logind[1148]: Removed session 15. 
May 15 08:59:24.282175 sshd[3381]: Accepted publickey for core from 172.24.4.1 port 35858 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:24.285846 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:24.299342 systemd[1]: Started session-16.scope. May 15 08:59:24.300259 systemd-logind[1148]: New session 16 of user core. May 15 08:59:25.236636 sshd[3381]: pam_unix(sshd:session): session closed for user core May 15 08:59:25.251119 systemd[1]: sshd@15-172.24.4.191:22-172.24.4.1:35858.service: Deactivated successfully. May 15 08:59:25.254734 systemd[1]: session-16.scope: Deactivated successfully. May 15 08:59:25.257132 systemd-logind[1148]: Session 16 logged out. Waiting for processes to exit. May 15 08:59:25.275622 systemd[1]: Started sshd@16-172.24.4.191:22-172.24.4.1:58430.service. May 15 08:59:25.282242 systemd-logind[1148]: Removed session 16. May 15 08:59:26.618580 sshd[3391]: Accepted publickey for core from 172.24.4.1 port 58430 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:26.621568 sshd[3391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:26.632936 systemd-logind[1148]: New session 17 of user core. May 15 08:59:26.634179 systemd[1]: Started session-17.scope. May 15 08:59:27.435770 sshd[3391]: pam_unix(sshd:session): session closed for user core May 15 08:59:27.441707 systemd[1]: sshd@16-172.24.4.191:22-172.24.4.1:58430.service: Deactivated successfully. May 15 08:59:27.443675 systemd[1]: session-17.scope: Deactivated successfully. May 15 08:59:27.445175 systemd-logind[1148]: Session 17 logged out. Waiting for processes to exit. May 15 08:59:27.447803 systemd-logind[1148]: Removed session 17. May 15 08:59:32.445961 systemd[1]: Started sshd@17-172.24.4.191:22-172.24.4.1:58432.service. 
May 15 08:59:33.640007 sshd[3405]: Accepted publickey for core from 172.24.4.1 port 58432 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:33.643012 sshd[3405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:33.655228 systemd-logind[1148]: New session 18 of user core. May 15 08:59:33.658800 systemd[1]: Started session-18.scope. May 15 08:59:34.453534 sshd[3405]: pam_unix(sshd:session): session closed for user core May 15 08:59:34.459796 systemd[1]: sshd@17-172.24.4.191:22-172.24.4.1:58432.service: Deactivated successfully. May 15 08:59:34.463257 systemd[1]: session-18.scope: Deactivated successfully. May 15 08:59:34.464784 systemd-logind[1148]: Session 18 logged out. Waiting for processes to exit. May 15 08:59:34.467888 systemd-logind[1148]: Removed session 18. May 15 08:59:39.467536 systemd[1]: Started sshd@18-172.24.4.191:22-172.24.4.1:47804.service. May 15 08:59:40.652065 sshd[3417]: Accepted publickey for core from 172.24.4.1 port 47804 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:40.653393 sshd[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:40.664216 systemd-logind[1148]: New session 19 of user core. May 15 08:59:40.666240 systemd[1]: Started session-19.scope. May 15 08:59:41.464397 sshd[3417]: pam_unix(sshd:session): session closed for user core May 15 08:59:41.471939 systemd[1]: sshd@18-172.24.4.191:22-172.24.4.1:47804.service: Deactivated successfully. May 15 08:59:41.473849 systemd[1]: session-19.scope: Deactivated successfully. May 15 08:59:41.475341 systemd-logind[1148]: Session 19 logged out. Waiting for processes to exit. May 15 08:59:41.477565 systemd-logind[1148]: Removed session 19. May 15 08:59:46.491162 systemd[1]: Started sshd@19-172.24.4.191:22-172.24.4.1:43604.service. 
May 15 08:59:47.608671 sshd[3429]: Accepted publickey for core from 172.24.4.1 port 43604 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:47.612111 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:47.625577 systemd-logind[1148]: New session 20 of user core. May 15 08:59:47.626917 systemd[1]: Started session-20.scope. May 15 08:59:48.442020 sshd[3429]: pam_unix(sshd:session): session closed for user core May 15 08:59:48.458025 systemd[1]: Started sshd@20-172.24.4.191:22-172.24.4.1:43606.service. May 15 08:59:48.459582 systemd[1]: sshd@19-172.24.4.191:22-172.24.4.1:43604.service: Deactivated successfully. May 15 08:59:48.461810 systemd[1]: session-20.scope: Deactivated successfully. May 15 08:59:48.474514 systemd-logind[1148]: Session 20 logged out. Waiting for processes to exit. May 15 08:59:48.484130 systemd-logind[1148]: Removed session 20. May 15 08:59:49.606738 sshd[3439]: Accepted publickey for core from 172.24.4.1 port 43606 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 08:59:49.610010 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 08:59:49.623945 systemd-logind[1148]: New session 21 of user core. May 15 08:59:49.624894 systemd[1]: Started session-21.scope. May 15 08:59:51.869228 env[1162]: time="2025-05-15T08:59:51.868679837Z" level=info msg="StopContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" with timeout 30 (s)" May 15 08:59:51.871767 env[1162]: time="2025-05-15T08:59:51.871724104Z" level=info msg="Stop container \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" with signal terminated" May 15 08:59:51.907866 systemd[1]: run-containerd-runc-k8s.io-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578-runc.p1l8d5.mount: Deactivated successfully. 
May 15 08:59:51.923476 systemd[1]: cri-containerd-73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4.scope: Deactivated successfully. May 15 08:59:51.923828 systemd[1]: cri-containerd-73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4.scope: Consumed 1.351s CPU time. May 15 08:59:51.954013 env[1162]: time="2025-05-15T08:59:51.953832129Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 08:59:51.960741 env[1162]: time="2025-05-15T08:59:51.960683974Z" level=info msg="StopContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" with timeout 2 (s)" May 15 08:59:51.961129 env[1162]: time="2025-05-15T08:59:51.961082815Z" level=info msg="Stop container \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" with signal terminated" May 15 08:59:51.973087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4-rootfs.mount: Deactivated successfully. May 15 08:59:51.983548 systemd-networkd[990]: lxc_health: Link DOWN May 15 08:59:51.983558 systemd-networkd[990]: lxc_health: Lost carrier May 15 08:59:52.013293 env[1162]: time="2025-05-15T08:59:52.013182862Z" level=info msg="shim disconnected" id=73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4 May 15 08:59:52.013293 env[1162]: time="2025-05-15T08:59:52.013240931Z" level=warning msg="cleaning up after shim disconnected" id=73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4 namespace=k8s.io May 15 08:59:52.013293 env[1162]: time="2025-05-15T08:59:52.013261391Z" level=info msg="cleaning up dead shim" May 15 08:59:52.021891 systemd[1]: cri-containerd-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578.scope: Deactivated successfully. 
May 15 08:59:52.022183 systemd[1]: cri-containerd-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578.scope: Consumed 10.930s CPU time. May 15 08:59:52.042013 env[1162]: time="2025-05-15T08:59:52.041939814Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3499 runtime=io.containerd.runc.v2\n" May 15 08:59:52.060012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578-rootfs.mount: Deactivated successfully. May 15 08:59:52.076736 env[1162]: time="2025-05-15T08:59:52.076529698Z" level=info msg="StopContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" returns successfully" May 15 08:59:52.078106 env[1162]: time="2025-05-15T08:59:52.078078487Z" level=info msg="StopPodSandbox for \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\"" May 15 08:59:52.078336 env[1162]: time="2025-05-15T08:59:52.078303371Z" level=info msg="Container to stop \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.095219 systemd[1]: cri-containerd-787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b.scope: Deactivated successfully. 
May 15 08:59:52.099103 env[1162]: time="2025-05-15T08:59:52.098989661Z" level=info msg="shim disconnected" id=51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578 May 15 08:59:52.099385 env[1162]: time="2025-05-15T08:59:52.099362583Z" level=warning msg="cleaning up after shim disconnected" id=51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578 namespace=k8s.io May 15 08:59:52.099515 env[1162]: time="2025-05-15T08:59:52.099496416Z" level=info msg="cleaning up dead shim" May 15 08:59:52.124852 env[1162]: time="2025-05-15T08:59:52.123063034Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3530 runtime=io.containerd.runc.v2\n" May 15 08:59:52.131802 env[1162]: time="2025-05-15T08:59:52.131751510Z" level=info msg="StopContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" returns successfully" May 15 08:59:52.132573 env[1162]: time="2025-05-15T08:59:52.132523775Z" level=info msg="StopPodSandbox for \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\"" May 15 08:59:52.132691 env[1162]: time="2025-05-15T08:59:52.132596432Z" level=info msg="Container to stop \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.132691 env[1162]: time="2025-05-15T08:59:52.132617742Z" level=info msg="Container to stop \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.132691 env[1162]: time="2025-05-15T08:59:52.132632490Z" level=info msg="Container to stop \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.132691 env[1162]: time="2025-05-15T08:59:52.132648290Z" level=info msg="Container to stop 
\"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.132691 env[1162]: time="2025-05-15T08:59:52.132669300Z" level=info msg="Container to stop \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 08:59:52.145576 systemd[1]: cri-containerd-942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663.scope: Deactivated successfully. May 15 08:59:52.166041 env[1162]: time="2025-05-15T08:59:52.165969985Z" level=info msg="shim disconnected" id=787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b May 15 08:59:52.166041 env[1162]: time="2025-05-15T08:59:52.166028054Z" level=warning msg="cleaning up after shim disconnected" id=787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b namespace=k8s.io May 15 08:59:52.166041 env[1162]: time="2025-05-15T08:59:52.166041078Z" level=info msg="cleaning up dead shim" May 15 08:59:52.178018 env[1162]: time="2025-05-15T08:59:52.177964742Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3572 runtime=io.containerd.runc.v2\n" May 15 08:59:52.178679 env[1162]: time="2025-05-15T08:59:52.178642530Z" level=info msg="TearDown network for sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" successfully" May 15 08:59:52.178800 env[1162]: time="2025-05-15T08:59:52.178777915Z" level=info msg="StopPodSandbox for \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" returns successfully" May 15 08:59:52.192462 env[1162]: time="2025-05-15T08:59:52.192390312Z" level=info msg="shim disconnected" id=942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663 May 15 08:59:52.192622 env[1162]: time="2025-05-15T08:59:52.192476353Z" level=warning msg="cleaning up after shim disconnected" 
id=942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663 namespace=k8s.io May 15 08:59:52.192622 env[1162]: time="2025-05-15T08:59:52.192489839Z" level=info msg="cleaning up dead shim" May 15 08:59:52.201486 env[1162]: time="2025-05-15T08:59:52.201408479Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3584 runtime=io.containerd.runc.v2\n" May 15 08:59:52.202371 env[1162]: time="2025-05-15T08:59:52.201784247Z" level=info msg="TearDown network for sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" successfully" May 15 08:59:52.202371 env[1162]: time="2025-05-15T08:59:52.201815275Z" level=info msg="StopPodSandbox for \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" returns successfully" May 15 08:59:52.303231 kubelet[1899]: I0515 08:59:52.303164 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.303932 kubelet[1899]: I0515 08:59:52.303625 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-etc-cni-netd\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.303932 kubelet[1899]: I0515 08:59:52.303863 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-xtables-lock\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.303932 kubelet[1899]: I0515 08:59:52.303894 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-run\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.303932 kubelet[1899]: I0515 08:59:52.303914 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cni-path\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.303945 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7258f9b5-5376-4051-9abf-ffb49980a2b6-clustermesh-secrets\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.303968 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-bpf-maps\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.303988 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-net\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.304012 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxxkx\" (UniqueName: \"kubernetes.io/projected/17a8ce7e-f446-434e-8cef-f7795095c515-kube-api-access-fxxkx\") pod \"17a8ce7e-f446-434e-8cef-f7795095c515\" (UID: \"17a8ce7e-f446-434e-8cef-f7795095c515\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.304033 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-hubble-tls\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304130 kubelet[1899]: I0515 08:59:52.304064 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-kernel\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304398 kubelet[1899]: I0515 08:59:52.304090 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17a8ce7e-f446-434e-8cef-f7795095c515-cilium-config-path\") pod \"17a8ce7e-f446-434e-8cef-f7795095c515\" (UID: \"17a8ce7e-f446-434e-8cef-f7795095c515\") " May 15 08:59:52.304398 kubelet[1899]: 
I0515 08:59:52.304110 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-lib-modules\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304398 kubelet[1899]: I0515 08:59:52.304126 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-hostproc\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304398 kubelet[1899]: I0515 08:59:52.304144 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-cgroup\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.304398 kubelet[1899]: I0515 08:59:52.304210 1899 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-etc-cni-netd\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.304398 kubelet[1899]: I0515 08:59:52.304270 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.304710 kubelet[1899]: I0515 08:59:52.304293 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.304710 kubelet[1899]: I0515 08:59:52.304310 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.304710 kubelet[1899]: I0515 08:59:52.304329 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.305557 kubelet[1899]: I0515 08:59:52.305526 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.308400 kubelet[1899]: I0515 08:59:52.308301 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17a8ce7e-f446-434e-8cef-f7795095c515-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "17a8ce7e-f446-434e-8cef-f7795095c515" (UID: "17a8ce7e-f446-434e-8cef-f7795095c515"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 08:59:52.309675 kubelet[1899]: I0515 08:59:52.309634 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.309842 kubelet[1899]: I0515 08:59:52.309820 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.310015 kubelet[1899]: I0515 08:59:52.309996 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.310152 kubelet[1899]: I0515 08:59:52.310134 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 08:59:52.310806 kubelet[1899]: I0515 08:59:52.310784 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 08:59:52.315100 kubelet[1899]: I0515 08:59:52.315044 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258f9b5-5376-4051-9abf-ffb49980a2b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 08:59:52.316243 kubelet[1899]: I0515 08:59:52.316213 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a8ce7e-f446-434e-8cef-f7795095c515-kube-api-access-fxxkx" (OuterVolumeSpecName: "kube-api-access-fxxkx") pod "17a8ce7e-f446-434e-8cef-f7795095c515" (UID: "17a8ce7e-f446-434e-8cef-f7795095c515"). InnerVolumeSpecName "kube-api-access-fxxkx". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 08:59:52.341888 systemd[1]: Removed slice kubepods-besteffort-pod17a8ce7e_f446_434e_8cef_f7795095c515.slice. 
May 15 08:59:52.341996 systemd[1]: kubepods-besteffort-pod17a8ce7e_f446_434e_8cef_f7795095c515.slice: Consumed 1.386s CPU time. May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.404728 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5k54\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-kube-api-access-z5k54\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.407797 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-config-path\") pod \"7258f9b5-5376-4051-9abf-ffb49980a2b6\" (UID: \"7258f9b5-5376-4051-9abf-ffb49980a2b6\") " May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.407987 1899 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-xtables-lock\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.408023 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-run\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.408050 1899 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cni-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.408075 1899 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7258f9b5-5376-4051-9abf-ffb49980a2b6-clustermesh-secrets\") on node 
\"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.410897 kubelet[1899]: I0515 08:59:52.408131 1899 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-bpf-maps\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408160 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fxxkx\" (UniqueName: \"kubernetes.io/projected/17a8ce7e-f446-434e-8cef-f7795095c515-kube-api-access-fxxkx\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408184 1899 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-hubble-tls\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408216 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408240 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-host-proc-sys-net\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408319 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/17a8ce7e-f446-434e-8cef-f7795095c515-cilium-config-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408344 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-cgroup\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.411932 kubelet[1899]: I0515 08:59:52.408367 1899 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-lib-modules\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.412779 kubelet[1899]: I0515 08:59:52.408390 1899 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7258f9b5-5376-4051-9abf-ffb49980a2b6-hostproc\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.417096 kubelet[1899]: I0515 08:59:52.417016 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 08:59:52.418850 kubelet[1899]: I0515 08:59:52.418767 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-kube-api-access-z5k54" (OuterVolumeSpecName: "kube-api-access-z5k54") pod "7258f9b5-5376-4051-9abf-ffb49980a2b6" (UID: "7258f9b5-5376-4051-9abf-ffb49980a2b6"). InnerVolumeSpecName "kube-api-access-z5k54". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 08:59:52.509497 kubelet[1899]: I0515 08:59:52.509332 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5k54\" (UniqueName: \"kubernetes.io/projected/7258f9b5-5376-4051-9abf-ffb49980a2b6-kube-api-access-z5k54\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.509497 kubelet[1899]: I0515 08:59:52.509403 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7258f9b5-5376-4051-9abf-ffb49980a2b6-cilium-config-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 08:59:52.899098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663-rootfs.mount: Deactivated successfully. May 15 08:59:52.899558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663-shm.mount: Deactivated successfully. May 15 08:59:52.899767 systemd[1]: var-lib-kubelet-pods-7258f9b5\x2d5376\x2d4051\x2d9abf\x2dffb49980a2b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz5k54.mount: Deactivated successfully. May 15 08:59:52.900061 systemd[1]: var-lib-kubelet-pods-7258f9b5\x2d5376\x2d4051\x2d9abf\x2dffb49980a2b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 08:59:52.900321 systemd[1]: var-lib-kubelet-pods-7258f9b5\x2d5376\x2d4051\x2d9abf\x2dffb49980a2b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 08:59:52.900612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b-rootfs.mount: Deactivated successfully. 
May 15 08:59:52.900761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b-shm.mount: Deactivated successfully.
May 15 08:59:52.900993 systemd[1]: var-lib-kubelet-pods-17a8ce7e\x2df446\x2d434e\x2d8cef\x2df7795095c515-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfxxkx.mount: Deactivated successfully.
May 15 08:59:52.924720 kubelet[1899]: I0515 08:59:52.924641 1899 scope.go:117] "RemoveContainer" containerID="73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4"
May 15 08:59:52.942743 env[1162]: time="2025-05-15T08:59:52.942375009Z" level=info msg="RemoveContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\""
May 15 08:59:52.962545 env[1162]: time="2025-05-15T08:59:52.962380024Z" level=info msg="RemoveContainer for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" returns successfully"
May 15 08:59:52.966202 kubelet[1899]: I0515 08:59:52.966112 1899 scope.go:117] "RemoveContainer" containerID="73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4"
May 15 08:59:52.967495 env[1162]: time="2025-05-15T08:59:52.967132672Z" level=error msg="ContainerStatus for \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\": not found"
May 15 08:59:52.970143 kubelet[1899]: E0515 08:59:52.970096 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\": not found" containerID="73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4"
May 15 08:59:52.970717 kubelet[1899]: I0515 08:59:52.970475 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4"} err="failed to get container status \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"73b910c24507436106f61281e57344f0a22c90b90ad4c4b7c45a6ef328ecb8c4\": not found"
May 15 08:59:52.970930 kubelet[1899]: I0515 08:59:52.970903 1899 scope.go:117] "RemoveContainer" containerID="51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578"
May 15 08:59:52.975236 env[1162]: time="2025-05-15T08:59:52.975200989Z" level=info msg="RemoveContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\""
May 15 08:59:52.976126 systemd[1]: Removed slice kubepods-burstable-pod7258f9b5_5376_4051_9abf_ffb49980a2b6.slice.
May 15 08:59:52.976240 systemd[1]: kubepods-burstable-pod7258f9b5_5376_4051_9abf_ffb49980a2b6.slice: Consumed 11.075s CPU time.
May 15 08:59:52.996983 env[1162]: time="2025-05-15T08:59:52.994618177Z" level=info msg="RemoveContainer for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" returns successfully"
May 15 08:59:52.997115 kubelet[1899]: I0515 08:59:52.995803 1899 scope.go:117] "RemoveContainer" containerID="029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0"
May 15 08:59:52.998504 env[1162]: time="2025-05-15T08:59:52.997528282Z" level=info msg="RemoveContainer for \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\""
May 15 08:59:53.001562 env[1162]: time="2025-05-15T08:59:53.001464490Z" level=info msg="RemoveContainer for \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\" returns successfully"
May 15 08:59:53.002008 kubelet[1899]: I0515 08:59:53.001990 1899 scope.go:117] "RemoveContainer" containerID="7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7"
May 15 08:59:53.004050 env[1162]: time="2025-05-15T08:59:53.003991163Z" level=info msg="RemoveContainer for \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\""
May 15 08:59:53.011616 env[1162]: time="2025-05-15T08:59:53.011560219Z" level=info msg="RemoveContainer for \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\" returns successfully"
May 15 08:59:53.011954 kubelet[1899]: I0515 08:59:53.011929 1899 scope.go:117] "RemoveContainer" containerID="845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d"
May 15 08:59:53.015729 env[1162]: time="2025-05-15T08:59:53.015642834Z" level=info msg="RemoveContainer for \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\""
May 15 08:59:53.020213 env[1162]: time="2025-05-15T08:59:53.020148537Z" level=info msg="RemoveContainer for \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\" returns successfully"
May 15 08:59:53.020532 kubelet[1899]: I0515 08:59:53.020510 1899 scope.go:117] "RemoveContainer" containerID="f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5"
May 15 08:59:53.022308 env[1162]: time="2025-05-15T08:59:53.022278251Z" level=info msg="RemoveContainer for \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\""
May 15 08:59:53.026071 env[1162]: time="2025-05-15T08:59:53.026036595Z" level=info msg="RemoveContainer for \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\" returns successfully"
May 15 08:59:53.026263 kubelet[1899]: I0515 08:59:53.026245 1899 scope.go:117] "RemoveContainer" containerID="51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578"
May 15 08:59:53.026651 env[1162]: time="2025-05-15T08:59:53.026593694Z" level=error msg="ContainerStatus for \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\": not found"
May 15 08:59:53.026850 kubelet[1899]: E0515 08:59:53.026821 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\": not found" containerID="51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578"
May 15 08:59:53.026943 kubelet[1899]: I0515 08:59:53.026863 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578"} err="failed to get container status \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\": rpc error: code = NotFound desc = an error occurred when try to find container \"51b0e3797bf2273c90203e60b467cc2a041a01ec183e30b556671fece65d4578\": not found"
May 15 08:59:53.026943 kubelet[1899]: I0515 08:59:53.026911 1899 scope.go:117] "RemoveContainer" containerID="029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0"
May 15 08:59:53.027312 env[1162]: time="2025-05-15T08:59:53.027237829Z" level=error msg="ContainerStatus for \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\": not found"
May 15 08:59:53.027553 kubelet[1899]: E0515 08:59:53.027528 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\": not found" containerID="029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0"
May 15 08:59:53.027689 kubelet[1899]: I0515 08:59:53.027660 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0"} err="failed to get container status \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\": rpc error: code = NotFound desc = an error occurred when try to find container \"029e3d4cce315e4763313223d169cbe0e54764b14d8d230e0da8937543cb8ca0\": not found"
May 15 08:59:53.027771 kubelet[1899]: I0515 08:59:53.027758 1899 scope.go:117] "RemoveContainer" containerID="7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7"
May 15 08:59:53.028154 env[1162]: time="2025-05-15T08:59:53.028068585Z" level=error msg="ContainerStatus for \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\": not found"
May 15 08:59:53.028308 kubelet[1899]: E0515 08:59:53.028288 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\": not found" containerID="7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7"
May 15 08:59:53.028452 kubelet[1899]: I0515 08:59:53.028399 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7"} err="failed to get container status \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a9a4742ea9d7b03b2d5e6074f9e915d1dd3b9a41cf51cf3bb811dac98bcfbb7\": not found"
May 15 08:59:53.028540 kubelet[1899]: I0515 08:59:53.028527 1899 scope.go:117] "RemoveContainer" containerID="845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d"
May 15 08:59:53.028951 env[1162]: time="2025-05-15T08:59:53.028871517Z" level=error msg="ContainerStatus for \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\": not found"
May 15 08:59:53.029173 kubelet[1899]: E0515 08:59:53.029153 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\": not found" containerID="845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d"
May 15 08:59:53.029292 kubelet[1899]: I0515 08:59:53.029268 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d"} err="failed to get container status \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\": rpc error: code = NotFound desc = an error occurred when try to find container \"845239269052db3f2493a0ab1e8c3da2c36a76e92134cb6b01f44d83dbca7c8d\": not found"
May 15 08:59:53.029450 kubelet[1899]: I0515 08:59:53.029419 1899 scope.go:117] "RemoveContainer" containerID="f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5"
May 15 08:59:53.029900 env[1162]: time="2025-05-15T08:59:53.029837037Z" level=error msg="ContainerStatus for \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\": not found"
May 15 08:59:53.030055 kubelet[1899]: E0515 08:59:53.030035 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\": not found" containerID="f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5"
May 15 08:59:53.030172 kubelet[1899]: I0515 08:59:53.030151 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5"} err="failed to get container status \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6f1e913fa0d35b572e70ab467e00f0da51e42cb0aa5ed5c73d32a3c6a518cb5\": not found"
May 15 08:59:53.940940 sshd[3439]: pam_unix(sshd:session): session closed for user core
May 15 08:59:53.950834 systemd[1]: Started sshd@21-172.24.4.191:22-172.24.4.1:55306.service.
May 15 08:59:53.953256 systemd[1]: sshd@20-172.24.4.191:22-172.24.4.1:43606.service: Deactivated successfully.
May 15 08:59:53.955278 systemd[1]: session-21.scope: Deactivated successfully.
May 15 08:59:53.955706 systemd[1]: session-21.scope: Consumed 1.143s CPU time.
May 15 08:59:53.960624 systemd-logind[1148]: Session 21 logged out. Waiting for processes to exit.
May 15 08:59:53.964800 systemd-logind[1148]: Removed session 21.
May 15 08:59:54.333515 kubelet[1899]: I0515 08:59:54.333385 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a8ce7e-f446-434e-8cef-f7795095c515" path="/var/lib/kubelet/pods/17a8ce7e-f446-434e-8cef-f7795095c515/volumes"
May 15 08:59:54.334810 kubelet[1899]: I0515 08:59:54.334737 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7258f9b5-5376-4051-9abf-ffb49980a2b6" path="/var/lib/kubelet/pods/7258f9b5-5376-4051-9abf-ffb49980a2b6/volumes"
May 15 08:59:54.576064 kubelet[1899]: E0515 08:59:54.575813 1899 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 08:59:55.276698 sshd[3602]: Accepted publickey for core from 172.24.4.1 port 55306 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40
May 15 08:59:55.280014 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 08:59:55.293054 systemd[1]: Started session-22.scope.
May 15 08:59:55.293995 systemd-logind[1148]: New session 22 of user core.
May 15 08:59:56.925015 kubelet[1899]: I0515 08:59:56.924976 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="17a8ce7e-f446-434e-8cef-f7795095c515" containerName="cilium-operator"
May 15 08:59:56.925015 kubelet[1899]: I0515 08:59:56.925007 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="7258f9b5-5376-4051-9abf-ffb49980a2b6" containerName="cilium-agent"
May 15 08:59:56.931502 systemd[1]: Created slice kubepods-burstable-pod611cc426_7d64_402c_a27a_9102a0fe32d7.slice.
May 15 08:59:56.944515 kubelet[1899]: I0515 08:59:56.944443 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-cgroup\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.944815 kubelet[1899]: I0515 08:59:56.944749 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-kernel\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945036 kubelet[1899]: I0515 08:59:56.944990 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-run\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945364 kubelet[1899]: I0515 08:59:56.945193 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-config-path\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945666 kubelet[1899]: I0515 08:59:56.945627 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-net\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945798 kubelet[1899]: I0515 08:59:56.945672 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-xtables-lock\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945798 kubelet[1899]: I0515 08:59:56.945695 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-ipsec-secrets\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945798 kubelet[1899]: I0515 08:59:56.945714 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-bpf-maps\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945798 kubelet[1899]: I0515 08:59:56.945754 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-clustermesh-secrets\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.945798 kubelet[1899]: I0515 08:59:56.945778 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7pqp\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-kube-api-access-p7pqp\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.946158 kubelet[1899]: I0515 08:59:56.945811 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-hostproc\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.946158 kubelet[1899]: I0515 08:59:56.945832 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cni-path\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.946158 kubelet[1899]: I0515 08:59:56.945878 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-hubble-tls\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.946158 kubelet[1899]: I0515 08:59:56.945916 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-etc-cni-netd\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:56.946158 kubelet[1899]: I0515 08:59:56.945935 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-lib-modules\") pod \"cilium-ltn2v\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " pod="kube-system/cilium-ltn2v"
May 15 08:59:57.034472 sshd[3602]: pam_unix(sshd:session): session closed for user core
May 15 08:59:57.038932 systemd[1]: Started sshd@22-172.24.4.191:22-172.24.4.1:55310.service.
May 15 08:59:57.041808 systemd[1]: sshd@21-172.24.4.191:22-172.24.4.1:55306.service: Deactivated successfully.
May 15 08:59:57.042716 systemd[1]: session-22.scope: Deactivated successfully.
May 15 08:59:57.042975 systemd[1]: session-22.scope: Consumed 1.118s CPU time.
May 15 08:59:57.044099 systemd-logind[1148]: Session 22 logged out. Waiting for processes to exit.
May 15 08:59:57.045892 systemd-logind[1148]: Removed session 22.
May 15 08:59:57.237185 env[1162]: time="2025-05-15T08:59:57.236904146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ltn2v,Uid:611cc426-7d64-402c-a27a-9102a0fe32d7,Namespace:kube-system,Attempt:0,}"
May 15 08:59:57.279050 env[1162]: time="2025-05-15T08:59:57.278868604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 08:59:57.279050 env[1162]: time="2025-05-15T08:59:57.278982188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 08:59:57.279517 env[1162]: time="2025-05-15T08:59:57.279027062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 08:59:57.280005 env[1162]: time="2025-05-15T08:59:57.279900850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702 pid=3626 runtime=io.containerd.runc.v2
May 15 08:59:57.311311 systemd[1]: Started cri-containerd-10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702.scope.
May 15 08:59:57.360556 env[1162]: time="2025-05-15T08:59:57.360501342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ltn2v,Uid:611cc426-7d64-402c-a27a-9102a0fe32d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\""
May 15 08:59:57.365258 env[1162]: time="2025-05-15T08:59:57.365222512Z" level=info msg="CreateContainer within sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 08:59:57.384269 env[1162]: time="2025-05-15T08:59:57.384219568Z" level=info msg="CreateContainer within sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\""
May 15 08:59:57.386648 env[1162]: time="2025-05-15T08:59:57.386612709Z" level=info msg="StartContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\""
May 15 08:59:57.410903 systemd[1]: Started cri-containerd-ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74.scope.
May 15 08:59:57.423691 systemd[1]: cri-containerd-ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74.scope: Deactivated successfully.
May 15 08:59:57.444583 env[1162]: time="2025-05-15T08:59:57.444530988Z" level=info msg="shim disconnected" id=ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74
May 15 08:59:57.444907 env[1162]: time="2025-05-15T08:59:57.444886749Z" level=warning msg="cleaning up after shim disconnected" id=ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74 namespace=k8s.io
May 15 08:59:57.445012 env[1162]: time="2025-05-15T08:59:57.444995615Z" level=info msg="cleaning up dead shim"
May 15 08:59:57.453265 env[1162]: time="2025-05-15T08:59:57.453213654Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T08:59:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 15 08:59:57.453915 env[1162]: time="2025-05-15T08:59:57.453770414Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
May 15 08:59:57.454492 env[1162]: time="2025-05-15T08:59:57.454209942Z" level=error msg="Failed to pipe stdout of container \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\"" error="reading from a closed fifo"
May 15 08:59:57.454645 env[1162]: time="2025-05-15T08:59:57.454242905Z" level=error msg="Failed to pipe stderr of container \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\"" error="reading from a closed fifo"
May 15 08:59:57.457956 env[1162]: time="2025-05-15T08:59:57.457908273Z" level=error msg="StartContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 15 08:59:57.458325 kubelet[1899]: E0515 08:59:57.458270 1899 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74"
May 15 08:59:57.459455 kubelet[1899]: E0515 08:59:57.458683 1899 kuberuntime_manager.go:1341] "Unhandled Error" err=<
May 15 08:59:57.459455 kubelet[1899]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 15 08:59:57.459455 kubelet[1899]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 15 08:59:57.459455 kubelet[1899]: rm /hostbin/cilium-mount
May 15 08:59:57.459623 kubelet[1899]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7pqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-ltn2v_kube-system(611cc426-7d64-402c-a27a-9102a0fe32d7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 15 08:59:57.459623 kubelet[1899]: > logger="UnhandledError"
May 15 08:59:57.460773 kubelet[1899]: E0515 08:59:57.460619 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ltn2v" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7"
May 15 08:59:57.998378 env[1162]: time="2025-05-15T08:59:57.998266916Z" level=info msg="CreateContainer within sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
May 15 08:59:58.034652 env[1162]: time="2025-05-15T08:59:58.034508918Z" level=info msg="CreateContainer within sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\""
May 15 08:59:58.040345 env[1162]: time="2025-05-15T08:59:58.040267313Z" level=info msg="StartContainer for \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\""
May 15 08:59:58.098070 systemd[1]: run-containerd-runc-k8s.io-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113-runc.KDJqkN.mount: Deactivated successfully.
May 15 08:59:58.106932 systemd[1]: Started cri-containerd-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113.scope.
May 15 08:59:58.116075 systemd[1]: cri-containerd-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113.scope: Deactivated successfully.
May 15 08:59:58.116331 systemd[1]: Stopped cri-containerd-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113.scope.
May 15 08:59:58.130521 env[1162]: time="2025-05-15T08:59:58.130465297Z" level=info msg="shim disconnected" id=32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113
May 15 08:59:58.130769 env[1162]: time="2025-05-15T08:59:58.130746507Z" level=warning msg="cleaning up after shim disconnected" id=32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113 namespace=k8s.io
May 15 08:59:58.130873 env[1162]: time="2025-05-15T08:59:58.130854921Z" level=info msg="cleaning up dead shim"
May 15 08:59:58.138893 env[1162]: time="2025-05-15T08:59:58.138857345Z" level=warning msg="cleanup warnings time=\"2025-05-15T08:59:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3720 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T08:59:58Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
May 15 08:59:58.139266 env[1162]: time="2025-05-15T08:59:58.139218937Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
May 15 08:59:58.140537 env[1162]: time="2025-05-15T08:59:58.140212309Z" level=error msg="Failed to pipe stdout of container \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\"" error="reading from a closed fifo"
May 15 08:59:58.140643 env[1162]: time="2025-05-15T08:59:58.140495303Z" level=error msg="Failed to pipe stderr of container \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\"" error="reading from a closed fifo"
May 15 08:59:58.144847 env[1162]: time="2025-05-15T08:59:58.144810626Z" level=error msg="StartContainer for \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
May 15 08:59:58.145229 kubelet[1899]: E0515 08:59:58.145156 1899 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113"
May 15 08:59:58.145601 kubelet[1899]: E0515 08:59:58.145351 1899 kuberuntime_manager.go:1341] "Unhandled Error" err=<
May 15 08:59:58.145601 kubelet[1899]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
May 15 08:59:58.145601 kubelet[1899]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
May 15 08:59:58.145601 kubelet[1899]: rm /hostbin/cilium-mount
May 15 08:59:58.145601 kubelet[1899]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p7pqp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-ltn2v_kube-system(611cc426-7d64-402c-a27a-9102a0fe32d7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
May 15 08:59:58.145601 kubelet[1899]: > logger="UnhandledError"
May 15 08:59:58.147047 kubelet[1899]: E0515 08:59:58.146939 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ltn2v" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7"
May 15 08:59:58.436825 sshd[3612]: Accepted publickey for core from 172.24.4.1 port 55310 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40
May 15 08:59:58.440268 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 08:59:58.451759 systemd-logind[1148]: New session 23 of user core.
May 15 08:59:58.452731 systemd[1]: Started session-23.scope.
May 15 08:59:58.996223 kubelet[1899]: I0515 08:59:58.996192 1899 scope.go:117] "RemoveContainer" containerID="ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74"
May 15 08:59:58.997650 env[1162]: time="2025-05-15T08:59:58.997291468Z" level=info msg="RemoveContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\""
May 15 08:59:58.999239 kubelet[1899]: I0515 08:59:58.998116 1899 scope.go:117] "RemoveContainer" containerID="ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74"
May 15 08:59:59.004037 env[1162]: time="2025-05-15T08:59:59.003995784Z" level=info msg="RemoveContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\" returns successfully"
May 15 08:59:59.008032 env[1162]: time="2025-05-15T08:59:59.007254106Z" level=info msg="RemoveContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\""
May 15 08:59:59.008032 env[1162]: time="2025-05-15T08:59:59.007378000Z" level=info msg="RemoveContainer for \"ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74\" returns successfully"
May 15 08:59:59.012594 kubelet[1899]: E0515 08:59:59.009618 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-ltn2v_kube-system(611cc426-7d64-402c-a27a-9102a0fe32d7)\"" pod="kube-system/cilium-ltn2v" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7"
May 15 08:59:59.070748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113-rootfs.mount: Deactivated successfully.
May 15 08:59:59.316816 sshd[3612]: pam_unix(sshd:session): session closed for user core
May 15 08:59:59.326785 systemd[1]: sshd@22-172.24.4.191:22-172.24.4.1:55310.service: Deactivated successfully.
May 15 08:59:59.329001 systemd[1]: session-23.scope: Deactivated successfully.
May 15 08:59:59.333074 systemd-logind[1148]: Session 23 logged out. Waiting for processes to exit.
May 15 08:59:59.341622 systemd[1]: Started sshd@23-172.24.4.191:22-172.24.4.1:55312.service.
May 15 08:59:59.357552 systemd-logind[1148]: Removed session 23.
May 15 08:59:59.581776 kubelet[1899]: E0515 08:59:59.581087 1899 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 09:00:00.009599 env[1162]: time="2025-05-15T09:00:00.009299770Z" level=info msg="StopPodSandbox for \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\""
May 15 09:00:00.022872 env[1162]: time="2025-05-15T09:00:00.009836973Z" level=info msg="Container to stop \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:00:00.019914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702-shm.mount: Deactivated successfully.
May 15 09:00:00.024721 systemd[1]: cri-containerd-10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702.scope: Deactivated successfully. May 15 09:00:00.104877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702-rootfs.mount: Deactivated successfully. May 15 09:00:00.111945 env[1162]: time="2025-05-15T09:00:00.111830604Z" level=info msg="shim disconnected" id=10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702 May 15 09:00:00.112072 env[1162]: time="2025-05-15T09:00:00.111952273Z" level=warning msg="cleaning up after shim disconnected" id=10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702 namespace=k8s.io May 15 09:00:00.112072 env[1162]: time="2025-05-15T09:00:00.111981568Z" level=info msg="cleaning up dead shim" May 15 09:00:00.125883 env[1162]: time="2025-05-15T09:00:00.125700307Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:00:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3762 runtime=io.containerd.runc.v2\n" May 15 09:00:00.126709 env[1162]: time="2025-05-15T09:00:00.126640159Z" level=info msg="TearDown network for sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" successfully" May 15 09:00:00.126772 env[1162]: time="2025-05-15T09:00:00.126733866Z" level=info msg="StopPodSandbox for \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" returns successfully" May 15 09:00:00.274364 kubelet[1899]: I0515 09:00:00.274247 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-clustermesh-secrets\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274364 kubelet[1899]: I0515 09:00:00.274292 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-cgroup\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274364 kubelet[1899]: I0515 09:00:00.274323 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-kernel\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274364 kubelet[1899]: I0515 09:00:00.274360 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-net\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274402 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-config-path\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274439 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-lib-modules\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274470 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-ipsec-secrets\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 
kubelet[1899]: I0515 09:00:00.274492 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-hostproc\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274521 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-run\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274538 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-etc-cni-netd\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274562 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cni-path\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274580 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-bpf-maps\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274612 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7pqp\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-kube-api-access-p7pqp\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" 
(UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274631 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-xtables-lock\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.274661 kubelet[1899]: I0515 09:00:00.274649 1899 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-hubble-tls\") pod \"611cc426-7d64-402c-a27a-9102a0fe32d7\" (UID: \"611cc426-7d64-402c-a27a-9102a0fe32d7\") " May 15 09:00:00.277528 kubelet[1899]: I0515 09:00:00.277419 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-hostproc" (OuterVolumeSpecName: "hostproc") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.277629 kubelet[1899]: I0515 09:00:00.277535 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.277629 kubelet[1899]: I0515 09:00:00.277558 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.277629 kubelet[1899]: I0515 09:00:00.277581 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cni-path" (OuterVolumeSpecName: "cni-path") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.277629 kubelet[1899]: I0515 09:00:00.277601 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.279410 kubelet[1899]: I0515 09:00:00.279372 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.279873 kubelet[1899]: I0515 09:00:00.279846 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.279953 kubelet[1899]: I0515 09:00:00.279880 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.279953 kubelet[1899]: I0515 09:00:00.279898 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.282438 kubelet[1899]: I0515 09:00:00.282384 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:00:00.284698 systemd[1]: var-lib-kubelet-pods-611cc426\x2d7d64\x2d402c\x2da27a\x2d9102a0fe32d7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 09:00:00.287262 kubelet[1899]: I0515 09:00:00.287218 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 09:00:00.287391 kubelet[1899]: I0515 09:00:00.287366 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 09:00:00.289667 systemd[1]: var-lib-kubelet-pods-611cc426\x2d7d64\x2d402c\x2da27a\x2d9102a0fe32d7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 15 09:00:00.292502 kubelet[1899]: I0515 09:00:00.290897 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 09:00:00.292678 systemd[1]: var-lib-kubelet-pods-611cc426\x2d7d64\x2d402c\x2da27a\x2d9102a0fe32d7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 09:00:00.293860 kubelet[1899]: I0515 09:00:00.293824 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 09:00:00.297058 systemd[1]: var-lib-kubelet-pods-611cc426\x2d7d64\x2d402c\x2da27a\x2d9102a0fe32d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7pqp.mount: Deactivated successfully. 
May 15 09:00:00.298269 kubelet[1899]: I0515 09:00:00.298224 1899 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-kube-api-access-p7pqp" (OuterVolumeSpecName: "kube-api-access-p7pqp") pod "611cc426-7d64-402c-a27a-9102a0fe32d7" (UID: "611cc426-7d64-402c-a27a-9102a0fe32d7"). InnerVolumeSpecName "kube-api-access-p7pqp". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 09:00:00.336322 systemd[1]: Removed slice kubepods-burstable-pod611cc426_7d64_402c_a27a_9102a0fe32d7.slice. May 15 09:00:00.375855 kubelet[1899]: I0515 09:00:00.375799 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-kernel\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.376165 kubelet[1899]: I0515 09:00:00.376127 1899 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-host-proc-sys-net\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.376375 kubelet[1899]: I0515 09:00:00.376339 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-cgroup\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.376649 kubelet[1899]: I0515 09:00:00.376613 1899 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-lib-modules\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.376910 kubelet[1899]: I0515 09:00:00.376876 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-ipsec-secrets\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.377121 kubelet[1899]: I0515 09:00:00.377089 1899 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-hostproc\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.377338 kubelet[1899]: I0515 09:00:00.377305 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-config-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.377576 kubelet[1899]: I0515 09:00:00.377542 1899 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cilium-run\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.377827 kubelet[1899]: I0515 09:00:00.377792 1899 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-etc-cni-netd\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.378045 kubelet[1899]: I0515 09:00:00.378012 1899 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-bpf-maps\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.378255 kubelet[1899]: I0515 09:00:00.378225 1899 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-cni-path\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.378513 kubelet[1899]: I0515 09:00:00.378477 1899 reconciler_common.go:299] "Volume detached for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/611cc426-7d64-402c-a27a-9102a0fe32d7-xtables-lock\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.378794 kubelet[1899]: I0515 09:00:00.378756 1899 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7pqp\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-kube-api-access-p7pqp\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.379010 kubelet[1899]: I0515 09:00:00.378958 1899 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/611cc426-7d64-402c-a27a-9102a0fe32d7-hubble-tls\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.379199 kubelet[1899]: I0515 09:00:00.379169 1899 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/611cc426-7d64-402c-a27a-9102a0fe32d7-clustermesh-secrets\") on node \"ci-3510-3-7-n-fb2247adc4.novalocal\" DevicePath \"\"" May 15 09:00:00.552620 kubelet[1899]: W0515 09:00:00.552490 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod611cc426_7d64_402c_a27a_9102a0fe32d7.slice/cri-containerd-ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74.scope WatchSource:0}: container "ffeeaa35895c265ec1f7c2ade87d457769f3ccdf6466d57e5fd78ce22a95bb74" in namespace "k8s.io": not found May 15 09:00:00.680973 sshd[3741]: Accepted publickey for core from 172.24.4.1 port 55312 ssh2: RSA SHA256:1/SkRw3PH5oh/+o3gl3TCDC6ELETrVd474qGk5scK40 May 15 09:00:00.684234 sshd[3741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 09:00:00.694560 systemd-logind[1148]: New session 24 of user core. May 15 09:00:00.696806 systemd[1]: Started session-24.scope. 
May 15 09:00:01.016696 kubelet[1899]: I0515 09:00:01.015753 1899 scope.go:117] "RemoveContainer" containerID="32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113" May 15 09:00:01.025539 env[1162]: time="2025-05-15T09:00:01.025405150Z" level=info msg="RemoveContainer for \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\"" May 15 09:00:01.080062 env[1162]: time="2025-05-15T09:00:01.079987335Z" level=info msg="RemoveContainer for \"32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113\" returns successfully" May 15 09:00:01.131065 kubelet[1899]: I0515 09:00:01.130995 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7" containerName="mount-cgroup" May 15 09:00:01.131490 kubelet[1899]: I0515 09:00:01.131398 1899 memory_manager.go:355] "RemoveStaleState removing state" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7" containerName="mount-cgroup" May 15 09:00:01.148941 systemd[1]: Created slice kubepods-burstable-pod0ca85a07_160c_4722_a80d_dabf30ff4572.slice. 
May 15 09:00:01.186149 kubelet[1899]: I0515 09:00:01.186081 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-cilium-run\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.186415 kubelet[1899]: I0515 09:00:01.186382 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-hostproc\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.186567 kubelet[1899]: I0515 09:00:01.186549 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-etc-cni-netd\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.186700 kubelet[1899]: I0515 09:00:01.186672 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0ca85a07-160c-4722-a80d-dabf30ff4572-cilium-ipsec-secrets\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.186845 kubelet[1899]: I0515 09:00:01.186827 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-xtables-lock\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.186993 kubelet[1899]: I0515 09:00:01.186960 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-host-proc-sys-net\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.187112 kubelet[1899]: I0515 09:00:01.187096 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-cni-path\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.187231 kubelet[1899]: I0515 09:00:01.187215 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ca85a07-160c-4722-a80d-dabf30ff4572-cilium-config-path\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.187354 kubelet[1899]: I0515 09:00:01.187338 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ca85a07-160c-4722-a80d-dabf30ff4572-hubble-tls\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.187521 kubelet[1899]: I0515 09:00:01.187494 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-cilium-cgroup\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975" May 15 09:00:01.187660 kubelet[1899]: I0515 09:00:01.187643 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/0ca85a07-160c-4722-a80d-dabf30ff4572-clustermesh-secrets\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975"
May 15 09:00:01.187783 kubelet[1899]: I0515 09:00:01.187766 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-host-proc-sys-kernel\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975"
May 15 09:00:01.187899 kubelet[1899]: I0515 09:00:01.187883 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-lib-modules\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975"
May 15 09:00:01.188013 kubelet[1899]: I0515 09:00:01.187997 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr285\" (UniqueName: \"kubernetes.io/projected/0ca85a07-160c-4722-a80d-dabf30ff4572-kube-api-access-nr285\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975"
May 15 09:00:01.188120 kubelet[1899]: I0515 09:00:01.188105 1899 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ca85a07-160c-4722-a80d-dabf30ff4572-bpf-maps\") pod \"cilium-sb975\" (UID: \"0ca85a07-160c-4722-a80d-dabf30ff4572\") " pod="kube-system/cilium-sb975"
May 15 09:00:01.454733 env[1162]: time="2025-05-15T09:00:01.454635368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sb975,Uid:0ca85a07-160c-4722-a80d-dabf30ff4572,Namespace:kube-system,Attempt:0,}"
May 15 09:00:01.654080 env[1162]: time="2025-05-15T09:00:01.653892789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 09:00:01.654532 env[1162]: time="2025-05-15T09:00:01.653985654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 09:00:01.654532 env[1162]: time="2025-05-15T09:00:01.654081735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:00:01.656569 env[1162]: time="2025-05-15T09:00:01.654933350Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f pid=3797 runtime=io.containerd.runc.v2
May 15 09:00:01.680038 systemd[1]: Started cri-containerd-019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f.scope.
May 15 09:00:01.731079 env[1162]: time="2025-05-15T09:00:01.730623111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sb975,Uid:0ca85a07-160c-4722-a80d-dabf30ff4572,Namespace:kube-system,Attempt:0,} returns sandbox id \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\""
May 15 09:00:01.738505 env[1162]: time="2025-05-15T09:00:01.738462979Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 09:00:01.813710 env[1162]: time="2025-05-15T09:00:01.813638000Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58\""
May 15 09:00:01.815208 env[1162]: time="2025-05-15T09:00:01.815175148Z" level=info msg="StartContainer for \"ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58\""
May 15 09:00:01.848893 systemd[1]: Started cri-containerd-ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58.scope.
May 15 09:00:01.914416 systemd[1]: cri-containerd-ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58.scope: Deactivated successfully.
May 15 09:00:01.980575 kubelet[1899]: I0515 09:00:01.980487 1899 setters.go:602] "Node became not ready" node="ci-3510-3-7-n-fb2247adc4.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T09:00:01Z","lastTransitionTime":"2025-05-15T09:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 09:00:02.091729 env[1162]: time="2025-05-15T09:00:02.091658627Z" level=info msg="StartContainer for \"ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58\" returns successfully"
May 15 09:00:02.169157 env[1162]: time="2025-05-15T09:00:02.169108957Z" level=info msg="shim disconnected" id=ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58
May 15 09:00:02.169393 env[1162]: time="2025-05-15T09:00:02.169373697Z" level=warning msg="cleaning up after shim disconnected" id=ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58 namespace=k8s.io
May 15 09:00:02.169555 env[1162]: time="2025-05-15T09:00:02.169536643Z" level=info msg="cleaning up dead shim"
May 15 09:00:02.185442 env[1162]: time="2025-05-15T09:00:02.185347366Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:00:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3884 runtime=io.containerd.runc.v2\n"
May 15 09:00:02.334735 kubelet[1899]: I0515 09:00:02.334663 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611cc426-7d64-402c-a27a-9102a0fe32d7" path="/var/lib/kubelet/pods/611cc426-7d64-402c-a27a-9102a0fe32d7/volumes"
May 15 09:00:03.146225 env[1162]: time="2025-05-15T09:00:03.146042956Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 09:00:03.183447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333713333.mount: Deactivated successfully.
May 15 09:00:03.189745 env[1162]: time="2025-05-15T09:00:03.189590653Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348\""
May 15 09:00:03.196372 env[1162]: time="2025-05-15T09:00:03.193783837Z" level=info msg="StartContainer for \"3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348\""
May 15 09:00:03.226788 systemd[1]: Started cri-containerd-3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348.scope.
May 15 09:00:03.277546 env[1162]: time="2025-05-15T09:00:03.277469763Z" level=info msg="StartContainer for \"3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348\" returns successfully"
May 15 09:00:03.297776 systemd[1]: run-containerd-runc-k8s.io-3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348-runc.JyguXt.mount: Deactivated successfully.
May 15 09:00:03.312356 systemd[1]: cri-containerd-3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348.scope: Deactivated successfully.
May 15 09:00:03.334519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348-rootfs.mount: Deactivated successfully.
May 15 09:00:03.344042 env[1162]: time="2025-05-15T09:00:03.343996825Z" level=info msg="shim disconnected" id=3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348
May 15 09:00:03.344317 env[1162]: time="2025-05-15T09:00:03.344293374Z" level=warning msg="cleaning up after shim disconnected" id=3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348 namespace=k8s.io
May 15 09:00:03.344459 env[1162]: time="2025-05-15T09:00:03.344410655Z" level=info msg="cleaning up dead shim"
May 15 09:00:03.358779 env[1162]: time="2025-05-15T09:00:03.358747069Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:00:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3944 runtime=io.containerd.runc.v2\n"
May 15 09:00:03.666011 kubelet[1899]: W0515 09:00:03.665888 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod611cc426_7d64_402c_a27a_9102a0fe32d7.slice/cri-containerd-32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113.scope WatchSource:0}: container "32ece79be8a5cb410abae12a3acd62c7ed0667be1abedb94a77f48665cef8113" in namespace "k8s.io": not found
May 15 09:00:04.146048 env[1162]: time="2025-05-15T09:00:04.145837762Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 09:00:04.200704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434718170.mount: Deactivated successfully.
May 15 09:00:04.205003 env[1162]: time="2025-05-15T09:00:04.204872433Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8\""
May 15 09:00:04.208782 env[1162]: time="2025-05-15T09:00:04.208697904Z" level=info msg="StartContainer for \"d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8\""
May 15 09:00:04.240945 systemd[1]: Started cri-containerd-d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8.scope.
May 15 09:00:04.279835 env[1162]: time="2025-05-15T09:00:04.279778760Z" level=info msg="StartContainer for \"d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8\" returns successfully"
May 15 09:00:04.280278 systemd[1]: cri-containerd-d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8.scope: Deactivated successfully.
May 15 09:00:04.313585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8-rootfs.mount: Deactivated successfully.
May 15 09:00:04.325065 env[1162]: time="2025-05-15T09:00:04.325008489Z" level=info msg="shim disconnected" id=d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8
May 15 09:00:04.325382 env[1162]: time="2025-05-15T09:00:04.325352227Z" level=warning msg="cleaning up after shim disconnected" id=d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8 namespace=k8s.io
May 15 09:00:04.325497 env[1162]: time="2025-05-15T09:00:04.325480268Z" level=info msg="cleaning up dead shim"
May 15 09:00:04.340689 env[1162]: time="2025-05-15T09:00:04.340615247Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:00:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4004 runtime=io.containerd.runc.v2\n"
May 15 09:00:04.584302 kubelet[1899]: E0515 09:00:04.584185 1899 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 09:00:05.161531 env[1162]: time="2025-05-15T09:00:05.161269610Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 09:00:05.220878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140621548.mount: Deactivated successfully.
May 15 09:00:05.229185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716117921.mount: Deactivated successfully.
May 15 09:00:05.240829 env[1162]: time="2025-05-15T09:00:05.240783596Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4\""
May 15 09:00:05.245930 env[1162]: time="2025-05-15T09:00:05.244775330Z" level=info msg="StartContainer for \"09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4\""
May 15 09:00:05.265190 systemd[1]: Started cri-containerd-09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4.scope.
May 15 09:00:05.308610 systemd[1]: cri-containerd-09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4.scope: Deactivated successfully.
May 15 09:00:05.321668 env[1162]: time="2025-05-15T09:00:05.321602059Z" level=info msg="StartContainer for \"09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4\" returns successfully"
May 15 09:00:05.342001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4-rootfs.mount: Deactivated successfully.
May 15 09:00:05.351937 env[1162]: time="2025-05-15T09:00:05.351825220Z" level=info msg="shim disconnected" id=09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4
May 15 09:00:05.351937 env[1162]: time="2025-05-15T09:00:05.351917685Z" level=warning msg="cleaning up after shim disconnected" id=09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4 namespace=k8s.io
May 15 09:00:05.352094 env[1162]: time="2025-05-15T09:00:05.351930799Z" level=info msg="cleaning up dead shim"
May 15 09:00:05.364086 env[1162]: time="2025-05-15T09:00:05.364046979Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:00:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n"
May 15 09:00:06.173961 env[1162]: time="2025-05-15T09:00:06.173762264Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 09:00:06.216800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386834135.mount: Deactivated successfully.
May 15 09:00:06.229802 env[1162]: time="2025-05-15T09:00:06.229681975Z" level=info msg="CreateContainer within sandbox \"019eefdf83c6e13ae76f51a7d67fba495bc08badb88240bffe952587fd6d1f2f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b\""
May 15 09:00:06.239273 env[1162]: time="2025-05-15T09:00:06.239189446Z" level=info msg="StartContainer for \"194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b\""
May 15 09:00:06.278280 systemd[1]: Started cri-containerd-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b.scope.
May 15 09:00:06.332158 env[1162]: time="2025-05-15T09:00:06.332113670Z" level=info msg="StartContainer for \"194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b\" returns successfully"
May 15 09:00:06.366297 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.LY5Q93.mount: Deactivated successfully.
May 15 09:00:06.786617 kubelet[1899]: W0515 09:00:06.786538 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca85a07_160c_4722_a80d_dabf30ff4572.slice/cri-containerd-ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58.scope WatchSource:0}: task ef863832b48be8eef50ad8d7674c980656197061327508eba75916d3268c1f58 not found: not found
May 15 09:00:06.830480 kernel: cryptd: max_cpu_qlen set to 1000
May 15 09:00:06.919541 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 15 09:00:07.773543 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.vyXrtx.mount: Deactivated successfully.
May 15 09:00:09.899677 kubelet[1899]: W0515 09:00:09.899541 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca85a07_160c_4722_a80d_dabf30ff4572.slice/cri-containerd-3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348.scope WatchSource:0}: task 3a91c5f02bd86f213ed66c3097629f1c36df0bd8babb6fb99451cd8250d41348 not found: not found
May 15 09:00:09.971772 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.c1tWfF.mount: Deactivated successfully.
May 15 09:00:10.541968 systemd-networkd[990]: lxc_health: Link UP
May 15 09:00:10.550456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 15 09:00:10.550909 systemd-networkd[990]: lxc_health: Gained carrier
May 15 09:00:11.521560 kubelet[1899]: I0515 09:00:11.521367 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sb975" podStartSLOduration=10.521309395 podStartE2EDuration="10.521309395s" podCreationTimestamp="2025-05-15 09:00:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:00:07.232746751 +0000 UTC m=+353.508968027" watchObservedRunningTime="2025-05-15 09:00:11.521309395 +0000 UTC m=+357.797530661"
May 15 09:00:11.692708 systemd-networkd[990]: lxc_health: Gained IPv6LL
May 15 09:00:12.242991 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.RXQ97A.mount: Deactivated successfully.
May 15 09:00:12.333743 kubelet[1899]: E0515 09:00:12.333568 1899 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:46466->127.0.0.1:39871: read tcp 127.0.0.1:46466->127.0.0.1:39871: read: connection reset by peer
May 15 09:00:13.023029 kubelet[1899]: W0515 09:00:13.022401 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca85a07_160c_4722_a80d_dabf30ff4572.slice/cri-containerd-d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8.scope WatchSource:0}: task d327ec64379dedd1abc13e6e1e19b343b0f5be8bc426cbd32b6175b3ab3904d8 not found: not found
May 15 09:00:14.286207 env[1162]: time="2025-05-15T09:00:14.286073167Z" level=info msg="StopPodSandbox for \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\""
May 15 09:00:14.286697 env[1162]: time="2025-05-15T09:00:14.286483080Z" level=info msg="TearDown network for sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" successfully"
May 15 09:00:14.286697 env[1162]: time="2025-05-15T09:00:14.286612594Z" level=info msg="StopPodSandbox for \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" returns successfully"
May 15 09:00:14.288133 env[1162]: time="2025-05-15T09:00:14.288031759Z" level=info msg="RemovePodSandbox for \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\""
May 15 09:00:14.288323 env[1162]: time="2025-05-15T09:00:14.288149121Z" level=info msg="Forcibly stopping sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\""
May 15 09:00:14.288625 env[1162]: time="2025-05-15T09:00:14.288532333Z" level=info msg="TearDown network for sandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" successfully"
May 15 09:00:14.297894 env[1162]: time="2025-05-15T09:00:14.297732176Z" level=info msg="RemovePodSandbox \"787eb88db14d9d974096182afdfb685e8e65ede689f765646e54805047354d6b\" returns successfully"
May 15 09:00:14.298955 env[1162]: time="2025-05-15T09:00:14.298696905Z" level=info msg="StopPodSandbox for \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\""
May 15 09:00:14.298955 env[1162]: time="2025-05-15T09:00:14.298828703Z" level=info msg="TearDown network for sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" successfully"
May 15 09:00:14.298955 env[1162]: time="2025-05-15T09:00:14.298872656Z" level=info msg="StopPodSandbox for \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" returns successfully"
May 15 09:00:14.301111 env[1162]: time="2025-05-15T09:00:14.299591481Z" level=info msg="RemovePodSandbox for \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\""
May 15 09:00:14.301111 env[1162]: time="2025-05-15T09:00:14.299622199Z" level=info msg="Forcibly stopping sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\""
May 15 09:00:14.301111 env[1162]: time="2025-05-15T09:00:14.299705436Z" level=info msg="TearDown network for sandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" successfully"
May 15 09:00:14.303886 env[1162]: time="2025-05-15T09:00:14.303849027Z" level=info msg="RemovePodSandbox \"942d72f7d0b969112417b334f480c98b363771928a35f9b823520f33b73e4663\" returns successfully"
May 15 09:00:14.304636 env[1162]: time="2025-05-15T09:00:14.304606194Z" level=info msg="StopPodSandbox for \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\""
May 15 09:00:14.304851 env[1162]: time="2025-05-15T09:00:14.304805550Z" level=info msg="TearDown network for sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" successfully"
May 15 09:00:14.304937 env[1162]: time="2025-05-15T09:00:14.304916029Z" level=info msg="StopPodSandbox for \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" returns successfully"
May 15 09:00:14.305996 env[1162]: time="2025-05-15T09:00:14.305921905Z" level=info msg="RemovePodSandbox for \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\""
May 15 09:00:14.306161 env[1162]: time="2025-05-15T09:00:14.306035189Z" level=info msg="Forcibly stopping sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\""
May 15 09:00:14.306528 env[1162]: time="2025-05-15T09:00:14.306417219Z" level=info msg="TearDown network for sandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" successfully"
May 15 09:00:14.314735 env[1162]: time="2025-05-15T09:00:14.314629861Z" level=info msg="RemovePodSandbox \"10d4036de6e5422373ca8cbfa247fd737544d5875818006eda239e7f44c94702\" returns successfully"
May 15 09:00:14.469343 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.Uderpb.mount: Deactivated successfully.
May 15 09:00:16.135101 kubelet[1899]: W0515 09:00:16.134987 1899 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ca85a07_160c_4722_a80d_dabf30ff4572.slice/cri-containerd-09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4.scope WatchSource:0}: task 09308974dede4b9e018e53d1b6df21ab64c4c1d450e69176aa29874eb4f9a1a4 not found: not found
May 15 09:00:16.788448 systemd[1]: run-containerd-runc-k8s.io-194e294fb5fc449e06878def1811ea7b74d3a6e4adcf92f893c7934a16bc860b-runc.t2FZpq.mount: Deactivated successfully.
May 15 09:00:17.100480 sshd[3741]: pam_unix(sshd:session): session closed for user core
May 15 09:00:17.111335 systemd-logind[1148]: Session 24 logged out. Waiting for processes to exit.
May 15 09:00:17.114786 systemd[1]: sshd@23-172.24.4.191:22-172.24.4.1:55312.service: Deactivated successfully.
May 15 09:00:17.117247 systemd[1]: session-24.scope: Deactivated successfully.
May 15 09:00:17.123213 systemd-logind[1148]: Removed session 24.