Mar 18 08:53:48.809622 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 18 08:53:48.809646 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 18 08:53:48.809655 kernel: BIOS-provided physical RAM map:
Mar 18 08:53:48.809665 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 18 08:53:48.809672 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 18 08:53:48.809679 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 18 08:53:48.809687 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 18 08:53:48.809694 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 18 08:53:48.809700 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 18 08:53:48.809707 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 18 08:53:48.809713 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 18 08:53:48.809720 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 18 08:53:48.809728 kernel: NX (Execute Disable) protection: active
Mar 18 08:53:48.809734 kernel: SMBIOS 3.0.0 present.
Mar 18 08:53:48.809743 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 18 08:53:48.809750 kernel: Hypervisor detected: KVM
Mar 18 08:53:48.809757 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 18 08:53:48.809764 kernel: kvm-clock: cpu 0, msr 6319a001, primary cpu clock
Mar 18 08:53:48.809772 kernel: kvm-clock: using sched offset of 3954402056 cycles
Mar 18 08:53:48.809780 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 18 08:53:48.809787 kernel: tsc: Detected 1996.249 MHz processor
Mar 18 08:53:48.809795 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 18 08:53:48.809803 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 18 08:53:48.809810 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 18 08:53:48.809818 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 18 08:53:48.809825 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 18 08:53:48.809833 kernel: ACPI: Early table checksum verification disabled
Mar 18 08:53:48.809841 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 18 08:53:48.809849 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 08:53:48.809856 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 08:53:48.809864 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 08:53:48.809871 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 18 08:53:48.809878 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 08:53:48.809886 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 18 08:53:48.809893 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 18 08:53:48.809902 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 18 08:53:48.809909 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 18 08:53:48.809916 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 18 08:53:48.809923 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 18 08:53:48.809931 kernel: No NUMA configuration found
Mar 18 08:53:48.809941 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 18 08:53:48.809949 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 18 08:53:48.809957 kernel: Zone ranges:
Mar 18 08:53:48.809965 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 18 08:53:48.809973 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 18 08:53:48.809980 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 18 08:53:48.809988 kernel: Movable zone start for each node
Mar 18 08:53:48.809995 kernel: Early memory node ranges
Mar 18 08:53:48.810003 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 18 08:53:48.810010 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 18 08:53:48.810019 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 18 08:53:48.810027 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 18 08:53:48.810034 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 18 08:53:48.810042 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 18 08:53:48.810050 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 18 08:53:48.810057 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 18 08:53:48.810065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 18 08:53:48.810072 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 18 08:53:48.810080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 18 08:53:48.810089 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 18 08:53:48.810097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 18 08:53:48.810104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 18 08:53:48.810112 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 18 08:53:48.810119 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 18 08:53:48.810127 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 18 08:53:48.810134 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 18 08:53:48.810142 kernel: Booting paravirtualized kernel on KVM
Mar 18 08:53:48.810150 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 18 08:53:48.810159 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 18 08:53:48.810166 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 18 08:53:48.810174 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 18 08:53:48.810181 kernel: pcpu-alloc: [0] 0 1
Mar 18 08:53:48.810189 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0
Mar 18 08:53:48.810196 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 18 08:53:48.810204 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 18 08:53:48.810211 kernel: Policy zone: Normal
Mar 18 08:53:48.810220 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 18 08:53:48.810230 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 18 08:53:48.810238 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 18 08:53:48.810245 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 18 08:53:48.810253 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 18 08:53:48.810261 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 225236K reserved, 0K cma-reserved)
Mar 18 08:53:48.810269 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 18 08:53:48.810276 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 18 08:53:48.810284 kernel: ftrace: allocated 136 pages with 2 groups
Mar 18 08:53:48.810293 kernel: rcu: Hierarchical RCU implementation.
Mar 18 08:53:48.810301 kernel: rcu: RCU event tracing is enabled.
Mar 18 08:53:48.810309 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 18 08:53:48.810316 kernel: Rude variant of Tasks RCU enabled.
Mar 18 08:53:48.810324 kernel: Tracing variant of Tasks RCU enabled.
Mar 18 08:53:48.810332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 18 08:53:48.810340 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 18 08:53:48.810347 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 18 08:53:48.810355 kernel: Console: colour VGA+ 80x25
Mar 18 08:53:48.810364 kernel: printk: console [tty0] enabled
Mar 18 08:53:48.810371 kernel: printk: console [ttyS0] enabled
Mar 18 08:53:48.810379 kernel: ACPI: Core revision 20210730
Mar 18 08:53:48.810386 kernel: APIC: Switch to symmetric I/O mode setup
Mar 18 08:53:48.810394 kernel: x2apic enabled
Mar 18 08:53:48.810401 kernel: Switched APIC routing to physical x2apic.
Mar 18 08:53:48.810409 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 18 08:53:48.810417 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 18 08:53:48.810425 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 18 08:53:48.810434 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 18 08:53:48.810441 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 18 08:53:48.810449 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 18 08:53:48.810457 kernel: Spectre V2 : Mitigation: Retpolines
Mar 18 08:53:48.810464 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 18 08:53:48.810472 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 18 08:53:48.810480 kernel: Speculative Store Bypass: Vulnerable
Mar 18 08:53:48.810487 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 18 08:53:48.810495 kernel: Freeing SMP alternatives memory: 32K
Mar 18 08:53:48.810503 kernel: pid_max: default: 32768 minimum: 301
Mar 18 08:53:48.810511 kernel: LSM: Security Framework initializing
Mar 18 08:53:48.810519 kernel: SELinux: Initializing.
Mar 18 08:53:48.810526 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 18 08:53:48.810534 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 18 08:53:48.810542 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 18 08:53:48.810555 kernel: Performance Events: AMD PMU driver.
Mar 18 08:53:48.812613 kernel: ... version: 0
Mar 18 08:53:48.812626 kernel: ... bit width: 48
Mar 18 08:53:48.812634 kernel: ... generic registers: 4
Mar 18 08:53:48.812642 kernel: ... value mask: 0000ffffffffffff
Mar 18 08:53:48.812650 kernel: ... max period: 00007fffffffffff
Mar 18 08:53:48.812661 kernel: ... fixed-purpose events: 0
Mar 18 08:53:48.812669 kernel: ... event mask: 000000000000000f
Mar 18 08:53:48.812677 kernel: signal: max sigframe size: 1440
Mar 18 08:53:48.812685 kernel: rcu: Hierarchical SRCU implementation.
Mar 18 08:53:48.812693 kernel: smp: Bringing up secondary CPUs ...
Mar 18 08:53:48.812702 kernel: x86: Booting SMP configuration:
Mar 18 08:53:48.812710 kernel: .... node #0, CPUs: #1
Mar 18 08:53:48.812718 kernel: kvm-clock: cpu 1, msr 6319a041, secondary cpu clock
Mar 18 08:53:48.812726 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0
Mar 18 08:53:48.812734 kernel: smp: Brought up 1 node, 2 CPUs
Mar 18 08:53:48.812742 kernel: smpboot: Max logical packages: 2
Mar 18 08:53:48.812750 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 18 08:53:48.812758 kernel: devtmpfs: initialized
Mar 18 08:53:48.812766 kernel: x86/mm: Memory block size: 128MB
Mar 18 08:53:48.812775 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 18 08:53:48.812783 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 18 08:53:48.812791 kernel: pinctrl core: initialized pinctrl subsystem
Mar 18 08:53:48.812799 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 18 08:53:48.812807 kernel: audit: initializing netlink subsys (disabled)
Mar 18 08:53:48.812815 kernel: audit: type=2000 audit(1742288028.276:1): state=initialized audit_enabled=0 res=1
Mar 18 08:53:48.812823 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 18 08:53:48.812831 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 18 08:53:48.812838 kernel: cpuidle: using governor menu
Mar 18 08:53:48.812847 kernel: ACPI: bus type PCI registered
Mar 18 08:53:48.812855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 18 08:53:48.812863 kernel: dca service started, version 1.12.1
Mar 18 08:53:48.812871 kernel: PCI: Using configuration type 1 for base access
Mar 18 08:53:48.812879 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 18 08:53:48.812887 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 18 08:53:48.812895 kernel: ACPI: Added _OSI(Module Device)
Mar 18 08:53:48.812903 kernel: ACPI: Added _OSI(Processor Device)
Mar 18 08:53:48.812911 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 18 08:53:48.812920 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 18 08:53:48.812928 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 18 08:53:48.812936 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 18 08:53:48.812944 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 18 08:53:48.812952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 18 08:53:48.812960 kernel: ACPI: Interpreter enabled
Mar 18 08:53:48.812968 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 18 08:53:48.812976 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 18 08:53:48.812984 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 18 08:53:48.812993 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 18 08:53:48.813001 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 18 08:53:48.813127 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 18 08:53:48.813212 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 18 08:53:48.813224 kernel: acpiphp: Slot [3] registered
Mar 18 08:53:48.813232 kernel: acpiphp: Slot [4] registered
Mar 18 08:53:48.813240 kernel: acpiphp: Slot [5] registered
Mar 18 08:53:48.813248 kernel: acpiphp: Slot [6] registered
Mar 18 08:53:48.813259 kernel: acpiphp: Slot [7] registered
Mar 18 08:53:48.813267 kernel: acpiphp: Slot [8] registered
Mar 18 08:53:48.813275 kernel: acpiphp: Slot [9] registered
Mar 18 08:53:48.813282 kernel: acpiphp: Slot [10] registered
Mar 18 08:53:48.813290 kernel: acpiphp: Slot [11] registered
Mar 18 08:53:48.813298 kernel: acpiphp: Slot [12] registered
Mar 18 08:53:48.813306 kernel: acpiphp: Slot [13] registered
Mar 18 08:53:48.813313 kernel: acpiphp: Slot [14] registered
Mar 18 08:53:48.813321 kernel: acpiphp: Slot [15] registered
Mar 18 08:53:48.813331 kernel: acpiphp: Slot [16] registered
Mar 18 08:53:48.813338 kernel: acpiphp: Slot [17] registered
Mar 18 08:53:48.813346 kernel: acpiphp: Slot [18] registered
Mar 18 08:53:48.813354 kernel: acpiphp: Slot [19] registered
Mar 18 08:53:48.813362 kernel: acpiphp: Slot [20] registered
Mar 18 08:53:48.813369 kernel: acpiphp: Slot [21] registered
Mar 18 08:53:48.813377 kernel: acpiphp: Slot [22] registered
Mar 18 08:53:48.813385 kernel: acpiphp: Slot [23] registered
Mar 18 08:53:48.813392 kernel: acpiphp: Slot [24] registered
Mar 18 08:53:48.813400 kernel: acpiphp: Slot [25] registered
Mar 18 08:53:48.813409 kernel: acpiphp: Slot [26] registered
Mar 18 08:53:48.813417 kernel: acpiphp: Slot [27] registered
Mar 18 08:53:48.813425 kernel: acpiphp: Slot [28] registered
Mar 18 08:53:48.813433 kernel: acpiphp: Slot [29] registered
Mar 18 08:53:48.813440 kernel: acpiphp: Slot [30] registered
Mar 18 08:53:48.813448 kernel: acpiphp: Slot [31] registered
Mar 18 08:53:48.813456 kernel: PCI host bridge to bus 0000:00
Mar 18 08:53:48.813542 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 18 08:53:48.813656 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 18 08:53:48.813733 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 18 08:53:48.813805 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 18 08:53:48.813880 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 18 08:53:48.813954 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 18 08:53:48.814051 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 18 08:53:48.814144 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 18 08:53:48.814241 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 18 08:53:48.814324 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 18 08:53:48.814407 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 18 08:53:48.814490 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 18 08:53:48.817632 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 18 08:53:48.817724 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 18 08:53:48.817819 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 18 08:53:48.817899 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 18 08:53:48.817979 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 18 08:53:48.818066 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 18 08:53:48.818148 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 18 08:53:48.818234 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 18 08:53:48.818316 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 18 08:53:48.818401 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 18 08:53:48.818481 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 18 08:53:48.818593 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 18 08:53:48.818679 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 18 08:53:48.818761 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 18 08:53:48.818842 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 18 08:53:48.818924 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 18 08:53:48.819016 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 18 08:53:48.819100 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 18 08:53:48.819181 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 18 08:53:48.819262 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 18 08:53:48.819351 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 18 08:53:48.819433 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 18 08:53:48.819559 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 18 08:53:48.819671 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 18 08:53:48.819848 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 18 08:53:48.819991 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 18 08:53:48.820126 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 18 08:53:48.820145 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 18 08:53:48.820154 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 18 08:53:48.820162 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 18 08:53:48.820173 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 18 08:53:48.820195 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 18 08:53:48.820204 kernel: iommu: Default domain type: Translated
Mar 18 08:53:48.820212 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 18 08:53:48.820333 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 18 08:53:48.820503 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 18 08:53:48.820608 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 18 08:53:48.820628 kernel: vgaarb: loaded
Mar 18 08:53:48.820636 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 18 08:53:48.820647 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 18 08:53:48.820655 kernel: PTP clock support registered
Mar 18 08:53:48.820663 kernel: PCI: Using ACPI for IRQ routing
Mar 18 08:53:48.820671 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 18 08:53:48.820679 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 18 08:53:48.820687 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 18 08:53:48.820695 kernel: clocksource: Switched to clocksource kvm-clock
Mar 18 08:53:48.820703 kernel: VFS: Disk quotas dquot_6.6.0
Mar 18 08:53:48.820711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 18 08:53:48.820720 kernel: pnp: PnP ACPI init
Mar 18 08:53:48.820806 kernel: pnp 00:03: [dma 2]
Mar 18 08:53:48.820818 kernel: pnp: PnP ACPI: found 5 devices
Mar 18 08:53:48.820827 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 18 08:53:48.820835 kernel: NET: Registered PF_INET protocol family
Mar 18 08:53:48.820843 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 18 08:53:48.820851 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 18 08:53:48.820859 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 18 08:53:48.820870 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 18 08:53:48.820878 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 18 08:53:48.820886 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 18 08:53:48.820894 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 18 08:53:48.820902 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 18 08:53:48.820910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 18 08:53:48.820918 kernel: NET: Registered PF_XDP protocol family
Mar 18 08:53:48.820991 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 18 08:53:48.821064 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 18 08:53:48.821139 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 18 08:53:48.821210 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 18 08:53:48.821280 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 18 08:53:48.821362 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 18 08:53:48.821444 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 18 08:53:48.821526 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 18 08:53:48.821538 kernel: PCI: CLS 0 bytes, default 64
Mar 18 08:53:48.821546 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 18 08:53:48.821557 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 18 08:53:48.821583 kernel: Initialise system trusted keyrings
Mar 18 08:53:48.821602 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 18 08:53:48.821610 kernel: Key type asymmetric registered
Mar 18 08:53:48.821618 kernel: Asymmetric key parser 'x509' registered
Mar 18 08:53:48.821626 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 18 08:53:48.821634 kernel: io scheduler mq-deadline registered
Mar 18 08:53:48.821642 kernel: io scheduler kyber registered
Mar 18 08:53:48.821650 kernel: io scheduler bfq registered
Mar 18 08:53:48.821660 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 18 08:53:48.821669 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 18 08:53:48.821677 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 18 08:53:48.821685 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 18 08:53:48.821693 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 18 08:53:48.821701 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 18 08:53:48.821709 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 18 08:53:48.821717 kernel: random: crng init done
Mar 18 08:53:48.821725 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 18 08:53:48.821734 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 18 08:53:48.821742 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 18 08:53:48.821750 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 18 08:53:48.821838 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 18 08:53:48.821914 kernel: rtc_cmos 00:04: registered as rtc0
Mar 18 08:53:48.821989 kernel: rtc_cmos 00:04: setting system clock to 2025-03-18T08:53:48 UTC (1742288028)
Mar 18 08:53:48.822063 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 18 08:53:48.822074 kernel: NET: Registered PF_INET6 protocol family
Mar 18 08:53:48.822085 kernel: Segment Routing with IPv6
Mar 18 08:53:48.822093 kernel: In-situ OAM (IOAM) with IPv6
Mar 18 08:53:48.822101 kernel: NET: Registered PF_PACKET protocol family
Mar 18 08:53:48.822109 kernel: Key type dns_resolver registered
Mar 18 08:53:48.822117 kernel: IPI shorthand broadcast: enabled
Mar 18 08:53:48.822125 kernel: sched_clock: Marking stable (824873010, 157762712)->(1045396581, -62760859)
Mar 18 08:53:48.822133 kernel: registered taskstats version 1
Mar 18 08:53:48.822141 kernel: Loading compiled-in X.509 certificates
Mar 18 08:53:48.822149 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 18 08:53:48.822159 kernel: Key type .fscrypt registered
Mar 18 08:53:48.822166 kernel: Key type fscrypt-provisioning registered
Mar 18 08:53:48.822175 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 18 08:53:48.822183 kernel: ima: Allocated hash algorithm: sha1
Mar 18 08:53:48.822190 kernel: ima: No architecture policies found
Mar 18 08:53:48.822198 kernel: clk: Disabling unused clocks
Mar 18 08:53:48.822206 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 18 08:53:48.822214 kernel: Write protecting the kernel read-only data: 28672k
Mar 18 08:53:48.822224 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 18 08:53:48.822232 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 18 08:53:48.822239 kernel: Run /init as init process
Mar 18 08:53:48.822247 kernel: with arguments:
Mar 18 08:53:48.822255 kernel: /init
Mar 18 08:53:48.822263 kernel: with environment:
Mar 18 08:53:48.822270 kernel: HOME=/
Mar 18 08:53:48.822278 kernel: TERM=linux
Mar 18 08:53:48.822286 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 18 08:53:48.822297 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 18 08:53:48.822309 systemd[1]: Detected virtualization kvm.
Mar 18 08:53:48.822318 systemd[1]: Detected architecture x86-64.
Mar 18 08:53:48.822326 systemd[1]: Running in initrd.
Mar 18 08:53:48.822335 systemd[1]: No hostname configured, using default hostname.
Mar 18 08:53:48.822343 systemd[1]: Hostname set to .
Mar 18 08:53:48.822352 systemd[1]: Initializing machine ID from VM UUID.
Mar 18 08:53:48.822362 systemd[1]: Queued start job for default target initrd.target.
Mar 18 08:53:48.822371 systemd[1]: Started systemd-ask-password-console.path.
Mar 18 08:53:48.822379 systemd[1]: Reached target cryptsetup.target.
Mar 18 08:53:48.822387 systemd[1]: Reached target paths.target.
Mar 18 08:53:48.822396 systemd[1]: Reached target slices.target.
Mar 18 08:53:48.822404 systemd[1]: Reached target swap.target.
Mar 18 08:53:48.822413 systemd[1]: Reached target timers.target.
Mar 18 08:53:48.822422 systemd[1]: Listening on iscsid.socket.
Mar 18 08:53:48.822432 systemd[1]: Listening on iscsiuio.socket.
Mar 18 08:53:48.822446 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 18 08:53:48.822456 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 18 08:53:48.822465 systemd[1]: Listening on systemd-journald.socket.
Mar 18 08:53:48.822474 systemd[1]: Listening on systemd-networkd.socket.
Mar 18 08:53:48.822482 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 18 08:53:48.822493 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 18 08:53:48.822502 systemd[1]: Reached target sockets.target.
Mar 18 08:53:48.822510 systemd[1]: Starting kmod-static-nodes.service...
Mar 18 08:53:48.822519 systemd[1]: Finished network-cleanup.service.
Mar 18 08:53:48.822528 systemd[1]: Starting systemd-fsck-usr.service...
Mar 18 08:53:48.822537 systemd[1]: Starting systemd-journald.service...
Mar 18 08:53:48.822545 systemd[1]: Starting systemd-modules-load.service...
Mar 18 08:53:48.822554 systemd[1]: Starting systemd-resolved.service...
Mar 18 08:53:48.822563 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 18 08:53:48.822594 systemd[1]: Finished kmod-static-nodes.service.
Mar 18 08:53:48.822603 systemd[1]: Finished systemd-fsck-usr.service.
Mar 18 08:53:48.822615 systemd-journald[186]: Journal started
Mar 18 08:53:48.822658 systemd-journald[186]: Runtime Journal (/run/log/journal/a7b30ae0c22843cfb1c6029c91f5265e) is 8.0M, max 78.4M, 70.4M free.
Mar 18 08:53:48.816897 systemd-modules-load[187]: Inserted module 'overlay'
Mar 18 08:53:48.852860 kernel: audit: type=1130 audit(1742288028.846:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 18 08:53:48.852878 systemd[1]: Started systemd-journald.service.
Mar 18 08:53:48.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 18 08:53:48.838421 systemd-resolved[188]: Positive Trust Anchors:
Mar 18 08:53:48.838434 systemd-resolved[188]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 18 08:53:48.838471 systemd-resolved[188]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 18 08:53:48.841105 systemd-resolved[188]: Defaulting to hostname 'linux'.
Mar 18 08:53:48.858352 systemd[1]: Started systemd-resolved.service.
Mar 18 08:53:48.859624 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 18 08:53:48.860714 systemd[1]: Reached target nss-lookup.target.
Mar 18 08:53:48.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 18 08:53:48.862300 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 18 08:53:48.875810 kernel: audit: type=1130 audit(1742288028.857:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.875847 kernel: audit: type=1130 audit(1742288028.858:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.875860 kernel: audit: type=1130 audit(1742288028.860:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.876779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 18 08:53:48.891907 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 18 08:53:48.891936 kernel: Bridge firewalling registered Mar 18 08:53:48.882772 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 18 08:53:48.897590 kernel: audit: type=1130 audit(1742288028.892:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:48.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.888439 systemd-modules-load[187]: Inserted module 'br_netfilter' Mar 18 08:53:48.900076 systemd[1]: Finished dracut-cmdline-ask.service. Mar 18 08:53:48.905757 kernel: audit: type=1130 audit(1742288028.899:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.906762 systemd[1]: Starting dracut-cmdline.service... Mar 18 08:53:48.925146 dracut-cmdline[203]: dracut-dracut-053 Mar 18 08:53:48.925146 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 18 08:53:48.927913 kernel: SCSI subsystem initialized Mar 18 08:53:48.944538 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 18 08:53:48.944597 kernel: device-mapper: uevent: version 1.0.3 Mar 18 08:53:48.947587 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 18 08:53:48.950826 systemd-modules-load[187]: Inserted module 'dm_multipath' Mar 18 08:53:48.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.958612 kernel: audit: type=1130 audit(1742288028.952:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.952342 systemd[1]: Finished systemd-modules-load.service. Mar 18 08:53:48.953645 systemd[1]: Starting systemd-sysctl.service... Mar 18 08:53:48.963972 systemd[1]: Finished systemd-sysctl.service. Mar 18 08:53:48.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.969622 kernel: audit: type=1130 audit(1742288028.963:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:48.981600 kernel: Loading iSCSI transport class v2.0-870. Mar 18 08:53:49.002639 kernel: iscsi: registered transport (tcp) Mar 18 08:53:49.057806 kernel: iscsi: registered transport (qla4xxx) Mar 18 08:53:49.057836 kernel: QLogic iSCSI HBA Driver Mar 18 08:53:49.109446 systemd[1]: Finished dracut-cmdline.service. Mar 18 08:53:49.122553 kernel: audit: type=1130 audit(1742288029.109:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:49.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:49.110973 systemd[1]: Starting dracut-pre-udev.service... Mar 18 08:53:49.169666 kernel: raid6: sse2x4 gen() 13308 MB/s Mar 18 08:53:49.187664 kernel: raid6: sse2x4 xor() 7402 MB/s Mar 18 08:53:49.205666 kernel: raid6: sse2x2 gen() 14726 MB/s Mar 18 08:53:49.223664 kernel: raid6: sse2x2 xor() 8846 MB/s Mar 18 08:53:49.241665 kernel: raid6: sse2x1 gen() 11460 MB/s Mar 18 08:53:49.259971 kernel: raid6: sse2x1 xor() 7010 MB/s Mar 18 08:53:49.260029 kernel: raid6: using algorithm sse2x2 gen() 14726 MB/s Mar 18 08:53:49.260057 kernel: raid6: .... xor() 8846 MB/s, rmw enabled Mar 18 08:53:49.261176 kernel: raid6: using ssse3x2 recovery algorithm Mar 18 08:53:49.280672 kernel: xor: measuring software checksum speed Mar 18 08:53:49.280740 kernel: prefetch64-sse : 18366 MB/sec Mar 18 08:53:49.281905 kernel: generic_sse : 16723 MB/sec Mar 18 08:53:49.281942 kernel: xor: using function: prefetch64-sse (18366 MB/sec) Mar 18 08:53:49.396631 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 18 08:53:49.412198 systemd[1]: Finished dracut-pre-udev.service. Mar 18 08:53:49.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:49.412000 audit: BPF prog-id=7 op=LOAD Mar 18 08:53:49.412000 audit: BPF prog-id=8 op=LOAD Mar 18 08:53:49.413739 systemd[1]: Starting systemd-udevd.service... Mar 18 08:53:49.427383 systemd-udevd[386]: Using default interface naming scheme 'v252'. Mar 18 08:53:49.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:49.431979 systemd[1]: Started systemd-udevd.service. Mar 18 08:53:49.437468 systemd[1]: Starting dracut-pre-trigger.service... Mar 18 08:53:49.464335 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Mar 18 08:53:49.509616 systemd[1]: Finished dracut-pre-trigger.service. Mar 18 08:53:49.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:49.511068 systemd[1]: Starting systemd-udev-trigger.service... Mar 18 08:53:49.568633 systemd[1]: Finished systemd-udev-trigger.service. Mar 18 08:53:49.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:49.628628 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Mar 18 08:53:49.670177 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 18 08:53:49.670201 kernel: GPT:17805311 != 20971519 Mar 18 08:53:49.670212 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 18 08:53:49.670223 kernel: GPT:17805311 != 20971519 Mar 18 08:53:49.670239 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 18 08:53:49.670250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 18 08:53:49.670582 kernel: libata version 3.00 loaded. 
Mar 18 08:53:49.674738 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 18 08:53:49.691369 kernel: scsi host0: ata_piix Mar 18 08:53:49.691489 kernel: scsi host1: ata_piix Mar 18 08:53:49.691617 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Mar 18 08:53:49.691631 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Mar 18 08:53:49.696589 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Mar 18 08:53:49.699652 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 18 08:53:49.750126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 18 08:53:49.751391 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 18 08:53:49.757649 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 18 08:53:49.762260 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 18 08:53:49.764119 systemd[1]: Starting disk-uuid.service... Mar 18 08:53:49.779550 disk-uuid[470]: Primary Header is updated. Mar 18 08:53:49.779550 disk-uuid[470]: Secondary Entries is updated. Mar 18 08:53:49.779550 disk-uuid[470]: Secondary Header is updated. Mar 18 08:53:49.789639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 18 08:53:49.803601 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 18 08:53:50.815633 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 18 08:53:50.816686 disk-uuid[471]: The operation has completed successfully. Mar 18 08:53:50.878823 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 18 08:53:50.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:50.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:50.879063 systemd[1]: Finished disk-uuid.service. Mar 18 08:53:50.903493 systemd[1]: Starting verity-setup.service... Mar 18 08:53:50.921693 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Mar 18 08:53:51.027648 systemd[1]: Found device dev-mapper-usr.device. Mar 18 08:53:51.030721 systemd[1]: Mounting sysusr-usr.mount... Mar 18 08:53:51.032357 systemd[1]: Finished verity-setup.service. Mar 18 08:53:51.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.173627 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 18 08:53:51.174169 systemd[1]: Mounted sysusr-usr.mount. Mar 18 08:53:51.174854 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 18 08:53:51.175610 systemd[1]: Starting ignition-setup.service... Mar 18 08:53:51.179811 systemd[1]: Starting parse-ip-for-networkd.service... Mar 18 08:53:51.196349 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 18 08:53:51.196407 kernel: BTRFS info (device vda6): using free space tree Mar 18 08:53:51.196419 kernel: BTRFS info (device vda6): has skinny extents Mar 18 08:53:51.213593 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 18 08:53:51.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.229757 systemd[1]: Finished ignition-setup.service. Mar 18 08:53:51.231132 systemd[1]: Starting ignition-fetch-offline.service... Mar 18 08:53:51.314312 systemd[1]: Finished parse-ip-for-networkd.service. 
Mar 18 08:53:51.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.316000 audit: BPF prog-id=9 op=LOAD Mar 18 08:53:51.317281 systemd[1]: Starting systemd-networkd.service... Mar 18 08:53:51.339542 systemd-networkd[641]: lo: Link UP Mar 18 08:53:51.340247 systemd-networkd[641]: lo: Gained carrier Mar 18 08:53:51.341395 systemd-networkd[641]: Enumeration completed Mar 18 08:53:51.342041 systemd[1]: Started systemd-networkd.service. Mar 18 08:53:51.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.342390 systemd-networkd[641]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 18 08:53:51.342750 systemd[1]: Reached target network.target. Mar 18 08:53:51.344328 systemd[1]: Starting iscsiuio.service... Mar 18 08:53:51.346532 systemd-networkd[641]: eth0: Link UP Mar 18 08:53:51.347042 systemd-networkd[641]: eth0: Gained carrier Mar 18 08:53:51.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.351696 systemd[1]: Started iscsiuio.service. Mar 18 08:53:51.352993 systemd[1]: Starting iscsid.service... Mar 18 08:53:51.356295 iscsid[650]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 18 08:53:51.356295 iscsid[650]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 18 08:53:51.356295 iscsid[650]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 18 08:53:51.356295 iscsid[650]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 18 08:53:51.356295 iscsid[650]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 18 08:53:51.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.368480 iscsid[650]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 18 08:53:51.359007 systemd[1]: Started iscsid.service. Mar 18 08:53:51.360807 systemd[1]: Starting dracut-initqueue.service... Mar 18 08:53:51.369680 systemd-networkd[641]: eth0: DHCPv4 address 172.24.4.149/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 18 08:53:51.371556 systemd[1]: Finished dracut-initqueue.service. Mar 18 08:53:51.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.372667 systemd[1]: Reached target remote-fs-pre.target. Mar 18 08:53:51.374344 systemd[1]: Reached target remote-cryptsetup.target. Mar 18 08:53:51.375829 systemd[1]: Reached target remote-fs.target. Mar 18 08:53:51.377899 systemd[1]: Starting dracut-pre-mount.service... Mar 18 08:53:51.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.387084 systemd[1]: Finished dracut-pre-mount.service. 
Mar 18 08:53:51.496815 ignition[559]: Ignition 2.14.0 Mar 18 08:53:51.496851 ignition[559]: Stage: fetch-offline Mar 18 08:53:51.497007 ignition[559]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:51.497054 ignition[559]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:51.499429 ignition[559]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:51.499744 ignition[559]: parsed url from cmdline: "" Mar 18 08:53:51.499754 ignition[559]: no config URL provided Mar 18 08:53:51.499768 ignition[559]: reading system config file "/usr/lib/ignition/user.ign" Mar 18 08:53:51.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.501723 systemd[1]: Finished ignition-fetch-offline.service. Mar 18 08:53:51.499789 ignition[559]: no config at "/usr/lib/ignition/user.ign" Mar 18 08:53:51.503831 systemd[1]: Starting ignition-fetch.service... 
Mar 18 08:53:51.499809 ignition[559]: failed to fetch config: resource requires networking Mar 18 08:53:51.500502 ignition[559]: Ignition finished successfully Mar 18 08:53:51.512810 ignition[664]: Ignition 2.14.0 Mar 18 08:53:51.512818 ignition[664]: Stage: fetch Mar 18 08:53:51.512932 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:51.512953 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:51.513915 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:51.514009 ignition[664]: parsed url from cmdline: "" Mar 18 08:53:51.514013 ignition[664]: no config URL provided Mar 18 08:53:51.514019 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Mar 18 08:53:51.514027 ignition[664]: no config at "/usr/lib/ignition/user.ign" Mar 18 08:53:51.521994 ignition[664]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 18 08:53:51.522055 ignition[664]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 18 08:53:51.522068 ignition[664]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 18 08:53:51.764442 ignition[664]: GET result: OK Mar 18 08:53:51.764693 ignition[664]: parsing config with SHA512: 939684e1706acf04f79c3973f90a9f89278a519e5f04ba0f4e7bc8a4590bfa21aad4751bdf3c66710846d976ab4b36a446e0f46c87b7ae16282d26cf5eb18964 Mar 18 08:53:51.782726 unknown[664]: fetched base config from "system" Mar 18 08:53:51.782757 unknown[664]: fetched base config from "system" Mar 18 08:53:51.783969 ignition[664]: fetch: fetch complete Mar 18 08:53:51.782772 unknown[664]: fetched user config from "openstack" Mar 18 08:53:51.783982 ignition[664]: fetch: fetch passed Mar 18 08:53:51.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.786970 systemd[1]: Finished ignition-fetch.service. Mar 18 08:53:51.784064 ignition[664]: Ignition finished successfully Mar 18 08:53:51.791293 systemd[1]: Starting ignition-kargs.service... Mar 18 08:53:51.820731 ignition[670]: Ignition 2.14.0 Mar 18 08:53:51.820759 ignition[670]: Stage: kargs Mar 18 08:53:51.821083 ignition[670]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:51.821129 ignition[670]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:51.823383 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:51.826229 ignition[670]: kargs: kargs passed Mar 18 08:53:51.826338 ignition[670]: Ignition finished successfully Mar 18 08:53:51.828251 systemd[1]: Finished ignition-kargs.service. Mar 18 08:53:51.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 18 08:53:51.832188 systemd[1]: Starting ignition-disks.service... Mar 18 08:53:51.850731 ignition[676]: Ignition 2.14.0 Mar 18 08:53:51.852678 ignition[676]: Stage: disks Mar 18 08:53:51.854153 ignition[676]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:51.855436 ignition[676]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:51.859210 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:51.862596 ignition[676]: disks: disks passed Mar 18 08:53:51.862758 ignition[676]: Ignition finished successfully Mar 18 08:53:51.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.864441 systemd[1]: Finished ignition-disks.service. Mar 18 08:53:51.865950 systemd[1]: Reached target initrd-root-device.target. Mar 18 08:53:51.868197 systemd[1]: Reached target local-fs-pre.target. Mar 18 08:53:51.870650 systemd[1]: Reached target local-fs.target. Mar 18 08:53:51.873113 systemd[1]: Reached target sysinit.target. Mar 18 08:53:51.875436 systemd[1]: Reached target basic.target. Mar 18 08:53:51.879640 systemd[1]: Starting systemd-fsck-root.service... Mar 18 08:53:51.912428 systemd-fsck[684]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 18 08:53:51.927397 systemd[1]: Finished systemd-fsck-root.service. Mar 18 08:53:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:51.930550 systemd[1]: Mounting sysroot.mount... Mar 18 08:53:51.959601 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Mar 18 08:53:51.961541 systemd[1]: Mounted sysroot.mount. Mar 18 08:53:51.962954 systemd[1]: Reached target initrd-root-fs.target. Mar 18 08:53:51.967817 systemd[1]: Mounting sysroot-usr.mount... Mar 18 08:53:51.969852 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 18 08:53:51.971311 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 18 08:53:51.976514 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 18 08:53:51.976628 systemd[1]: Reached target ignition-diskful.target. Mar 18 08:53:51.985071 systemd[1]: Mounted sysroot-usr.mount. Mar 18 08:53:51.990251 systemd[1]: Starting initrd-setup-root.service... Mar 18 08:53:52.004616 initrd-setup-root[695]: cut: /sysroot/etc/passwd: No such file or directory Mar 18 08:53:52.030123 initrd-setup-root[703]: cut: /sysroot/etc/group: No such file or directory Mar 18 08:53:52.044495 initrd-setup-root[711]: cut: /sysroot/etc/shadow: No such file or directory Mar 18 08:53:52.056459 initrd-setup-root[719]: cut: /sysroot/etc/gshadow: No such file or directory Mar 18 08:53:52.069983 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 18 08:53:52.092603 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (728) Mar 18 08:53:52.100725 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 18 08:53:52.100748 kernel: BTRFS info (device vda6): using free space tree Mar 18 08:53:52.100760 kernel: BTRFS info (device vda6): has skinny extents Mar 18 08:53:52.117185 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 18 08:53:52.145700 systemd[1]: Finished initrd-setup-root.service. Mar 18 08:53:52.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:52.147093 systemd[1]: Starting ignition-mount.service... Mar 18 08:53:52.148159 systemd[1]: Starting sysroot-boot.service... Mar 18 08:53:52.155250 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 18 08:53:52.155358 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 18 08:53:52.171601 ignition[758]: INFO : Ignition 2.14.0 Mar 18 08:53:52.172376 ignition[758]: INFO : Stage: mount Mar 18 08:53:52.173038 ignition[758]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:52.173880 ignition[758]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:52.176147 ignition[758]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:52.178935 ignition[758]: INFO : mount: mount passed Mar 18 08:53:52.179512 ignition[758]: INFO : Ignition finished successfully Mar 18 08:53:52.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:52.180726 systemd[1]: Finished ignition-mount.service. Mar 18 08:53:52.184093 coreos-metadata[690]: Mar 18 08:53:52.184 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 18 08:53:52.194280 systemd[1]: Finished sysroot-boot.service. Mar 18 08:53:52.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:52.201602 coreos-metadata[690]: Mar 18 08:53:52.201 INFO Fetch successful Mar 18 08:53:52.201602 coreos-metadata[690]: Mar 18 08:53:52.201 INFO wrote hostname ci-3510-3-7-7-00419dcf52.novalocal to /sysroot/etc/hostname Mar 18 08:53:52.205506 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 18 08:53:52.205646 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 18 08:53:52.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:52.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:52.207866 systemd[1]: Starting ignition-files.service... Mar 18 08:53:52.215302 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 18 08:53:52.225606 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (767) Mar 18 08:53:52.228931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 18 08:53:52.228953 kernel: BTRFS info (device vda6): using free space tree Mar 18 08:53:52.228965 kernel: BTRFS info (device vda6): has skinny extents Mar 18 08:53:52.238460 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 18 08:53:52.248696 ignition[786]: INFO : Ignition 2.14.0 Mar 18 08:53:52.249540 ignition[786]: INFO : Stage: files Mar 18 08:53:52.250263 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:52.251033 ignition[786]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:52.253033 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:52.256079 ignition[786]: DEBUG : files: compiled without relabeling support, skipping Mar 18 08:53:52.257394 ignition[786]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 18 08:53:52.257394 ignition[786]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 18 08:53:52.260719 ignition[786]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 18 08:53:52.261474 ignition[786]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 18 08:53:52.262400 unknown[786]: wrote ssh authorized keys file for user: core Mar 18 08:53:52.263085 ignition[786]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 18 08:53:52.263813 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 18 08:53:52.263813 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 18 08:53:52.340227 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 18 08:53:52.650181 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 18 08:53:52.652913 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 18 08:53:52.652913 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 18 08:53:53.233882 systemd-networkd[641]: eth0: Gained IPv6LL Mar 18 08:53:53.343553 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 18 08:53:53.949425 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 18 08:53:53.951805 
ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 18 08:53:53.951805 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Mar 18 08:53:54.487762 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 18 08:53:56.719807 ignition[786]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Mar 18 08:53:56.719807 ignition[786]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Mar 18 08:53:56.719807 ignition[786]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Mar 18 08:53:56.719807 ignition[786]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 18 08:53:56.729828 ignition[786]: INFO : 
files: op(d): [finished] processing unit "prepare-helm.service" Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 18 08:53:56.729828 ignition[786]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 18 08:53:56.755396 kernel: kauditd_printk_skb: 27 callbacks suppressed Mar 18 08:53:56.755423 kernel: audit: type=1130 audit(1742288036.742:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.739098 systemd[1]: Finished ignition-files.service. Mar 18 08:53:56.756136 ignition[786]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 18 08:53:56.756136 ignition[786]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 18 08:53:56.756136 ignition[786]: INFO : files: files passed Mar 18 08:53:56.756136 ignition[786]: INFO : Ignition finished successfully Mar 18 08:53:56.784696 kernel: audit: type=1130 audit(1742288036.757:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:56.784752 kernel: audit: type=1131 audit(1742288036.757:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.747550 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 18 08:53:56.750459 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 18 08:53:56.790351 initrd-setup-root-after-ignition[811]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 18 08:53:56.805298 kernel: audit: type=1130 audit(1742288036.790:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.751437 systemd[1]: Starting ignition-quench.service... Mar 18 08:53:56.757209 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 18 08:53:56.757309 systemd[1]: Finished ignition-quench.service. Mar 18 08:53:56.788963 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 18 08:53:56.791785 systemd[1]: Reached target ignition-complete.target. 
Mar 18 08:53:56.808034 systemd[1]: Starting initrd-parse-etc.service... Mar 18 08:53:56.836721 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 18 08:53:56.838472 systemd[1]: Finished initrd-parse-etc.service. Mar 18 08:53:56.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.840601 systemd[1]: Reached target initrd-fs.target. Mar 18 08:53:56.864275 kernel: audit: type=1130 audit(1742288036.839:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.864323 kernel: audit: type=1131 audit(1742288036.839:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.863057 systemd[1]: Reached target initrd.target. Mar 18 08:53:56.864735 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 18 08:53:56.865491 systemd[1]: Starting dracut-pre-pivot.service... Mar 18 08:53:56.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.883111 systemd[1]: Finished dracut-pre-pivot.service. Mar 18 08:53:56.889087 kernel: audit: type=1130 audit(1742288036.883:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:56.884414 systemd[1]: Starting initrd-cleanup.service... Mar 18 08:53:56.895312 systemd[1]: Stopped target nss-lookup.target. Mar 18 08:53:56.895908 systemd[1]: Stopped target remote-cryptsetup.target. Mar 18 08:53:56.896913 systemd[1]: Stopped target timers.target. Mar 18 08:53:56.897888 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 18 08:53:56.903667 kernel: audit: type=1131 audit(1742288036.898:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.897997 systemd[1]: Stopped dracut-pre-pivot.service. Mar 18 08:53:56.898927 systemd[1]: Stopped target initrd.target. Mar 18 08:53:56.904222 systemd[1]: Stopped target basic.target. Mar 18 08:53:56.905099 systemd[1]: Stopped target ignition-complete.target. Mar 18 08:53:56.905999 systemd[1]: Stopped target ignition-diskful.target. Mar 18 08:53:56.906895 systemd[1]: Stopped target initrd-root-device.target. Mar 18 08:53:56.907836 systemd[1]: Stopped target remote-fs.target. Mar 18 08:53:56.908741 systemd[1]: Stopped target remote-fs-pre.target. Mar 18 08:53:56.909664 systemd[1]: Stopped target sysinit.target. Mar 18 08:53:56.910517 systemd[1]: Stopped target local-fs.target. Mar 18 08:53:56.911443 systemd[1]: Stopped target local-fs-pre.target. Mar 18 08:53:56.912392 systemd[1]: Stopped target swap.target. Mar 18 08:53:56.918866 kernel: audit: type=1131 audit(1742288036.913:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:56.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.913234 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 18 08:53:56.913374 systemd[1]: Stopped dracut-pre-mount.service. Mar 18 08:53:56.925177 kernel: audit: type=1131 audit(1742288036.919:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.914274 systemd[1]: Stopped target cryptsetup.target. Mar 18 08:53:56.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.919365 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 18 08:53:56.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.919503 systemd[1]: Stopped dracut-initqueue.service. Mar 18 08:53:56.920505 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 18 08:53:56.920678 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 18 08:53:56.925832 systemd[1]: ignition-files.service: Deactivated successfully. Mar 18 08:53:56.925968 systemd[1]: Stopped ignition-files.service. Mar 18 08:53:56.927546 systemd[1]: Stopping ignition-mount.service... 
Mar 18 08:53:56.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.944666 systemd[1]: Stopping sysroot-boot.service... Mar 18 08:53:56.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.945226 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 18 08:53:56.945465 systemd[1]: Stopped systemd-udev-trigger.service. Mar 18 08:53:56.946207 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 18 08:53:56.946349 systemd[1]: Stopped dracut-pre-trigger.service. Mar 18 08:53:56.950302 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 18 08:53:56.960742 ignition[824]: INFO : Ignition 2.14.0 Mar 18 08:53:56.960742 ignition[824]: INFO : Stage: umount Mar 18 08:53:56.960742 ignition[824]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 18 08:53:56.960742 ignition[824]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 18 08:53:56.960742 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 18 08:53:56.960742 ignition[824]: INFO : umount: umount passed Mar 18 08:53:56.960742 ignition[824]: INFO : Ignition finished successfully Mar 18 08:53:56.950391 systemd[1]: Finished initrd-cleanup.service. Mar 18 08:53:56.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:56.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.967072 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 18 08:53:56.967517 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 18 08:53:56.968654 systemd[1]: Stopped ignition-mount.service. Mar 18 08:53:56.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.969677 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 18 08:53:56.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.969774 systemd[1]: Stopped sysroot-boot.service. Mar 18 08:53:56.971149 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 18 08:53:56.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.971191 systemd[1]: Stopped ignition-disks.service. Mar 18 08:53:56.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.971840 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 18 08:53:56.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.971877 systemd[1]: Stopped ignition-kargs.service. 
Mar 18 08:53:56.972817 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 18 08:53:56.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.972851 systemd[1]: Stopped ignition-fetch.service. Mar 18 08:53:56.973751 systemd[1]: Stopped target network.target. Mar 18 08:53:56.974683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 18 08:53:56.974722 systemd[1]: Stopped ignition-fetch-offline.service. Mar 18 08:53:56.975631 systemd[1]: Stopped target paths.target. Mar 18 08:53:56.976501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 18 08:53:56.979616 systemd[1]: Stopped systemd-ask-password-console.path. Mar 18 08:53:56.980243 systemd[1]: Stopped target slices.target. Mar 18 08:53:56.981171 systemd[1]: Stopped target sockets.target. Mar 18 08:53:56.982078 systemd[1]: iscsid.socket: Deactivated successfully. Mar 18 08:53:56.982101 systemd[1]: Closed iscsid.socket. Mar 18 08:53:56.982943 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 18 08:53:56.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.982967 systemd[1]: Closed iscsiuio.socket. Mar 18 08:53:56.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.983820 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 18 08:53:56.983855 systemd[1]: Stopped ignition-setup.service. Mar 18 08:53:56.984710 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 18 08:53:56.984747 systemd[1]: Stopped initrd-setup-root.service. 
Mar 18 08:53:56.986079 systemd[1]: Stopping systemd-networkd.service... Mar 18 08:53:56.987201 systemd[1]: Stopping systemd-resolved.service... Mar 18 08:53:56.989620 systemd-networkd[641]: eth0: DHCPv6 lease lost Mar 18 08:53:56.990717 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 18 08:53:56.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.990805 systemd[1]: Stopped systemd-networkd.service. Mar 18 08:53:56.992093 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 18 08:53:56.992131 systemd[1]: Closed systemd-networkd.socket. Mar 18 08:53:56.994638 systemd[1]: Stopping network-cleanup.service... Mar 18 08:53:56.995000 audit: BPF prog-id=9 op=UNLOAD Mar 18 08:53:56.996470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 18 08:53:56.996523 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 18 08:53:56.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.997882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 18 08:53:56.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.997918 systemd[1]: Stopped systemd-sysctl.service. Mar 18 08:53:56.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:56.999090 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 18 08:53:56.999127 systemd[1]: Stopped systemd-modules-load.service. Mar 18 08:53:56.999900 systemd[1]: Stopping systemd-udevd.service... Mar 18 08:53:57.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.001899 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 18 08:53:57.002382 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 18 08:53:57.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.002477 systemd[1]: Stopped systemd-resolved.service. Mar 18 08:53:57.004091 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 18 08:53:57.006000 audit: BPF prog-id=6 op=UNLOAD Mar 18 08:53:57.004209 systemd[1]: Stopped systemd-udevd.service. Mar 18 08:53:57.006100 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 18 08:53:57.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.006137 systemd[1]: Closed systemd-udevd-control.socket. Mar 18 08:53:57.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.006833 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 18 08:53:57.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:57.006861 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 18 08:53:57.009013 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 18 08:53:57.009051 systemd[1]: Stopped dracut-pre-udev.service. Mar 18 08:53:57.009988 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 18 08:53:57.010023 systemd[1]: Stopped dracut-cmdline.service. Mar 18 08:53:57.011129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 18 08:53:57.011166 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 18 08:53:57.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.012784 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 18 08:53:57.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.013765 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 18 08:53:57.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.013812 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Mar 18 08:53:57.020947 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 18 08:53:57.020999 systemd[1]: Stopped kmod-static-nodes.service. Mar 18 08:53:57.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.021674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 18 08:53:57.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.021712 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 18 08:53:57.023549 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 18 08:53:57.024088 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 18 08:53:57.024178 systemd[1]: Stopped network-cleanup.service. Mar 18 08:53:57.025012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 18 08:53:57.025088 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 18 08:53:57.025893 systemd[1]: Reached target initrd-switch-root.target. Mar 18 08:53:57.027467 systemd[1]: Starting initrd-switch-root.service... Mar 18 08:53:57.045190 systemd[1]: Switching root. Mar 18 08:53:57.064655 iscsid[650]: iscsid shutting down. Mar 18 08:53:57.065279 systemd-journald[186]: Journal stopped Mar 18 08:54:01.121830 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Mar 18 08:54:01.121875 kernel: SELinux: Class mctp_socket not defined in policy. Mar 18 08:54:01.121891 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 18 08:54:01.121903 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 18 08:54:01.121917 kernel: SELinux: policy capability network_peer_controls=1 Mar 18 08:54:01.121928 kernel: SELinux: policy capability open_perms=1 Mar 18 08:54:01.121941 kernel: SELinux: policy capability extended_socket_class=1 Mar 18 08:54:01.121952 kernel: SELinux: policy capability always_check_network=0 Mar 18 08:54:01.121965 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 18 08:54:01.121978 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 18 08:54:01.121989 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 18 08:54:01.121999 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 18 08:54:01.122011 systemd[1]: Successfully loaded SELinux policy in 99.472ms. Mar 18 08:54:01.122027 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.101ms. Mar 18 08:54:01.122041 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 18 08:54:01.122057 systemd[1]: Detected virtualization kvm. Mar 18 08:54:01.122069 systemd[1]: Detected architecture x86-64. Mar 18 08:54:01.122082 systemd[1]: Detected first boot. Mar 18 08:54:01.122094 systemd[1]: Hostname set to . Mar 18 08:54:01.122106 systemd[1]: Initializing machine ID from VM UUID. Mar 18 08:54:01.122117 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 18 08:54:01.122129 systemd[1]: Populated /etc with preset unit settings. Mar 18 08:54:01.122143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 18 08:54:01.122157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 18 08:54:01.122171 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 18 08:54:01.122184 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 18 08:54:01.122195 systemd[1]: Stopped iscsiuio.service. Mar 18 08:54:01.122207 systemd[1]: iscsid.service: Deactivated successfully. Mar 18 08:54:01.122219 systemd[1]: Stopped iscsid.service. Mar 18 08:54:01.122233 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 18 08:54:01.122245 systemd[1]: Stopped initrd-switch-root.service. Mar 18 08:54:01.122257 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 18 08:54:01.122269 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 18 08:54:01.122281 systemd[1]: Created slice system-addon\x2drun.slice. Mar 18 08:54:01.122293 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Mar 18 08:54:01.122305 systemd[1]: Created slice system-getty.slice. Mar 18 08:54:01.122317 systemd[1]: Created slice system-modprobe.slice. Mar 18 08:54:01.122329 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 18 08:54:01.122343 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 18 08:54:01.122355 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 18 08:54:01.122367 systemd[1]: Created slice user.slice. Mar 18 08:54:01.122379 systemd[1]: Started systemd-ask-password-console.path. Mar 18 08:54:01.122390 systemd[1]: Started systemd-ask-password-wall.path. Mar 18 08:54:01.122402 systemd[1]: Set up automount boot.automount. Mar 18 08:54:01.122416 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Mar 18 08:54:01.122428 systemd[1]: Stopped target initrd-switch-root.target. Mar 18 08:54:01.122440 systemd[1]: Stopped target initrd-fs.target. Mar 18 08:54:01.122451 systemd[1]: Stopped target initrd-root-fs.target. Mar 18 08:54:01.122463 systemd[1]: Reached target integritysetup.target. Mar 18 08:54:01.122475 systemd[1]: Reached target remote-cryptsetup.target. Mar 18 08:54:01.122486 systemd[1]: Reached target remote-fs.target. Mar 18 08:54:01.122498 systemd[1]: Reached target slices.target. Mar 18 08:54:01.122531 systemd[1]: Reached target swap.target. Mar 18 08:54:01.122546 systemd[1]: Reached target torcx.target. Mar 18 08:54:01.122558 systemd[1]: Reached target veritysetup.target. Mar 18 08:54:01.122582 systemd[1]: Listening on systemd-coredump.socket. Mar 18 08:54:01.122596 systemd[1]: Listening on systemd-initctl.socket. Mar 18 08:54:01.122607 systemd[1]: Listening on systemd-networkd.socket. Mar 18 08:54:01.122618 systemd[1]: Listening on systemd-udevd-control.socket. Mar 18 08:54:01.122631 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 18 08:54:01.122643 systemd[1]: Listening on systemd-userdbd.socket. Mar 18 08:54:01.122655 systemd[1]: Mounting dev-hugepages.mount... Mar 18 08:54:01.122666 systemd[1]: Mounting dev-mqueue.mount... Mar 18 08:54:01.122679 systemd[1]: Mounting media.mount... Mar 18 08:54:01.122691 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:01.122703 systemd[1]: Mounting sys-kernel-debug.mount... Mar 18 08:54:01.122714 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 18 08:54:01.122726 systemd[1]: Mounting tmp.mount... Mar 18 08:54:01.122738 systemd[1]: Starting flatcar-tmpfiles.service... Mar 18 08:54:01.122749 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 18 08:54:01.122761 systemd[1]: Starting kmod-static-nodes.service... 
Mar 18 08:54:01.122772 systemd[1]: Starting modprobe@configfs.service... Mar 18 08:54:01.122785 systemd[1]: Starting modprobe@dm_mod.service... Mar 18 08:54:01.122797 systemd[1]: Starting modprobe@drm.service... Mar 18 08:54:01.122809 systemd[1]: Starting modprobe@efi_pstore.service... Mar 18 08:54:01.122821 systemd[1]: Starting modprobe@fuse.service... Mar 18 08:54:01.122832 systemd[1]: Starting modprobe@loop.service... Mar 18 08:54:01.122845 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 18 08:54:01.122856 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 18 08:54:01.122868 systemd[1]: Stopped systemd-fsck-root.service. Mar 18 08:54:01.122880 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 18 08:54:01.122894 systemd[1]: Stopped systemd-fsck-usr.service. Mar 18 08:54:01.122905 systemd[1]: Stopped systemd-journald.service. Mar 18 08:54:01.122917 systemd[1]: Starting systemd-journald.service... Mar 18 08:54:01.122928 systemd[1]: Starting systemd-modules-load.service... Mar 18 08:54:01.122940 systemd[1]: Starting systemd-network-generator.service... Mar 18 08:54:01.122952 systemd[1]: Starting systemd-remount-fs.service... Mar 18 08:54:01.122964 systemd[1]: Starting systemd-udev-trigger.service... Mar 18 08:54:01.122975 kernel: loop: module loaded Mar 18 08:54:01.122986 systemd[1]: verity-setup.service: Deactivated successfully. Mar 18 08:54:01.123000 systemd[1]: Stopped verity-setup.service. Mar 18 08:54:01.123012 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:01.123024 systemd[1]: Mounted dev-hugepages.mount. Mar 18 08:54:01.123035 systemd[1]: Mounted dev-mqueue.mount. Mar 18 08:54:01.123046 systemd[1]: Mounted media.mount. Mar 18 08:54:01.123058 systemd[1]: Mounted sys-kernel-debug.mount. Mar 18 08:54:01.123069 systemd[1]: Mounted sys-kernel-tracing.mount. 
Mar 18 08:54:01.123081 kernel: fuse: init (API version 7.34) Mar 18 08:54:01.123092 systemd[1]: Mounted tmp.mount. Mar 18 08:54:01.123106 systemd[1]: Finished kmod-static-nodes.service. Mar 18 08:54:01.123118 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 18 08:54:01.123131 systemd[1]: Finished modprobe@configfs.service. Mar 18 08:54:01.123142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 18 08:54:01.123154 systemd[1]: Finished modprobe@dm_mod.service. Mar 18 08:54:01.123170 systemd-journald[926]: Journal started Mar 18 08:54:01.123212 systemd-journald[926]: Runtime Journal (/run/log/journal/a7b30ae0c22843cfb1c6029c91f5265e) is 8.0M, max 78.4M, 70.4M free. Mar 18 08:53:57.379000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 18 08:53:57.469000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 18 08:53:57.469000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 18 08:53:57.469000 audit: BPF prog-id=10 op=LOAD Mar 18 08:53:57.469000 audit: BPF prog-id=10 op=UNLOAD Mar 18 08:53:57.469000 audit: BPF prog-id=11 op=LOAD Mar 18 08:53:57.469000 audit: BPF prog-id=11 op=UNLOAD Mar 18 08:53:57.647000 audit[856]: AVC avc: denied { associate } for pid=856 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 18 08:53:57.647000 audit[856]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178cc a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=839 pid=856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 18 08:53:57.647000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 18 08:53:57.650000 audit[856]: AVC avc: denied { associate } for pid=856 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 18 08:53:57.650000 audit[856]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a5 a2=1ed a3=0 items=2 ppid=839 pid=856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 18 08:53:57.650000 audit: CWD cwd="/" Mar 18 08:53:57.650000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:53:57.650000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:53:57.650000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 18 08:54:00.864000 audit: BPF prog-id=12 op=LOAD Mar 18 08:54:00.864000 audit: BPF prog-id=3 op=UNLOAD Mar 18 08:54:00.864000 audit: BPF prog-id=13 op=LOAD Mar 18 08:54:00.864000 
audit: BPF prog-id=14 op=LOAD Mar 18 08:54:00.864000 audit: BPF prog-id=4 op=UNLOAD Mar 18 08:54:01.125866 systemd[1]: Started systemd-journald.service. Mar 18 08:54:00.864000 audit: BPF prog-id=5 op=UNLOAD Mar 18 08:54:00.866000 audit: BPF prog-id=15 op=LOAD Mar 18 08:54:00.866000 audit: BPF prog-id=12 op=UNLOAD Mar 18 08:54:00.866000 audit: BPF prog-id=16 op=LOAD Mar 18 08:54:00.866000 audit: BPF prog-id=17 op=LOAD Mar 18 08:54:00.866000 audit: BPF prog-id=13 op=UNLOAD Mar 18 08:54:00.866000 audit: BPF prog-id=14 op=UNLOAD Mar 18 08:54:00.866000 audit: BPF prog-id=18 op=LOAD Mar 18 08:54:00.866000 audit: BPF prog-id=15 op=UNLOAD Mar 18 08:54:00.867000 audit: BPF prog-id=19 op=LOAD Mar 18 08:54:00.867000 audit: BPF prog-id=20 op=LOAD Mar 18 08:54:00.867000 audit: BPF prog-id=16 op=UNLOAD Mar 18 08:54:00.867000 audit: BPF prog-id=17 op=UNLOAD Mar 18 08:54:00.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:00.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:00.890000 audit: BPF prog-id=18 op=UNLOAD Mar 18 08:54:00.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:00.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:00.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.062000 audit: BPF prog-id=21 op=LOAD Mar 18 08:54:01.062000 audit: BPF prog-id=22 op=LOAD Mar 18 08:54:01.062000 audit: BPF prog-id=23 op=LOAD Mar 18 08:54:01.062000 audit: BPF prog-id=19 op=UNLOAD Mar 18 08:54:01.062000 audit: BPF prog-id=20 op=UNLOAD Mar 18 08:54:01.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:01.116000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 18 08:54:01.116000 audit[926]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffc287e7c0 a2=4000 a3=7fffc287e85c items=0 ppid=1 pid=926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 18 08:54:01.116000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 18 08:54:01.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:53:57.644164 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 18 08:54:00.863183 systemd[1]: Queued start job for default target multi-user.target. Mar 18 08:53:57.645403 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 18 08:54:00.863197 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 18 08:53:57.645427 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 18 08:54:00.868560 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 18 08:53:57.645464 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 18 08:53:57.645477 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 18 08:53:57.645514 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 18 08:53:57.645530 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 18 08:54:01.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:53:57.645778 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 18 08:54:01.128428 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 18 08:53:57.645820 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 18 08:53:57.645836 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 18 08:53:57.646887 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 18 08:53:57.646928 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 18 08:53:57.646950 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 18 08:53:57.646968 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 18 08:53:57.646987 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 18 08:53:57.647004 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:53:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 18 08:54:00.477787 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" 
level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 18 08:54:00.478063 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 18 08:54:00.478299 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 18 08:54:00.479097 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 18 08:54:00.479165 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 18 08:54:00.479233 /usr/lib/systemd/system-generators/torcx-generator[856]: time="2025-03-18T08:54:00Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 18 08:54:01.130745 systemd[1]: Finished modprobe@drm.service. 
Mar 18 08:54:01.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.131707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 18 08:54:01.132048 systemd[1]: Finished modprobe@efi_pstore.service. Mar 18 08:54:01.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.132756 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 18 08:54:01.135449 systemd[1]: Finished modprobe@fuse.service. Mar 18 08:54:01.136139 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 18 08:54:01.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.136445 systemd[1]: Finished modprobe@loop.service. Mar 18 08:54:01.137193 systemd[1]: Finished systemd-modules-load.service. 
Mar 18 08:54:01.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.138152 systemd[1]: Finished systemd-network-generator.service. Mar 18 08:54:01.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.138986 systemd[1]: Finished systemd-remount-fs.service. Mar 18 08:54:01.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.139916 systemd[1]: Reached target network-pre.target. Mar 18 08:54:01.141764 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 18 08:54:01.146086 systemd[1]: Mounting sys-kernel-config.mount... Mar 18 08:54:01.148684 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 18 08:54:01.151964 systemd[1]: Starting systemd-hwdb-update.service... Mar 18 08:54:01.153388 systemd[1]: Starting systemd-journal-flush.service... 
Mar 18 08:54:01.153914 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 18 08:54:01.154833 systemd[1]: Starting systemd-random-seed.service... Mar 18 08:54:01.155349 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 18 08:54:01.156401 systemd[1]: Starting systemd-sysctl.service... Mar 18 08:54:01.158347 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 18 08:54:01.161327 systemd[1]: Mounted sys-kernel-config.mount. Mar 18 08:54:01.166383 systemd-journald[926]: Time spent on flushing to /var/log/journal/a7b30ae0c22843cfb1c6029c91f5265e is 36.442ms for 1111 entries. Mar 18 08:54:01.166383 systemd-journald[926]: System Journal (/var/log/journal/a7b30ae0c22843cfb1c6029c91f5265e) is 8.0M, max 584.8M, 576.8M free. Mar 18 08:54:01.217179 systemd-journald[926]: Received client request to flush runtime journal. Mar 18 08:54:01.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.181657 systemd[1]: Finished systemd-random-seed.service. Mar 18 08:54:01.182262 systemd[1]: Reached target first-boot-complete.target. Mar 18 08:54:01.188353 systemd[1]: Finished systemd-sysctl.service. Mar 18 08:54:01.217024 systemd[1]: Finished systemd-udev-trigger.service. Mar 18 08:54:01.217906 systemd[1]: Finished flatcar-tmpfiles.service. Mar 18 08:54:01.218603 systemd[1]: Finished systemd-journal-flush.service. 
Mar 18 08:54:01.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.220523 systemd[1]: Starting systemd-sysusers.service... Mar 18 08:54:01.222259 systemd[1]: Starting systemd-udev-settle.service... Mar 18 08:54:01.233546 udevadm[965]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 18 08:54:01.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.263862 systemd[1]: Finished systemd-sysusers.service. Mar 18 08:54:01.265670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 18 08:54:01.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.307108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 18 08:54:01.764249 systemd[1]: Finished systemd-hwdb-update.service. 
Mar 18 08:54:01.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.777989 kernel: kauditd_printk_skb: 108 callbacks suppressed Mar 18 08:54:01.778087 kernel: audit: type=1130 audit(1742288041.764:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.790000 audit: BPF prog-id=24 op=LOAD Mar 18 08:54:01.794551 kernel: audit: type=1334 audit(1742288041.790:148): prog-id=24 op=LOAD Mar 18 08:54:01.794000 audit: BPF prog-id=25 op=LOAD Mar 18 08:54:01.794000 audit: BPF prog-id=7 op=UNLOAD Mar 18 08:54:01.799277 systemd[1]: Starting systemd-udevd.service... Mar 18 08:54:01.802536 kernel: audit: type=1334 audit(1742288041.794:149): prog-id=25 op=LOAD Mar 18 08:54:01.802667 kernel: audit: type=1334 audit(1742288041.794:150): prog-id=7 op=UNLOAD Mar 18 08:54:01.802753 kernel: audit: type=1334 audit(1742288041.794:151): prog-id=8 op=UNLOAD Mar 18 08:54:01.794000 audit: BPF prog-id=8 op=UNLOAD Mar 18 08:54:01.842414 systemd-udevd[969]: Using default interface naming scheme 'v252'. Mar 18 08:54:01.899798 systemd[1]: Started systemd-udevd.service. Mar 18 08:54:01.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.912816 kernel: audit: type=1130 audit(1742288041.900:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:01.912791 systemd[1]: Starting systemd-networkd.service... 
Mar 18 08:54:01.901000 audit: BPF prog-id=26 op=LOAD Mar 18 08:54:01.918657 kernel: audit: type=1334 audit(1742288041.901:153): prog-id=26 op=LOAD Mar 18 08:54:01.929000 audit: BPF prog-id=27 op=LOAD Mar 18 08:54:01.934248 systemd[1]: Starting systemd-userdbd.service... Mar 18 08:54:01.934722 kernel: audit: type=1334 audit(1742288041.929:154): prog-id=27 op=LOAD Mar 18 08:54:01.929000 audit: BPF prog-id=28 op=LOAD Mar 18 08:54:01.938650 kernel: audit: type=1334 audit(1742288041.929:155): prog-id=28 op=LOAD Mar 18 08:54:01.929000 audit: BPF prog-id=29 op=LOAD Mar 18 08:54:01.943678 kernel: audit: type=1334 audit(1742288041.929:156): prog-id=29 op=LOAD Mar 18 08:54:01.979502 systemd[1]: Started systemd-userdbd.service. Mar 18 08:54:01.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.007931 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 18 08:54:02.051355 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 18 08:54:02.055599 kernel: ACPI: button: Power Button [PWRF] Mar 18 08:54:02.087620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 18 08:54:02.097000 audit[976]: AVC avc: denied { confidentiality } for pid=976 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 18 08:54:02.097000 audit[976]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559208e5c3a0 a1=338ac a2=7fb8edb24bc5 a3=5 items=110 ppid=969 pid=976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 18 08:54:02.097000 audit: CWD cwd="/" Mar 18 08:54:02.097000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=1 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=2 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=3 name=(null) inode=13187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=4 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=5 name=(null) inode=13188 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=6 name=(null) inode=13186 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=7 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=8 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=9 name=(null) inode=13190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=10 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=11 name=(null) inode=13191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=12 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=13 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=14 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=15 name=(null) inode=13193 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=16 name=(null) inode=13189 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=17 name=(null) inode=13194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=18 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=19 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=20 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=21 name=(null) inode=13196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=22 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=23 name=(null) inode=13197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=24 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=25 name=(null) inode=13198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=26 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=27 name=(null) inode=13199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=28 name=(null) inode=13195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=29 name=(null) inode=13200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=30 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=31 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=32 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=33 name=(null) inode=13202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=34 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=35 name=(null) inode=13203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=36 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=37 name=(null) inode=13204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=38 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=39 name=(null) inode=13205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=40 name=(null) inode=13201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=41 name=(null) inode=13206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=42 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=43 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=44 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=45 name=(null) inode=13208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=46 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=47 name=(null) inode=13209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=48 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=49 name=(null) inode=13210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=50 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=51 name=(null) inode=13211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 
08:54:02.097000 audit: PATH item=52 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=53 name=(null) inode=13212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=55 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=56 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=57 name=(null) inode=13214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=58 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=59 name=(null) inode=13215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=60 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=61 
name=(null) inode=13216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=62 name=(null) inode=13216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=63 name=(null) inode=13217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=64 name=(null) inode=13216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=65 name=(null) inode=13218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=66 name=(null) inode=13216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=67 name=(null) inode=13219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=68 name=(null) inode=13216 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=69 name=(null) inode=13220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=70 name=(null) inode=13216 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=71 name=(null) inode=13221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=72 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=73 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=74 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=75 name=(null) inode=13223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=76 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=77 name=(null) inode=13224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=78 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=79 name=(null) inode=13225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=80 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=81 name=(null) inode=13226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=82 name=(null) inode=13222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=83 name=(null) inode=13227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=84 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=85 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=86 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=87 name=(null) inode=13229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=88 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=89 name=(null) inode=13230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=90 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=91 name=(null) inode=13231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=92 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=93 name=(null) inode=13232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=94 name=(null) inode=13228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=95 name=(null) inode=13233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=96 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=97 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=98 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=99 name=(null) inode=13235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=100 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=101 name=(null) inode=13236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=102 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=103 name=(null) inode=13237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=104 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=105 name=(null) inode=13238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=106 name=(null) inode=13234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 
08:54:02.097000 audit: PATH item=107 name=(null) inode=13239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PATH item=109 name=(null) inode=13240 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 18 08:54:02.097000 audit: PROCTITLE proctitle="(udev-worker)" Mar 18 08:54:02.145619 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 18 08:54:02.155636 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 18 08:54:02.165595 kernel: mousedev: PS/2 mouse device common for all mice Mar 18 08:54:02.181291 systemd-networkd[982]: lo: Link UP Mar 18 08:54:02.181309 systemd-networkd[982]: lo: Gained carrier Mar 18 08:54:02.182055 systemd-networkd[982]: Enumeration completed Mar 18 08:54:02.182207 systemd[1]: Started systemd-networkd.service. Mar 18 08:54:02.183536 systemd-networkd[982]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 18 08:54:02.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.233357 systemd-networkd[982]: eth0: Link UP Mar 18 08:54:02.233381 systemd-networkd[982]: eth0: Gained carrier Mar 18 08:54:02.247765 systemd-networkd[982]: eth0: DHCPv4 address 172.24.4.149/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 18 08:54:02.256221 systemd[1]: Finished systemd-udev-settle.service. 
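The long run of audit PATH records above (item=0 through item=109, tracing a udev-worker creating tracefs entries) is a flat sequence of key=value fields. A minimal sketch of a generic field parser for such records — the function name is my own, and it assumes unquoted values contain no spaces, which holds for these PATH records:

```python
import re

def parse_audit_fields(record: str) -> dict:
    """Split an audit record body into key=value fields.

    Values are either double-quoted strings or bare tokens; bare
    tokens are assumed to contain no spaces (true for PATH records).
    """
    fields = {}
    for key, quoted, bare in re.findall(r'(\w+)=(?:"([^"]*)"|(\S+))', record):
        fields[key] = quoted if quoted else bare
    return fields

# One PATH record from the log above, reduced to its field portion.
line = ('audit: PATH item=9 name=(null) inode=13190 dev=00:0b mode=0100640 '
        'ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 '
        'nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0')
fields = parse_audit_fields(line)
print(fields['inode'], fields['nametype'], fields['mode'])  # prints: 13190 CREATE 0100640
```

With records in dict form it is straightforward to, say, group the CREATE entries by parent inode to reconstruct the directory tree the worker built under tracefs.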
Mar 18 08:54:02.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.260109 systemd[1]: Starting lvm2-activation-early.service... Mar 18 08:54:02.305091 lvm[1003]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 18 08:54:02.343731 systemd[1]: Finished lvm2-activation-early.service. Mar 18 08:54:02.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.345313 systemd[1]: Reached target cryptsetup.target. Mar 18 08:54:02.349014 systemd[1]: Starting lvm2-activation.service... Mar 18 08:54:02.358615 lvm[1004]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 18 08:54:02.397713 systemd[1]: Finished lvm2-activation.service. Mar 18 08:54:02.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.399059 systemd[1]: Reached target local-fs-pre.target. Mar 18 08:54:02.400241 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 18 08:54:02.400301 systemd[1]: Reached target local-fs.target. Mar 18 08:54:02.401484 systemd[1]: Reached target machines.target. Mar 18 08:54:02.405007 systemd[1]: Starting ldconfig.service... Mar 18 08:54:02.407317 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
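Each unit transition above is mirrored by an audit SERVICE_START (and later SERVICE_STOP) record carrying a `unit=` field inside the msg payload. A small sketch (names mine) that extracts those events, useful for building a unit activation timeline from a log like this:

```python
import re

def service_events(log_lines):
    """Yield (event, unit) pairs from audit SERVICE_START/SERVICE_STOP records."""
    pat = re.compile(r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+)")
    for line in log_lines:
        m = pat.search(line)
        if m:
            yield m.group(1), m.group(2)

# Two records shaped like those in the log above (payloads abbreviated).
sample = [
    "Mar 18 08:54:02.344000 audit[1]: SERVICE_START pid=1 uid=0 msg='unit=lvm2-activation-early comm=\"systemd\"'",
    "Mar 18 08:54:03.124000 audit[1]: SERVICE_STOP pid=1 uid=0 msg='unit=modprobe@dm_mod comm=\"systemd\"'",
]
for event, unit in service_events(sample):
    print(event, unit)
```

Pairing START/STOP events per unit would show, for example, that the modprobe@*.service instances here are oneshot units that stop immediately after running.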
Mar 18 08:54:02.407409 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:02.409553 systemd[1]: Starting systemd-boot-update.service... Mar 18 08:54:02.413768 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 18 08:54:02.417944 systemd[1]: Starting systemd-machine-id-commit.service... Mar 18 08:54:02.425394 systemd[1]: Starting systemd-sysext.service... Mar 18 08:54:02.445003 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1006 (bootctl) Mar 18 08:54:02.447464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 18 08:54:02.481034 systemd[1]: Unmounting usr-share-oem.mount... Mar 18 08:54:02.493767 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 18 08:54:02.494137 systemd[1]: Unmounted usr-share-oem.mount. Mar 18 08:54:02.509670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 18 08:54:02.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:02.540704 kernel: loop0: detected capacity change from 0 to 205544 Mar 18 08:54:02.932398 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 18 08:54:02.933677 systemd[1]: Finished systemd-machine-id-commit.service. Mar 18 08:54:02.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:02.979655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 18 08:54:03.010640 kernel: loop1: detected capacity change from 0 to 205544 Mar 18 08:54:03.052396 (sd-sysext)[1021]: Using extensions 'kubernetes'. Mar 18 08:54:03.054872 (sd-sysext)[1021]: Merged extensions into '/usr'. Mar 18 08:54:03.092545 systemd-fsck[1018]: fsck.fat 4.2 (2021-01-31) Mar 18 08:54:03.092545 systemd-fsck[1018]: /dev/vda1: 789 files, 119299/258078 clusters Mar 18 08:54:03.105531 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 18 08:54:03.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.108151 systemd[1]: Mounting boot.mount... Mar 18 08:54:03.108666 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.111003 systemd[1]: Mounting usr-share-oem.mount... Mar 18 08:54:03.111711 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.113073 systemd[1]: Starting modprobe@dm_mod.service... Mar 18 08:54:03.114687 systemd[1]: Starting modprobe@efi_pstore.service... Mar 18 08:54:03.117658 systemd[1]: Starting modprobe@loop.service... Mar 18 08:54:03.118402 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.118590 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.118720 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.122730 systemd[1]: Mounted usr-share-oem.mount. 
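The fsck.fat summary above reports usage as a used/total cluster ratio. A trivial sketch (function name mine) that turns such a summary line into a utilization fraction:

```python
import re

def fat_usage(fsck_line: str) -> float:
    """Return the fraction of FAT clusters in use, given a fsck.fat
    summary line such as '/dev/vda1: 789 files, 119299/258078 clusters'."""
    used, total = map(int, re.search(r"(\d+)/(\d+) clusters", fsck_line).groups())
    return used / total

print(round(fat_usage("/dev/vda1: 789 files, 119299/258078 clusters"), 3))  # prints: 0.462
```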
Mar 18 08:54:03.124510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 18 08:54:03.124793 systemd[1]: Finished modprobe@dm_mod.service. Mar 18 08:54:03.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.126391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 18 08:54:03.126619 systemd[1]: Finished modprobe@efi_pstore.service. Mar 18 08:54:03.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.128194 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 18 08:54:03.128758 systemd[1]: Finished modprobe@loop.service. Mar 18 08:54:03.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:03.132863 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 18 08:54:03.132917 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.134066 systemd[1]: Finished systemd-sysext.service. Mar 18 08:54:03.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.137185 systemd[1]: Mounted boot.mount. Mar 18 08:54:03.139757 systemd[1]: Starting ensure-sysext.service... Mar 18 08:54:03.141402 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 18 08:54:03.154903 systemd[1]: Reloading. Mar 18 08:54:03.169544 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 18 08:54:03.171209 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 18 08:54:03.174075 systemd-tmpfiles[1029]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 18 08:54:03.242247 /usr/lib/systemd/system-generators/torcx-generator[1048]: time="2025-03-18T08:54:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 18 08:54:03.242939 /usr/lib/systemd/system-generators/torcx-generator[1048]: time="2025-03-18T08:54:03Z" level=info msg="torcx already run" Mar 18 08:54:03.281773 systemd-networkd[982]: eth0: Gained IPv6LL Mar 18 08:54:03.381473 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 18 08:54:03.381770 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 18 08:54:03.403905 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 18 08:54:03.461000 audit: BPF prog-id=30 op=LOAD Mar 18 08:54:03.462000 audit: BPF prog-id=31 op=LOAD Mar 18 08:54:03.462000 audit: BPF prog-id=24 op=UNLOAD Mar 18 08:54:03.462000 audit: BPF prog-id=25 op=UNLOAD Mar 18 08:54:03.463000 audit: BPF prog-id=32 op=LOAD Mar 18 08:54:03.463000 audit: BPF prog-id=26 op=UNLOAD Mar 18 08:54:03.464000 audit: BPF prog-id=33 op=LOAD Mar 18 08:54:03.464000 audit: BPF prog-id=27 op=UNLOAD Mar 18 08:54:03.464000 audit: BPF prog-id=34 op=LOAD Mar 18 08:54:03.464000 audit: BPF prog-id=35 op=LOAD Mar 18 08:54:03.464000 audit: BPF prog-id=28 op=UNLOAD Mar 18 08:54:03.464000 audit: BPF prog-id=29 op=UNLOAD Mar 18 08:54:03.465000 audit: BPF prog-id=36 op=LOAD Mar 18 08:54:03.465000 audit: BPF prog-id=21 op=UNLOAD Mar 18 08:54:03.466000 audit: BPF prog-id=37 op=LOAD Mar 18 08:54:03.466000 audit: BPF prog-id=38 op=LOAD Mar 18 08:54:03.466000 audit: BPF prog-id=22 op=UNLOAD Mar 18 08:54:03.466000 audit: BPF prog-id=23 op=UNLOAD Mar 18 08:54:03.475600 systemd[1]: Finished systemd-boot-update.service. Mar 18 08:54:03.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.477385 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 18 08:54:03.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
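The burst of `BPF prog-id=N op=LOAD/UNLOAD` audit records above comes from the systemd reload replacing its per-unit BPF programs. A small sketch (names mine) that tallies these records to find which program ids remain loaded at the end of a log excerpt:

```python
import re

def live_bpf_progs(log_lines):
    """Track BPF program ids across audit 'BPF prog-id=N op=LOAD/UNLOAD'
    records; return the set of ids still loaded after the last record."""
    live = set()
    pat = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")
    for line in log_lines:
        for pid, op in pat.findall(line):
            (live.add if op == "LOAD" else live.discard)(int(pid))
    return live

# Abbreviated records shaped like those in the log above.
sample = [
    "audit: BPF prog-id=30 op=LOAD",
    "audit: BPF prog-id=24 op=UNLOAD",
    "audit: BPF prog-id=31 op=LOAD",
    "audit: BPF prog-id=30 op=UNLOAD",
]
print(sorted(live_bpf_progs(sample)))  # prints: [31]
```

Run over the full reload burst, a steadily growing live set (loads without matching unloads) would hint at a program leak; here every old id is unloaded as its replacement loads.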
terminal=? res=success' Mar 18 08:54:03.481445 systemd[1]: Starting audit-rules.service... Mar 18 08:54:03.483058 systemd[1]: Starting clean-ca-certificates.service... Mar 18 08:54:03.485382 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 18 08:54:03.488000 audit: BPF prog-id=39 op=LOAD Mar 18 08:54:03.492000 audit: BPF prog-id=40 op=LOAD Mar 18 08:54:03.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.504000 audit[1104]: SYSTEM_BOOT pid=1104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.491451 systemd[1]: Starting systemd-resolved.service... Mar 18 08:54:03.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.493742 systemd[1]: Starting systemd-timesyncd.service... Mar 18 08:54:03.497605 systemd[1]: Starting systemd-update-utmp.service... Mar 18 08:54:03.500515 systemd[1]: Finished clean-ca-certificates.service. Mar 18 08:54:03.510372 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 18 08:54:03.510666 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.513340 systemd[1]: Starting modprobe@dm_mod.service... Mar 18 08:54:03.515428 systemd[1]: Starting modprobe@efi_pstore.service... Mar 18 08:54:03.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.518268 systemd[1]: Starting modprobe@loop.service... Mar 18 08:54:03.542320 ldconfig[1005]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 18 08:54:03.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:03.519285 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.519869 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.520079 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 18 08:54:03.520262 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.522184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 18 08:54:03.522304 systemd[1]: Finished modprobe@dm_mod.service. Mar 18 08:54:03.524780 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 18 08:54:03.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.524903 systemd[1]: Finished modprobe@loop.service. Mar 18 08:54:03.527304 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.529155 systemd[1]: Finished systemd-update-utmp.service. Mar 18 08:54:03.531005 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.531239 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.533550 systemd[1]: Starting modprobe@dm_mod.service... Mar 18 08:54:03.536740 systemd[1]: Starting modprobe@loop.service... Mar 18 08:54:03.538764 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 18 08:54:03.538901 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.539035 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 18 08:54:03.539146 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.540096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 18 08:54:03.540237 systemd[1]: Finished modprobe@efi_pstore.service. Mar 18 08:54:03.541331 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 18 08:54:03.541438 systemd[1]: Finished modprobe@loop.service. Mar 18 08:54:03.543061 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 18 08:54:03.548238 systemd[1]: Finished ldconfig.service. Mar 18 08:54:03.549270 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 18 08:54:03.549387 systemd[1]: Finished modprobe@dm_mod.service. Mar 18 08:54:03.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.551743 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.552007 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Mar 18 08:54:03.553387 systemd[1]: Starting modprobe@drm.service... Mar 18 08:54:03.555703 systemd[1]: Starting modprobe@efi_pstore.service... Mar 18 08:54:03.558267 systemd[1]: Starting modprobe@loop.service... Mar 18 08:54:03.559916 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.560037 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.561387 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 18 08:54:03.562044 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 18 08:54:03.562181 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 18 08:54:03.564119 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 18 08:54:03.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.566056 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 18 08:54:03.566178 systemd[1]: Finished modprobe@drm.service. Mar 18 08:54:03.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:03.567063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 18 08:54:03.567173 systemd[1]: Finished modprobe@efi_pstore.service. Mar 18 08:54:03.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.569674 systemd[1]: Finished ensure-sysext.service. Mar 18 08:54:03.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.570743 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 18 08:54:03.572458 systemd[1]: Starting systemd-update-done.service... Mar 18 08:54:03.576437 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 18 08:54:03.576561 systemd[1]: Finished modprobe@loop.service. Mar 18 08:54:03.577148 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 18 08:54:03.580840 systemd[1]: Finished systemd-update-done.service. Mar 18 08:54:03.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.582433 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 18 08:54:03.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 18 08:54:03.600000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 18 08:54:03.600000 audit[1126]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd93913e90 a2=420 a3=0 items=0 ppid=1096 pid=1126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 18 08:54:03.600000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 18 08:54:03.601176 augenrules[1126]: No rules Mar 18 08:54:03.601234 systemd[1]: Finished audit-rules.service. Mar 18 08:54:03.627898 systemd[1]: Started systemd-timesyncd.service. Mar 18 08:54:03.628552 systemd[1]: Reached target time-set.target. Mar 18 08:54:03.631847 systemd-resolved[1100]: Positive Trust Anchors: Mar 18 08:54:03.631866 systemd-resolved[1100]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 18 08:54:03.631904 systemd-resolved[1100]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 18 08:54:03.638542 systemd-resolved[1100]: Using system hostname 'ci-3510-3-7-7-00419dcf52.novalocal'. Mar 18 08:54:03.639875 systemd[1]: Started systemd-resolved.service. Mar 18 08:54:03.640425 systemd[1]: Reached target network.target. Mar 18 08:54:03.640882 systemd[1]: Reached target network-online.target. Mar 18 08:54:03.641327 systemd[1]: Reached target nss-lookup.target. Mar 18 08:54:03.641795 systemd[1]: Reached target sysinit.target. Mar 18 08:54:03.642297 systemd[1]: Started motdgen.path. Mar 18 08:54:03.642753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 18 08:54:03.643437 systemd[1]: Started logrotate.timer. Mar 18 08:54:03.643965 systemd[1]: Started mdadm.timer. Mar 18 08:54:03.644422 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 18 08:54:03.644885 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 18 08:54:03.644915 systemd[1]: Reached target paths.target. Mar 18 08:54:03.645334 systemd[1]: Reached target timers.target. Mar 18 08:54:03.646011 systemd[1]: Listening on dbus.socket. Mar 18 08:54:03.647461 systemd[1]: Starting docker.socket... Mar 18 08:54:03.651068 systemd[1]: Listening on sshd.socket. 
Mar 18 08:54:03.651635 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.652037 systemd[1]: Listening on docker.socket. Mar 18 08:54:03.652552 systemd[1]: Reached target sockets.target. Mar 18 08:54:03.653016 systemd[1]: Reached target basic.target. Mar 18 08:54:03.653487 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.653518 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 18 08:54:03.654454 systemd[1]: Starting containerd.service... Mar 18 08:54:03.656723 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 18 08:54:03.658218 systemd[1]: Starting dbus.service... Mar 18 08:54:03.660077 systemd[1]: Starting enable-oem-cloudinit.service... Mar 18 08:54:03.661970 systemd[1]: Starting extend-filesystems.service... Mar 18 08:54:03.663030 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 18 08:54:03.667752 systemd[1]: Starting kubelet.service... Mar 18 08:54:03.673916 systemd[1]: Starting motdgen.service... Mar 18 08:54:03.675454 systemd[1]: Starting prepare-helm.service... Mar 18 08:54:03.677443 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 18 08:54:03.678600 jq[1139]: false Mar 18 08:54:03.683407 systemd[1]: Starting sshd-keygen.service... Mar 18 08:54:03.687218 systemd[1]: Starting systemd-logind.service... Mar 18 08:54:03.687949 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 18 08:54:03.688000 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Mar 18 08:54:03.688449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 18 08:54:03.689174 systemd[1]: Starting update-engine.service... Mar 18 08:54:03.690615 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 18 08:54:03.693092 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 18 08:54:03.694658 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 18 08:54:03.697175 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 18 08:54:03.697341 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 18 08:54:03.707358 jq[1150]: true Mar 18 08:54:03.727708 tar[1155]: linux-amd64/helm Mar 18 08:54:03.742888 jq[1164]: true Mar 18 08:54:03.760424 systemd[1]: motdgen.service: Deactivated successfully. Mar 18 08:54:03.760610 systemd[1]: Finished motdgen.service. Mar 18 08:54:03.765059 dbus-daemon[1136]: [system] SELinux support is enabled Mar 18 08:54:03.765452 systemd[1]: Started dbus.service. Mar 18 08:54:03.768353 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 18 08:54:03.768382 systemd[1]: Reached target system-config.target. Mar 18 08:54:03.768946 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 18 08:54:03.768967 systemd[1]: Reached target user-config.target. 
Mar 18 08:54:03.780268 extend-filesystems[1140]: Found loop1 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda1 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda2 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda3 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found usr Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda4 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda6 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda7 Mar 18 08:54:03.780268 extend-filesystems[1140]: Found vda9 Mar 18 08:54:03.780268 extend-filesystems[1140]: Checking size of /dev/vda9 Mar 18 08:54:03.814902 extend-filesystems[1140]: Resized partition /dev/vda9 Mar 18 08:54:03.830286 extend-filesystems[1191]: resize2fs 1.46.5 (30-Dec-2021) Mar 18 08:54:03.854791 env[1162]: time="2025-03-18T08:54:03.854719072Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 18 08:54:03.871949 update_engine[1149]: I0318 08:54:03.865528 1149 main.cc:92] Flatcar Update Engine starting Mar 18 08:54:03.876594 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Mar 18 08:54:03.880594 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Mar 18 08:54:03.893857 systemd[1]: Started update-engine.service. Mar 18 08:54:04.463313 env[1162]: time="2025-03-18T08:54:04.455657948Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 18 08:54:04.463313 env[1162]: time="2025-03-18T08:54:04.462729550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.463385 update_engine[1149]: I0318 08:54:03.894866 1149 update_check_scheduler.cc:74] Next update check in 8m59s Mar 18 08:54:03.896076 systemd[1]: Started locksmithd.service. Mar 18 08:54:04.451916 systemd-resolved[1100]: Clock change detected. Flushing caches. 
Mar 18 08:54:04.462999 systemd-logind[1148]: Watching system buttons on /dev/input/event1 (Power Button) Mar 18 08:54:04.463016 systemd-logind[1148]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 18 08:54:04.463027 systemd-timesyncd[1101]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org). Mar 18 08:54:04.463092 systemd-timesyncd[1101]: Initial clock synchronization to Tue 2025-03-18 08:54:04.451864 UTC. Mar 18 08:54:04.464620 extend-filesystems[1191]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 18 08:54:04.464620 extend-filesystems[1191]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 18 08:54:04.464620 extend-filesystems[1191]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Mar 18 08:54:04.478301 extend-filesystems[1140]: Resized filesystem in /dev/vda9 Mar 18 08:54:04.488091 bash[1189]: Updated "/home/core/.ssh/authorized_keys" Mar 18 08:54:04.465316 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.471247676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.471304022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474317794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474343242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474360614Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474373949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474512860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.474854360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.475322128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 18 08:54:04.488277 env[1162]: time="2025-03-18T08:54:04.475348718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 18 08:54:04.465474 systemd[1]: Finished extend-filesystems.service. 
Mar 18 08:54:04.488711 env[1162]: time="2025-03-18T08:54:04.475446291Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 18 08:54:04.488711 env[1162]: time="2025-03-18T08:54:04.475470326Z" level=info msg="metadata content store policy set" policy=shared Mar 18 08:54:04.467180 systemd-logind[1148]: New seat seat0. Mar 18 08:54:04.477459 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 18 08:54:04.480089 systemd[1]: Started systemd-logind.service. Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500641144Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500692029Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500708340Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500756800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500778872Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500796425Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500862008Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500878759Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500897605Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500913384Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500929705Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.500944342Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.501064327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 18 08:54:04.502711 env[1162]: time="2025-03-18T08:54:04.501183110Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501551731Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501580706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501596776Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501654564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501671075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501685031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501750394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501767296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501781532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501794827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501808924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501824192Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501952022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501970527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503291 env[1162]: time="2025-03-18T08:54:04.501985305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503627 env[1162]: time="2025-03-18T08:54:04.502001024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Mar 18 08:54:04.503627 env[1162]: time="2025-03-18T08:54:04.502018537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 18 08:54:04.503627 env[1162]: time="2025-03-18T08:54:04.502030940Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 18 08:54:04.503627 env[1162]: time="2025-03-18T08:54:04.502050056Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 18 08:54:04.503627 env[1162]: time="2025-03-18T08:54:04.502089009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 18 08:54:04.503735 env[1162]: time="2025-03-18T08:54:04.502348275Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 18 08:54:04.503735 env[1162]: time="2025-03-18T08:54:04.502415131Z" level=info msg="Connect containerd service" Mar 18 08:54:04.503735 env[1162]: time="2025-03-18T08:54:04.502445598Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504145807Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504257967Z" level=info msg="Start subscribing containerd event" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504306087Z" level=info msg="Start recovering state" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504373083Z" level=info msg="Start event monitor" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504392720Z" level=info msg="Start snapshots syncer" Mar 18 
08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504402538Z" level=info msg="Start cni network conf syncer for default" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504410633Z" level=info msg="Start streaming server" Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504778563Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504837173Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 18 08:54:04.508844 env[1162]: time="2025-03-18T08:54:04.504885905Z" level=info msg="containerd successfully booted in 0.115551s" Mar 18 08:54:04.505804 systemd[1]: Started containerd.service. Mar 18 08:54:05.157179 tar[1155]: linux-amd64/LICENSE Mar 18 08:54:05.157339 tar[1155]: linux-amd64/README.md Mar 18 08:54:05.161816 systemd[1]: Finished prepare-helm.service. Mar 18 08:54:05.191097 locksmithd[1195]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 18 08:54:05.756575 systemd[1]: Started kubelet.service. Mar 18 08:54:05.886594 sshd_keygen[1169]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 18 08:54:05.927635 systemd[1]: Finished sshd-keygen.service. Mar 18 08:54:05.929730 systemd[1]: Starting issuegen.service... Mar 18 08:54:05.936838 systemd[1]: issuegen.service: Deactivated successfully. Mar 18 08:54:05.936997 systemd[1]: Finished issuegen.service. Mar 18 08:54:05.938826 systemd[1]: Starting systemd-user-sessions.service... Mar 18 08:54:05.945878 systemd[1]: Finished systemd-user-sessions.service. Mar 18 08:54:05.947709 systemd[1]: Started getty@tty1.service. Mar 18 08:54:05.949291 systemd[1]: Started serial-getty@ttyS0.service. Mar 18 08:54:05.949920 systemd[1]: Reached target getty.target. Mar 18 08:54:06.671297 systemd[1]: Created slice system-sshd.slice. Mar 18 08:54:06.675983 systemd[1]: Started sshd@0-172.24.4.149:22-172.24.4.1:50430.service. 
Mar 18 08:54:06.722975 kubelet[1208]: E0318 08:54:06.722930 1208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 08:54:06.726237 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 08:54:06.726371 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 08:54:06.726737 systemd[1]: kubelet.service: Consumed 1.303s CPU time.
Mar 18 08:54:07.668430 sshd[1228]: Accepted publickey for core from 172.24.4.1 port 50430 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:07.674328 sshd[1228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:07.708993 systemd-logind[1148]: New session 1 of user core.
Mar 18 08:54:07.712590 systemd[1]: Created slice user-500.slice.
Mar 18 08:54:07.716359 systemd[1]: Starting user-runtime-dir@500.service...
Mar 18 08:54:07.751629 systemd[1]: Finished user-runtime-dir@500.service.
Mar 18 08:54:07.757190 systemd[1]: Starting user@500.service...
Mar 18 08:54:07.766750 (systemd)[1232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:07.885358 systemd[1232]: Queued start job for default target default.target.
Mar 18 08:54:07.886483 systemd[1232]: Reached target paths.target.
Mar 18 08:54:07.886586 systemd[1232]: Reached target sockets.target.
Mar 18 08:54:07.886664 systemd[1232]: Reached target timers.target.
Mar 18 08:54:07.886747 systemd[1232]: Reached target basic.target.
Mar 18 08:54:07.886863 systemd[1232]: Reached target default.target.
Mar 18 08:54:07.886989 systemd[1232]: Startup finished in 105ms.
Mar 18 08:54:07.887096 systemd[1]: Started user@500.service.
Mar 18 08:54:07.892060 systemd[1]: Started session-1.scope.
Mar 18 08:54:08.367897 systemd[1]: Started sshd@1-172.24.4.149:22-172.24.4.1:50444.service.
Mar 18 08:54:09.936773 sshd[1241]: Accepted publickey for core from 172.24.4.1 port 50444 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:09.940096 sshd[1241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:09.950637 systemd-logind[1148]: New session 2 of user core.
Mar 18 08:54:09.951393 systemd[1]: Started session-2.scope.
Mar 18 08:54:10.579845 sshd[1241]: pam_unix(sshd:session): session closed for user core
Mar 18 08:54:10.587781 systemd[1]: Started sshd@2-172.24.4.149:22-172.24.4.1:50448.service.
Mar 18 08:54:10.590509 systemd[1]: sshd@1-172.24.4.149:22-172.24.4.1:50444.service: Deactivated successfully.
Mar 18 08:54:10.593079 systemd[1]: session-2.scope: Deactivated successfully.
Mar 18 08:54:10.595679 systemd-logind[1148]: Session 2 logged out. Waiting for processes to exit.
Mar 18 08:54:10.598350 systemd-logind[1148]: Removed session 2.
Mar 18 08:54:11.339719 coreos-metadata[1135]: Mar 18 08:54:11.339 WARN failed to locate config-drive, using the metadata service API instead
Mar 18 08:54:11.450983 coreos-metadata[1135]: Mar 18 08:54:11.450 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 18 08:54:11.718817 sshd[1246]: Accepted publickey for core from 172.24.4.1 port 50448 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:11.721453 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:11.731298 systemd-logind[1148]: New session 3 of user core.
Mar 18 08:54:11.732002 systemd[1]: Started session-3.scope.
Mar 18 08:54:11.928828 coreos-metadata[1135]: Mar 18 08:54:11.928 INFO Fetch successful
Mar 18 08:54:11.928828 coreos-metadata[1135]: Mar 18 08:54:11.928 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 18 08:54:11.946754 coreos-metadata[1135]: Mar 18 08:54:11.946 INFO Fetch successful
Mar 18 08:54:11.951346 unknown[1135]: wrote ssh authorized keys file for user: core
Mar 18 08:54:11.995675 update-ssh-keys[1252]: Updated "/home/core/.ssh/authorized_keys"
Mar 18 08:54:11.997371 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Mar 18 08:54:11.998259 systemd[1]: Reached target multi-user.target.
Mar 18 08:54:12.002555 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 18 08:54:12.019437 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 18 08:54:12.019963 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 18 08:54:12.021252 systemd[1]: Startup finished in 914ms (kernel) + 8.640s (initrd) + 14.229s (userspace) = 23.785s.
Mar 18 08:54:12.359719 sshd[1246]: pam_unix(sshd:session): session closed for user core
Mar 18 08:54:12.366261 systemd-logind[1148]: Session 3 logged out. Waiting for processes to exit.
Mar 18 08:54:12.366516 systemd[1]: sshd@2-172.24.4.149:22-172.24.4.1:50448.service: Deactivated successfully.
Mar 18 08:54:12.367943 systemd[1]: session-3.scope: Deactivated successfully.
Mar 18 08:54:12.369652 systemd-logind[1148]: Removed session 3.
Mar 18 08:54:16.977981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 18 08:54:16.978473 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:16.978558 systemd[1]: kubelet.service: Consumed 1.303s CPU time.
Mar 18 08:54:16.981443 systemd[1]: Starting kubelet.service...
Mar 18 08:54:17.109669 systemd[1]: Started kubelet.service.
Mar 18 08:54:17.379632 kubelet[1260]: E0318 08:54:17.379392 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 08:54:17.386464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 08:54:17.386771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 08:54:22.372018 systemd[1]: Started sshd@3-172.24.4.149:22-172.24.4.1:58804.service.
Mar 18 08:54:23.543747 sshd[1267]: Accepted publickey for core from 172.24.4.1 port 58804 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:23.547047 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:23.556943 systemd-logind[1148]: New session 4 of user core.
Mar 18 08:54:23.557695 systemd[1]: Started session-4.scope.
Mar 18 08:54:24.335455 sshd[1267]: pam_unix(sshd:session): session closed for user core
Mar 18 08:54:24.340852 systemd[1]: Started sshd@4-172.24.4.149:22-172.24.4.1:38370.service.
Mar 18 08:54:24.345777 systemd[1]: sshd@3-172.24.4.149:22-172.24.4.1:58804.service: Deactivated successfully.
Mar 18 08:54:24.347335 systemd[1]: session-4.scope: Deactivated successfully.
Mar 18 08:54:24.350325 systemd-logind[1148]: Session 4 logged out. Waiting for processes to exit.
Mar 18 08:54:24.352649 systemd-logind[1148]: Removed session 4.
Mar 18 08:54:25.498420 sshd[1272]: Accepted publickey for core from 172.24.4.1 port 38370 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:25.501761 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:25.512012 systemd-logind[1148]: New session 5 of user core.
Mar 18 08:54:25.512749 systemd[1]: Started session-5.scope.
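At this point the log shows the pattern that repeats for the rest of the boot: systemd starts kubelet.service, kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (that file is normally written later, e.g. by kubeadm), and systemd schedules a restart. A minimal sketch (helper name and line patterns are ours, matched against the journald-style entries above) that tallies these start/failure cycles from captured log text:

```python
import re

# Patterns copied from the systemd entries in this log; escaping is ours.
START_RE = re.compile(r"systemd\[1\]: Started kubelet\.service\.")
FAIL_RE = re.compile(r"kubelet\.service: Main process exited, code=exited, status=1/FAILURE")

def crash_loop_stats(log_text: str) -> dict:
    """Count kubelet service starts and non-zero exits in journald-style text."""
    starts = len(START_RE.findall(log_text))
    failures = len(FAIL_RE.findall(log_text))
    # Two or more failures is a reasonable (assumed) threshold for "looping".
    return {"starts": starts, "failures": failures, "looping": failures >= 2}
```

Running this over the section above would show the restart counter climbing roughly every ten seconds until the config file appears.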
Mar 18 08:54:26.101901 sshd[1272]: pam_unix(sshd:session): session closed for user core
Mar 18 08:54:26.109574 systemd[1]: Started sshd@5-172.24.4.149:22-172.24.4.1:38374.service.
Mar 18 08:54:26.110755 systemd[1]: sshd@4-172.24.4.149:22-172.24.4.1:38370.service: Deactivated successfully.
Mar 18 08:54:26.113597 systemd[1]: session-5.scope: Deactivated successfully.
Mar 18 08:54:26.116043 systemd-logind[1148]: Session 5 logged out. Waiting for processes to exit.
Mar 18 08:54:26.119306 systemd-logind[1148]: Removed session 5.
Mar 18 08:54:27.301266 sshd[1278]: Accepted publickey for core from 172.24.4.1 port 38374 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:27.303916 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:27.314314 systemd-logind[1148]: New session 6 of user core.
Mar 18 08:54:27.314989 systemd[1]: Started session-6.scope.
Mar 18 08:54:27.461164 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 18 08:54:27.461726 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:27.464822 systemd[1]: Starting kubelet.service...
Mar 18 08:54:27.715772 systemd[1]: Started kubelet.service.
Mar 18 08:54:27.809489 kubelet[1286]: E0318 08:54:27.809387 1286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 08:54:27.812066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 08:54:27.812426 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 08:54:28.032082 sshd[1278]: pam_unix(sshd:session): session closed for user core
Mar 18 08:54:28.039752 systemd[1]: sshd@5-172.24.4.149:22-172.24.4.1:38374.service: Deactivated successfully.
Mar 18 08:54:28.041252 systemd[1]: session-6.scope: Deactivated successfully.
Mar 18 08:54:28.042658 systemd-logind[1148]: Session 6 logged out. Waiting for processes to exit.
Mar 18 08:54:28.044949 systemd[1]: Started sshd@6-172.24.4.149:22-172.24.4.1:38382.service.
Mar 18 08:54:28.048832 systemd-logind[1148]: Removed session 6.
Mar 18 08:54:29.173057 sshd[1295]: Accepted publickey for core from 172.24.4.1 port 38382 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4
Mar 18 08:54:29.175805 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 18 08:54:29.186790 systemd[1]: Started session-7.scope.
Mar 18 08:54:29.188197 systemd-logind[1148]: New session 7 of user core.
Mar 18 08:54:29.637874 sudo[1298]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 18 08:54:29.638462 sudo[1298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 18 08:54:29.690983 systemd[1]: Starting docker.service...
Mar 18 08:54:29.764667 env[1308]: time="2025-03-18T08:54:29.764564115Z" level=info msg="Starting up"
Mar 18 08:54:29.774677 env[1308]: time="2025-03-18T08:54:29.774617740Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 18 08:54:29.774677 env[1308]: time="2025-03-18T08:54:29.774670619Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 18 08:54:29.774852 env[1308]: time="2025-03-18T08:54:29.774720132Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 18 08:54:29.774852 env[1308]: time="2025-03-18T08:54:29.774749828Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 18 08:54:29.778234 env[1308]: time="2025-03-18T08:54:29.778207803Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 18 08:54:29.778310 env[1308]: time="2025-03-18T08:54:29.778296349Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 18 08:54:29.778387 env[1308]: time="2025-03-18T08:54:29.778369296Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 18 08:54:29.778452 env[1308]: time="2025-03-18T08:54:29.778438736Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 18 08:54:29.790695 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3793745718-merged.mount: Deactivated successfully.
Mar 18 08:54:29.850566 env[1308]: time="2025-03-18T08:54:29.850497823Z" level=info msg="Loading containers: start."
Mar 18 08:54:30.081156 kernel: Initializing XFRM netlink socket
Mar 18 08:54:30.161860 env[1308]: time="2025-03-18T08:54:30.161788148Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 18 08:54:30.315042 systemd-networkd[982]: docker0: Link UP
Mar 18 08:54:30.334562 env[1308]: time="2025-03-18T08:54:30.334452149Z" level=info msg="Loading containers: done."
Mar 18 08:54:30.366976 env[1308]: time="2025-03-18T08:54:30.366935272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 18 08:54:30.367369 env[1308]: time="2025-03-18T08:54:30.367349929Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 18 08:54:30.367553 env[1308]: time="2025-03-18T08:54:30.367538062Z" level=info msg="Daemon has completed initialization"
Mar 18 08:54:30.397948 systemd[1]: Started docker.service.
Mar 18 08:54:30.416219 env[1308]: time="2025-03-18T08:54:30.416086715Z" level=info msg="API listen on /run/docker.sock" Mar 18 08:54:32.216973 env[1162]: time="2025-03-18T08:54:32.216895291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 18 08:54:32.900762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858845841.mount: Deactivated successfully. Mar 18 08:54:35.518139 env[1162]: time="2025-03-18T08:54:35.517999271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:35.521248 env[1162]: time="2025-03-18T08:54:35.521188522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:35.524531 env[1162]: time="2025-03-18T08:54:35.524485215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:35.527529 env[1162]: time="2025-03-18T08:54:35.527471947Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:35.529075 env[1162]: time="2025-03-18T08:54:35.529003920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 18 08:54:35.531690 env[1162]: time="2025-03-18T08:54:35.531663318Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 18 08:54:37.962645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Mar 18 08:54:37.963166 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:37.966049 systemd[1]: Starting kubelet.service...
Mar 18 08:54:38.102055 systemd[1]: Started kubelet.service.
Mar 18 08:54:38.209037 kubelet[1435]: E0318 08:54:38.208973 1435 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 08:54:38.212035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 08:54:38.212325 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 08:54:38.299585 env[1162]: time="2025-03-18T08:54:38.299393892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:38.303294 env[1162]: time="2025-03-18T08:54:38.303224502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:38.308259 env[1162]: time="2025-03-18T08:54:38.308172097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:38.314869 env[1162]: time="2025-03-18T08:54:38.314745401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:38.323290 env[1162]: time="2025-03-18T08:54:38.323161829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 18 08:54:38.324622 env[1162]: time="2025-03-18T08:54:38.324556042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 18 08:54:40.304543 env[1162]: time="2025-03-18T08:54:40.301630968Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:40.307529 env[1162]: time="2025-03-18T08:54:40.306428161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:40.309181 env[1162]: time="2025-03-18T08:54:40.308859188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:40.313183 env[1162]: time="2025-03-18T08:54:40.313089940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:40.315556 env[1162]: time="2025-03-18T08:54:40.315487406Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 18 08:54:40.317407 env[1162]: time="2025-03-18T08:54:40.317352542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 18 08:54:41.833331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540842414.mount: Deactivated successfully.
Mar 18 08:54:42.908243 env[1162]: time="2025-03-18T08:54:42.908077105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:42.913678 env[1162]: time="2025-03-18T08:54:42.913584491Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:42.918968 env[1162]: time="2025-03-18T08:54:42.918882324Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:42.923505 env[1162]: time="2025-03-18T08:54:42.923405244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:42.926261 env[1162]: time="2025-03-18T08:54:42.924968675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 18 08:54:42.927617 env[1162]: time="2025-03-18T08:54:42.927566517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 18 08:54:43.554889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179312084.mount: Deactivated successfully.
Mar 18 08:54:45.191883 env[1162]: time="2025-03-18T08:54:45.191782694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.198298 env[1162]: time="2025-03-18T08:54:45.198231334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.201479 env[1162]: time="2025-03-18T08:54:45.201423540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.203583 env[1162]: time="2025-03-18T08:54:45.203539458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 18 08:54:45.205324 env[1162]: time="2025-03-18T08:54:45.204573586Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 18 08:54:45.208276 env[1162]: time="2025-03-18T08:54:45.208085050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.791751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330448762.mount: Deactivated successfully.
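The containerd entries in this stretch come in matched pairs: a PullImage start ("PullImage \"ref\"") and a completion that repeats the reference ("PullImage \"ref\" returns image reference ..."), each carrying an RFC 3339 timestamp in its time="..." field. A rough sketch (function names and parsing are ours, fitted to these exact line shapes) that pairs them up to estimate per-image pull time:

```python
import re
from datetime import datetime

# Matches both start and finish forms of the containerd PullImage entries above.
ENTRY_RE = re.compile(
    r'time="(?P<ts>[^"]+)" level=info msg="PullImage \\"(?P<ref>[^\\"]+)\\"'
    r'(?P<done> returns image reference)?')

def _parse(ts: str) -> datetime:
    """Parse a containerd timestamp, truncating nanoseconds to microseconds."""
    head, _, frac = ts.rstrip("Z").partition(".")
    micro = int((frac[:6] or "0").ljust(6, "0"))
    return datetime.strptime(head, "%Y-%m-%dT%H:%M:%S").replace(microsecond=micro)

def pull_durations(log_text: str) -> dict:
    """Map image reference -> seconds between PullImage start and completion."""
    starts, durations = {}, {}
    for m in ENTRY_RE.finditer(log_text):
        ts = _parse(m.group("ts"))
        if m.group("done"):
            if m.group("ref") in starts:
                durations[m.group("ref")] = (ts - starts[m.group("ref")]).total_seconds()
        else:
            starts[m.group("ref")] = ts
    return durations
```

Applied to the section above it would show, for example, the kube-apiserver pull taking roughly three seconds (08:54:32 to 08:54:35).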
Mar 18 08:54:45.806673 env[1162]: time="2025-03-18T08:54:45.806607251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.809981 env[1162]: time="2025-03-18T08:54:45.809930642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.813260 env[1162]: time="2025-03-18T08:54:45.813210501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.816309 env[1162]: time="2025-03-18T08:54:45.816228971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:45.817929 env[1162]: time="2025-03-18T08:54:45.817869928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 18 08:54:45.819238 env[1162]: time="2025-03-18T08:54:45.819188360Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 18 08:54:46.406594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917371713.mount: Deactivated successfully.
Mar 18 08:54:48.461179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 18 08:54:48.461572 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:48.464190 systemd[1]: Starting kubelet.service...
Mar 18 08:54:48.599958 systemd[1]: Started kubelet.service.
Mar 18 08:54:48.658190 kubelet[1444]: E0318 08:54:48.658106 1444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 18 08:54:48.659803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 18 08:54:48.659945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 18 08:54:49.243275 update_engine[1149]: I0318 08:54:49.243199 1149 update_attempter.cc:509] Updating boot flags...
Mar 18 08:54:50.747488 env[1162]: time="2025-03-18T08:54:50.747352191Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:50.751915 env[1162]: time="2025-03-18T08:54:50.751867568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:50.756424 env[1162]: time="2025-03-18T08:54:50.756375591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:50.760524 env[1162]: time="2025-03-18T08:54:50.760478083Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 18 08:54:50.763100 env[1162]: time="2025-03-18T08:54:50.763021733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Mar 18 08:54:54.593723 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:54.601866 systemd[1]: Starting kubelet.service...
Mar 18 08:54:54.658762 systemd[1]: Reloading.
Mar 18 08:54:54.783248 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2025-03-18T08:54:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 18 08:54:54.785210 /usr/lib/systemd/system-generators/torcx-generator[1507]: time="2025-03-18T08:54:54Z" level=info msg="torcx already run"
Mar 18 08:54:54.865643 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 18 08:54:54.865838 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 18 08:54:54.888841 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 18 08:54:54.986169 systemd[1]: Started kubelet.service.
Mar 18 08:54:54.988335 systemd[1]: Stopping kubelet.service...
Mar 18 08:54:54.988880 systemd[1]: kubelet.service: Deactivated successfully.
Mar 18 08:54:54.989062 systemd[1]: Stopped kubelet.service.
Mar 18 08:54:54.991062 systemd[1]: Starting kubelet.service...
Mar 18 08:54:55.072642 systemd[1]: Started kubelet.service.
Mar 18 08:54:55.424788 kubelet[1561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 08:54:55.424788 kubelet[1561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 08:54:55.424788 kubelet[1561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:54:55.425628 kubelet[1561]: I0318 08:54:55.424954 1561 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 08:54:56.508822 kubelet[1561]: I0318 08:54:56.508748 1561 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 18 08:54:56.508822 kubelet[1561]: I0318 08:54:56.508813 1561 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 08:54:56.509485 kubelet[1561]: I0318 08:54:56.509442 1561 server.go:929] "Client rotation is on, will bootstrap in background" Mar 18 08:54:56.559591 kubelet[1561]: E0318 08:54:56.559549 1561 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:56.560664 kubelet[1561]: I0318 08:54:56.560647 1561 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 18 08:54:56.576059 kubelet[1561]: E0318 08:54:56.575986 1561 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 18 08:54:56.576415 kubelet[1561]: I0318 08:54:56.576387 1561 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 18 08:54:56.588369 kubelet[1561]: I0318 08:54:56.588317 1561 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 18 08:54:56.588669 kubelet[1561]: I0318 08:54:56.588623 1561 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 08:54:56.589015 kubelet[1561]: I0318 08:54:56.588922 1561 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 08:54:56.589756 kubelet[1561]: I0318 08:54:56.589016 1561 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-7-00419dcf52.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":n
ull}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 08:54:56.589981 kubelet[1561]: I0318 08:54:56.589861 1561 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 08:54:56.589981 kubelet[1561]: I0318 08:54:56.589893 1561 container_manager_linux.go:300] "Creating device plugin manager" Mar 18 08:54:56.590298 kubelet[1561]: I0318 08:54:56.590258 1561 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:54:56.605067 kubelet[1561]: I0318 08:54:56.605013 1561 kubelet.go:408] "Attempting to sync node with API server" Mar 18 08:54:56.605404 kubelet[1561]: I0318 08:54:56.605375 1561 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 08:54:56.605665 kubelet[1561]: I0318 08:54:56.605638 1561 kubelet.go:314] "Adding apiserver pod source" Mar 18 08:54:56.605879 kubelet[1561]: I0318 08:54:56.605851 1561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 08:54:56.625099 kubelet[1561]: W0318 08:54:56.623923 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-7-00419dcf52.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:56.625099 kubelet[1561]: E0318 08:54:56.624072 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-7-00419dcf52.novalocal&limit=500&resourceVersion=0\": dial tcp 
172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:56.625099 kubelet[1561]: W0318 08:54:56.624857 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:56.625099 kubelet[1561]: E0318 08:54:56.624943 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:56.625942 kubelet[1561]: I0318 08:54:56.625899 1561 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 18 08:54:56.632941 kubelet[1561]: I0318 08:54:56.632904 1561 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 08:54:56.641307 kubelet[1561]: W0318 08:54:56.641272 1561 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
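The probe warning above notes that the Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ was missing and is being recreated. A minimal sketch of that ensure-directory behavior, as a hypothetical helper (not the kubelet's actual implementation), might look like:

```python
import os
import tempfile

def ensure_plugin_dir(path: str) -> bool:
    """Create the plugin directory (and any parents) if it is missing.

    Returns True when the directory had to be (re)created, mirroring the
    "does not exist. Recreating." log line above; False when it was
    already present. Hypothetical helper for illustration only.
    """
    if os.path.isdir(path):
        return False
    os.makedirs(path, mode=0o755, exist_ok=True)
    return True

if __name__ == "__main__":
    base = tempfile.mkdtemp()
    plugin_dir = os.path.join(base, "kubelet-plugins", "volume", "exec")
    print(ensure_plugin_dir(plugin_dir))  # first call recreates: True
    print(ensure_plugin_dir(plugin_dir))  # already present: False
```

The real probe additionally watches the directory for driver installations; only the recreate step is sketched here.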
Mar 18 08:54:56.646491 kubelet[1561]: I0318 08:54:56.646457 1561 server.go:1269] "Started kubelet" Mar 18 08:54:56.647312 kubelet[1561]: I0318 08:54:56.647205 1561 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 08:54:56.649546 kubelet[1561]: I0318 08:54:56.649499 1561 server.go:460] "Adding debug handlers to kubelet server" Mar 18 08:54:56.655442 kubelet[1561]: I0318 08:54:56.655343 1561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 08:54:56.655968 kubelet[1561]: I0318 08:54:56.655935 1561 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 08:54:56.659504 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 18 08:54:56.662629 kubelet[1561]: I0318 08:54:56.660942 1561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 08:54:56.663223 kubelet[1561]: E0318 08:54:56.656438 1561 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.149:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-7-00419dcf52.novalocal.182dd9caf8ffae32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-7-00419dcf52.novalocal,UID:ci-3510-3-7-7-00419dcf52.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-7-00419dcf52.novalocal,},FirstTimestamp:2025-03-18 08:54:56.64637701 +0000 UTC m=+1.568447612,LastTimestamp:2025-03-18 08:54:56.64637701 +0000 UTC m=+1.568447612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-7-00419dcf52.novalocal,}" Mar 18 08:54:56.664023 kubelet[1561]: I0318 
08:54:56.663961 1561 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 18 08:54:56.667410 kubelet[1561]: I0318 08:54:56.667357 1561 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 08:54:56.667956 kubelet[1561]: E0318 08:54:56.667899 1561 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-7-7-00419dcf52.novalocal\" not found" Mar 18 08:54:56.668999 kubelet[1561]: I0318 08:54:56.668943 1561 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 18 08:54:56.669169 kubelet[1561]: I0318 08:54:56.669107 1561 reconciler.go:26] "Reconciler: start to sync state" Mar 18 08:54:56.671197 kubelet[1561]: W0318 08:54:56.671046 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:56.671332 kubelet[1561]: E0318 08:54:56.671224 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:56.671417 kubelet[1561]: E0318 08:54:56.671366 1561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-7-00419dcf52.novalocal?timeout=10s\": dial tcp 172.24.4.149:6443: connect: connection refused" interval="200ms" Mar 18 08:54:56.672090 kubelet[1561]: I0318 08:54:56.671846 1561 factory.go:221] Registration of the systemd container factory successfully Mar 18 08:54:56.672090 kubelet[1561]: I0318 
08:54:56.672013 1561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 18 08:54:56.676367 kubelet[1561]: I0318 08:54:56.676323 1561 factory.go:221] Registration of the containerd container factory successfully Mar 18 08:54:56.694384 kubelet[1561]: E0318 08:54:56.694342 1561 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 18 08:54:56.704341 kubelet[1561]: I0318 08:54:56.704245 1561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 08:54:56.709018 kubelet[1561]: I0318 08:54:56.708993 1561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 08:54:56.709332 kubelet[1561]: I0318 08:54:56.709310 1561 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 08:54:56.709746 kubelet[1561]: I0318 08:54:56.709725 1561 kubelet.go:2321] "Starting kubelet main sync loop" Mar 18 08:54:56.710065 kubelet[1561]: E0318 08:54:56.710033 1561 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 08:54:56.713621 kubelet[1561]: I0318 08:54:56.713598 1561 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 18 08:54:56.713621 kubelet[1561]: I0318 08:54:56.713614 1561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 18 08:54:56.713621 kubelet[1561]: I0318 08:54:56.713632 1561 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:54:56.714197 kubelet[1561]: W0318 08:54:56.714109 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection 
refused Mar 18 08:54:56.714433 kubelet[1561]: E0318 08:54:56.714415 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:56.718453 kubelet[1561]: I0318 08:54:56.718405 1561 policy_none.go:49] "None policy: Start" Mar 18 08:54:56.719234 kubelet[1561]: I0318 08:54:56.719219 1561 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 08:54:56.719532 kubelet[1561]: I0318 08:54:56.719508 1561 state_mem.go:35] "Initializing new in-memory state store" Mar 18 08:54:56.733755 systemd[1]: Created slice kubepods.slice. Mar 18 08:54:56.738447 systemd[1]: Created slice kubepods-besteffort.slice. Mar 18 08:54:56.745140 systemd[1]: Created slice kubepods-burstable.slice. Mar 18 08:54:56.746929 kubelet[1561]: I0318 08:54:56.746894 1561 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 08:54:56.747062 kubelet[1561]: I0318 08:54:56.747036 1561 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 08:54:56.747099 kubelet[1561]: I0318 08:54:56.747059 1561 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 08:54:56.748104 kubelet[1561]: I0318 08:54:56.747656 1561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 08:54:56.749204 kubelet[1561]: E0318 08:54:56.749185 1561 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-7-00419dcf52.novalocal\" not found" Mar 18 08:54:56.827077 systemd[1]: Created slice kubepods-burstable-pod82cb68b656f2e528ffa6d990f9c83d62.slice. 
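The systemd entries above show the kubelet (with the systemd cgroup driver) creating the QoS-tier slices kubepods.slice, kubepods-besteffort.slice, and kubepods-burstable.slice, then a per-pod slice such as kubepods-burstable-pod82cb68b656f2e528ffa6d990f9c83d62.slice. A sketch of that naming scheme, under the assumption that guaranteed pods sit directly under kubepods.slice while burstable and best-effort pods get a QoS sub-slice (the real kubelet also escapes characters systemd disallows, which is omitted here):

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Compose a systemd slice name like those in the log lines above.

    qos_class is one of "guaranteed", "burstable", "besteffort".
    Character escaping done by the real cgroup driver is omitted.
    """
    if qos_class == "guaranteed":
        return f"kubepods-pod{pod_uid}.slice"
    return f"kubepods-{qos_class}-pod{pod_uid}.slice"

# Reproduces the slice created for the kube-apiserver static pod above:
print(pod_slice_name("burstable", "82cb68b656f2e528ffa6d990f9c83d62"))
```

In systemd's naming convention the dashes encode nesting, so this slice lands under kubepods.slice/kubepods-burstable.slice in the cgroup hierarchy.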
Mar 18 08:54:56.842892 systemd[1]: Created slice kubepods-burstable-pod598218f03fc7ad8bbb70a1bfbf03f074.slice. Mar 18 08:54:56.854752 kubelet[1561]: I0318 08:54:56.853928 1561 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.855043 kubelet[1561]: E0318 08:54:56.855015 1561 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.149:6443/api/v1/nodes\": dial tcp 172.24.4.149:6443: connect: connection refused" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.855802 systemd[1]: Created slice kubepods-burstable-pod7c1331696c6d62f5bb8750e491d2e678.slice. Mar 18 08:54:56.870194 kubelet[1561]: I0318 08:54:56.870166 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.870463 kubelet[1561]: I0318 08:54:56.870413 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.870535 kubelet[1561]: I0318 08:54:56.870497 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.870591 kubelet[1561]: I0318 08:54:56.870561 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.870697 kubelet[1561]: I0318 08:54:56.870672 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.870945 kubelet[1561]: I0318 08:54:56.870869 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.871048 kubelet[1561]: I0318 08:54:56.870977 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.871177 kubelet[1561]: I0318 08:54:56.871081 1561 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.871278 kubelet[1561]: I0318 08:54:56.871243 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c1331696c6d62f5bb8750e491d2e678-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"7c1331696c6d62f5bb8750e491d2e678\") " pod="kube-system/kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:56.872427 kubelet[1561]: E0318 08:54:56.872326 1561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-7-00419dcf52.novalocal?timeout=10s\": dial tcp 172.24.4.149:6443: connect: connection refused" interval="400ms" Mar 18 08:54:57.059466 kubelet[1561]: I0318 08:54:57.059418 1561 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:57.060434 kubelet[1561]: E0318 08:54:57.060384 1561 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.149:6443/api/v1/nodes\": dial tcp 172.24.4.149:6443: connect: connection refused" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:57.143248 env[1162]: time="2025-03-18T08:54:57.140861002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal,Uid:82cb68b656f2e528ffa6d990f9c83d62,Namespace:kube-system,Attempt:0,}" Mar 18 08:54:57.150201 env[1162]: time="2025-03-18T08:54:57.150074881Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal,Uid:598218f03fc7ad8bbb70a1bfbf03f074,Namespace:kube-system,Attempt:0,}" Mar 18 08:54:57.162707 env[1162]: time="2025-03-18T08:54:57.162595100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal,Uid:7c1331696c6d62f5bb8750e491d2e678,Namespace:kube-system,Attempt:0,}" Mar 18 08:54:57.274168 kubelet[1561]: E0318 08:54:57.274013 1561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-7-00419dcf52.novalocal?timeout=10s\": dial tcp 172.24.4.149:6443: connect: connection refused" interval="800ms" Mar 18 08:54:57.449523 kubelet[1561]: W0318 08:54:57.449252 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-7-00419dcf52.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:57.449523 kubelet[1561]: E0318 08:54:57.449407 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-7-00419dcf52.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:57.463541 kubelet[1561]: I0318 08:54:57.463451 1561 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:54:57.464200 kubelet[1561]: E0318 08:54:57.464066 1561 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.149:6443/api/v1/nodes\": dial tcp 172.24.4.149:6443: connect: connection refused" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 
08:54:57.619927 kubelet[1561]: W0318 08:54:57.619774 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:57.619927 kubelet[1561]: E0318 08:54:57.619923 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:57.672758 kubelet[1561]: W0318 08:54:57.672576 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:57.672758 kubelet[1561]: E0318 08:54:57.672697 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:57.740393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082309853.mount: Deactivated successfully. 
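The repeated "Failed to ensure lease exists, will retry" entries double their retry interval on each failure: 200ms, then 400ms, then 800ms above, and later 1.6s. That progression is plain exponential backoff; a sketch of the pattern (the cap value is illustrative, not taken from the kubelet):

```python
def backoff_intervals(initial: float, factor: float, limit: int, cap: float):
    """Yield retry intervals that grow geometrically up to a cap.

    initial=0.2 and factor=2.0 reproduce the 200ms -> 400ms -> 800ms ->
    1.6s progression in the lease-controller log lines; cap is an
    assumed illustrative ceiling.
    """
    interval = initial
    for _ in range(limit):
        yield min(interval, cap)
        interval *= factor

print(list(backoff_intervals(0.2, 2.0, 4, 7.0)))  # [0.2, 0.4, 0.8, 1.6]
```

Backoff like this keeps a node from hammering an API server that is not yet listening, which is exactly the situation in this boot sequence: the kubelet starts before the kube-apiserver static pod it is about to launch.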
Mar 18 08:54:57.765325 env[1162]: time="2025-03-18T08:54:57.765176418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.770365 env[1162]: time="2025-03-18T08:54:57.770315955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.773980 env[1162]: time="2025-03-18T08:54:57.773930333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.778102 env[1162]: time="2025-03-18T08:54:57.777978294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.782347 env[1162]: time="2025-03-18T08:54:57.781445386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.789646 env[1162]: time="2025-03-18T08:54:57.789573389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.797530 env[1162]: time="2025-03-18T08:54:57.797460549Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.806104 env[1162]: time="2025-03-18T08:54:57.806044496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.808417 env[1162]: time="2025-03-18T08:54:57.808361701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.810047 kubelet[1561]: W0318 08:54:57.809914 1561 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.149:6443: connect: connection refused Mar 18 08:54:57.810245 kubelet[1561]: E0318 08:54:57.810076 1561 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.149:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:54:57.810837 env[1162]: time="2025-03-18T08:54:57.810779966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.812967 env[1162]: time="2025-03-18T08:54:57.812902817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.815208 env[1162]: time="2025-03-18T08:54:57.815092023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:54:57.846282 env[1162]: time="2025-03-18T08:54:57.846169701Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:54:57.846495 env[1162]: time="2025-03-18T08:54:57.846277513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:54:57.846495 env[1162]: time="2025-03-18T08:54:57.846306988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:54:57.847622 env[1162]: time="2025-03-18T08:54:57.846918626Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c pid=1598 runtime=io.containerd.runc.v2 Mar 18 08:54:57.879451 systemd[1]: Started cri-containerd-4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c.scope. Mar 18 08:54:57.910163 env[1162]: time="2025-03-18T08:54:57.910077615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:54:57.910407 env[1162]: time="2025-03-18T08:54:57.910383098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:54:57.910514 env[1162]: time="2025-03-18T08:54:57.910491902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:54:57.910944 env[1162]: time="2025-03-18T08:54:57.910886753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:54:57.911053 env[1162]: time="2025-03-18T08:54:57.911031825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:54:57.911174 env[1162]: time="2025-03-18T08:54:57.911152151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:54:57.911391 env[1162]: time="2025-03-18T08:54:57.911366402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/281fa9202635f07b698c36a29617e50b209cd053eda8dd8b7692d928a3758587 pid=1637 runtime=io.containerd.runc.v2 Mar 18 08:54:57.911575 env[1162]: time="2025-03-18T08:54:57.911551089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86403357652030b5df97cace05e000d167cd9dc52b97c455f8b558eee94f8616 pid=1638 runtime=io.containerd.runc.v2 Mar 18 08:54:57.930417 systemd[1]: Started cri-containerd-86403357652030b5df97cace05e000d167cd9dc52b97c455f8b558eee94f8616.scope. Mar 18 08:54:57.947490 systemd[1]: Started cri-containerd-281fa9202635f07b698c36a29617e50b209cd053eda8dd8b7692d928a3758587.scope. 
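Each "starting signal loop" entry above carries a containerd runtime-v2 task path of the form /run/containerd/io.containerd.runtime.v2.task/&lt;namespace&gt;/&lt;sandbox-id&gt; (namespace k8s.io here). When triaging logs like these, a small hypothetical parser can pull out the namespace and sandbox ID for cross-referencing against the RunPodSandbox entries:

```python
def parse_shim_task_path(path: str):
    """Split a containerd runtime-v2 task path into (namespace, task_id).

    Layout observed in the log:
    /run/containerd/io.containerd.runtime.v2.task/k8s.io/<64-hex-id>
    Hypothetical log-triage helper, not a containerd API.
    """
    parts = path.rstrip("/").split("/")
    if "io.containerd.runtime.v2.task" not in parts or len(parts) < 3:
        raise ValueError(f"not a runtime v2 task path: {path}")
    return parts[-2], parts[-1]

ns, task_id = parse_shim_task_path(
    "/run/containerd/io.containerd.runtime.v2.task/k8s.io/"
    "4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c"
)
print(ns, task_id[:12])
```

The IDs recovered this way match the systemd scope units started next (cri-containerd-&lt;id&gt;.scope), which is how the shim processes tie back to the cgroup tree.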
Mar 18 08:54:57.961910 env[1162]: time="2025-03-18T08:54:57.961862305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal,Uid:7c1331696c6d62f5bb8750e491d2e678,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c\"" Mar 18 08:54:57.965349 env[1162]: time="2025-03-18T08:54:57.965310852Z" level=info msg="CreateContainer within sandbox \"4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 18 08:54:57.992499 env[1162]: time="2025-03-18T08:54:57.992390653Z" level=info msg="CreateContainer within sandbox \"4e9f709916a3faa23cc95c62b7c316da0a636dac2191eea614175b9c3dedd57c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e23f00ceb287df5c923e99b7fdc7940e61226b64c55102031bc3c5457e3ad47\"" Mar 18 08:54:57.993370 env[1162]: time="2025-03-18T08:54:57.993348630Z" level=info msg="StartContainer for \"9e23f00ceb287df5c923e99b7fdc7940e61226b64c55102031bc3c5457e3ad47\"" Mar 18 08:54:58.002901 env[1162]: time="2025-03-18T08:54:58.002856630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal,Uid:598218f03fc7ad8bbb70a1bfbf03f074,Namespace:kube-system,Attempt:0,} returns sandbox id \"86403357652030b5df97cace05e000d167cd9dc52b97c455f8b558eee94f8616\"" Mar 18 08:54:58.005295 env[1162]: time="2025-03-18T08:54:58.005267371Z" level=info msg="CreateContainer within sandbox \"86403357652030b5df97cace05e000d167cd9dc52b97c455f8b558eee94f8616\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 18 08:54:58.020205 systemd[1]: Started cri-containerd-9e23f00ceb287df5c923e99b7fdc7940e61226b64c55102031bc3c5457e3ad47.scope. 
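The entries above trace the CRI pod lifecycle in order: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox and returns a container ID, and StartContainer then runs it. A sketch of that ordering using a stub client (illustrative only, not the real CRI gRPC bindings):

```python
class StubCRIClient:
    """Records the CRI call sequence shown in the log above. Stub only."""

    def __init__(self):
        self.calls = []

    def run_pod_sandbox(self, pod_name):
        self.calls.append("RunPodSandbox")
        return f"sandbox-for-{pod_name}"

    def create_container(self, sandbox_id, container_name):
        self.calls.append("CreateContainer")
        return f"container-{container_name}"

    def start_container(self, container_id):
        self.calls.append("StartContainer")

def launch_static_pod(cri, pod_name, container_name):
    # Same three-step order as the containerd entries above.
    sandbox_id = cri.run_pod_sandbox(pod_name)
    container_id = cri.create_container(sandbox_id, container_name)
    cri.start_container(container_id)
    return sandbox_id, container_id
```

Note the steps for the three control-plane pods interleave in the log because each pod's lifecycle runs concurrently, but within any one pod the order is fixed.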
Mar 18 08:54:58.038757 env[1162]: time="2025-03-18T08:54:58.038685289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal,Uid:82cb68b656f2e528ffa6d990f9c83d62,Namespace:kube-system,Attempt:0,} returns sandbox id \"281fa9202635f07b698c36a29617e50b209cd053eda8dd8b7692d928a3758587\"" Mar 18 08:54:58.043047 env[1162]: time="2025-03-18T08:54:58.043008266Z" level=info msg="CreateContainer within sandbox \"86403357652030b5df97cace05e000d167cd9dc52b97c455f8b558eee94f8616\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d8bcd95c2a8f9e11b36622f83e95425ef5a1661915d047218bef052db013219\"" Mar 18 08:54:58.044995 env[1162]: time="2025-03-18T08:54:58.044955608Z" level=info msg="CreateContainer within sandbox \"281fa9202635f07b698c36a29617e50b209cd053eda8dd8b7692d928a3758587\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 18 08:54:58.045444 env[1162]: time="2025-03-18T08:54:58.045414407Z" level=info msg="StartContainer for \"5d8bcd95c2a8f9e11b36622f83e95425ef5a1661915d047218bef052db013219\"" Mar 18 08:54:58.064380 env[1162]: time="2025-03-18T08:54:58.064328717Z" level=info msg="CreateContainer within sandbox \"281fa9202635f07b698c36a29617e50b209cd053eda8dd8b7692d928a3758587\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"170fe08909fa6ddcff350ecfc92b6fe37c9e0458eacb34b3632fd197312531ef\"" Mar 18 08:54:58.065059 env[1162]: time="2025-03-18T08:54:58.065034992Z" level=info msg="StartContainer for \"170fe08909fa6ddcff350ecfc92b6fe37c9e0458eacb34b3632fd197312531ef\"" Mar 18 08:54:58.073033 systemd[1]: Started cri-containerd-5d8bcd95c2a8f9e11b36622f83e95425ef5a1661915d047218bef052db013219.scope. 
Mar 18 08:54:58.075441 kubelet[1561]: E0318 08:54:58.075362 1561 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-7-00419dcf52.novalocal?timeout=10s\": dial tcp 172.24.4.149:6443: connect: connection refused" interval="1.6s" Mar 18 08:54:58.096835 systemd[1]: Started cri-containerd-170fe08909fa6ddcff350ecfc92b6fe37c9e0458eacb34b3632fd197312531ef.scope. Mar 18 08:54:58.129692 env[1162]: time="2025-03-18T08:54:58.129643451Z" level=info msg="StartContainer for \"9e23f00ceb287df5c923e99b7fdc7940e61226b64c55102031bc3c5457e3ad47\" returns successfully" Mar 18 08:54:58.172016 env[1162]: time="2025-03-18T08:54:58.171619227Z" level=info msg="StartContainer for \"170fe08909fa6ddcff350ecfc92b6fe37c9e0458eacb34b3632fd197312531ef\" returns successfully" Mar 18 08:54:58.217211 env[1162]: time="2025-03-18T08:54:58.217150250Z" level=info msg="StartContainer for \"5d8bcd95c2a8f9e11b36622f83e95425ef5a1661915d047218bef052db013219\" returns successfully" Mar 18 08:54:58.275153 kubelet[1561]: I0318 08:54:58.271631 1561 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:00.577344 kubelet[1561]: E0318 08:55:00.577305 1561 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-7-7-00419dcf52.novalocal\" not found" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:00.631717 kubelet[1561]: I0318 08:55:00.631684 1561 apiserver.go:52] "Watching apiserver" Mar 18 08:55:00.669713 kubelet[1561]: I0318 08:55:00.669669 1561 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 18 08:55:00.674198 kubelet[1561]: I0318 08:55:00.674167 1561 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:00.776947 kubelet[1561]: E0318 08:55:00.776887 1561 kubelet.go:1915] 
"Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:02.990972 systemd[1]: Reloading. Mar 18 08:55:03.146190 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2025-03-18T08:55:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 18 08:55:03.147172 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2025-03-18T08:55:03Z" level=info msg="torcx already run" Mar 18 08:55:03.252711 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 18 08:55:03.252727 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 18 08:55:03.278964 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 18 08:55:03.396830 systemd[1]: Stopping kubelet.service... Mar 18 08:55:03.416570 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 08:55:03.416745 systemd[1]: Stopped kubelet.service. Mar 18 08:55:03.416815 systemd[1]: kubelet.service: Consumed 1.750s CPU time. Mar 18 08:55:03.418452 systemd[1]: Starting kubelet.service... Mar 18 08:55:03.505840 systemd[1]: Started kubelet.service. Mar 18 08:55:03.562791 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:55:03.563138 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 08:55:03.563193 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:55:03.563373 kubelet[1897]: I0318 08:55:03.563349 1897 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 08:55:03.570361 kubelet[1897]: I0318 08:55:03.570340 1897 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 18 08:55:03.570463 kubelet[1897]: I0318 08:55:03.570453 1897 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 08:55:03.570751 kubelet[1897]: I0318 08:55:03.570737 1897 server.go:929] "Client rotation is on, will bootstrap in background" Mar 18 08:55:03.572379 kubelet[1897]: I0318 08:55:03.572363 1897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 18 08:55:03.574747 kubelet[1897]: I0318 08:55:03.574716 1897 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 18 08:55:03.578569 kubelet[1897]: E0318 08:55:03.578538 1897 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 18 08:55:03.578569 kubelet[1897]: I0318 08:55:03.578566 1897 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Mar 18 08:55:03.585960 kubelet[1897]: I0318 08:55:03.585930 1897 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 18 08:55:03.586097 kubelet[1897]: I0318 08:55:03.586078 1897 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 08:55:03.586244 kubelet[1897]: I0318 08:55:03.586213 1897 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 08:55:03.586429 kubelet[1897]: I0318 08:55:03.586243 1897 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-7-00419dcf52.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod"
:10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 08:55:03.586545 kubelet[1897]: I0318 08:55:03.586441 1897 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 08:55:03.586545 kubelet[1897]: I0318 08:55:03.586453 1897 container_manager_linux.go:300] "Creating device plugin manager" Mar 18 08:55:03.586545 kubelet[1897]: I0318 08:55:03.586496 1897 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:55:03.586663 kubelet[1897]: I0318 08:55:03.586601 1897 kubelet.go:408] "Attempting to sync node with API server" Mar 18 08:55:03.586663 kubelet[1897]: I0318 08:55:03.586613 1897 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 08:55:03.586663 kubelet[1897]: I0318 08:55:03.586644 1897 kubelet.go:314] "Adding apiserver pod source" Mar 18 08:55:03.586663 kubelet[1897]: I0318 08:55:03.586656 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 08:55:03.587610 kubelet[1897]: I0318 08:55:03.587586 1897 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 18 08:55:03.588187 kubelet[1897]: I0318 08:55:03.588174 1897 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 08:55:03.588734 kubelet[1897]: I0318 08:55:03.588721 1897 server.go:1269] "Started kubelet" Mar 18 08:55:03.591135 kubelet[1897]: I0318 08:55:03.591107 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 08:55:03.598647 kubelet[1897]: I0318 08:55:03.595101 1897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 08:55:03.602790 kubelet[1897]: I0318 08:55:03.601510 1897 server.go:460] "Adding debug handlers to kubelet server" Mar 18 
08:55:03.602790 kubelet[1897]: I0318 08:55:03.602365 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 08:55:03.602790 kubelet[1897]: I0318 08:55:03.602531 1897 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 08:55:03.602790 kubelet[1897]: I0318 08:55:03.602742 1897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 18 08:55:03.604807 kubelet[1897]: I0318 08:55:03.604790 1897 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 08:55:03.605185 kubelet[1897]: E0318 08:55:03.605168 1897 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-7-7-00419dcf52.novalocal\" not found" Mar 18 08:55:03.605530 kubelet[1897]: I0318 08:55:03.605517 1897 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 18 08:55:03.605750 kubelet[1897]: I0318 08:55:03.605738 1897 reconciler.go:26] "Reconciler: start to sync state" Mar 18 08:55:03.615748 kubelet[1897]: I0318 08:55:03.614930 1897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 08:55:03.616928 kubelet[1897]: I0318 08:55:03.616912 1897 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 18 08:55:03.617036 kubelet[1897]: I0318 08:55:03.617025 1897 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 08:55:03.617134 kubelet[1897]: I0318 08:55:03.617103 1897 kubelet.go:2321] "Starting kubelet main sync loop" Mar 18 08:55:03.617232 kubelet[1897]: E0318 08:55:03.617215 1897 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 08:55:03.630724 kubelet[1897]: I0318 08:55:03.630690 1897 factory.go:221] Registration of the systemd container factory successfully Mar 18 08:55:03.630976 kubelet[1897]: I0318 08:55:03.630957 1897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 18 08:55:03.633846 kubelet[1897]: I0318 08:55:03.633831 1897 factory.go:221] Registration of the containerd container factory successfully Mar 18 08:55:03.637040 kubelet[1897]: E0318 08:55:03.637013 1897 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 18 08:55:03.684782 kubelet[1897]: I0318 08:55:03.684762 1897 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 18 08:55:03.684958 kubelet[1897]: I0318 08:55:03.684946 1897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 18 08:55:03.685045 kubelet[1897]: I0318 08:55:03.685034 1897 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:55:03.685327 kubelet[1897]: I0318 08:55:03.685305 1897 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 18 08:55:03.685420 kubelet[1897]: I0318 08:55:03.685394 1897 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 18 08:55:03.685485 kubelet[1897]: I0318 08:55:03.685476 1897 policy_none.go:49] "None policy: Start" Mar 18 08:55:03.686628 kubelet[1897]: I0318 08:55:03.686600 1897 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 08:55:03.686685 kubelet[1897]: I0318 08:55:03.686641 1897 state_mem.go:35] "Initializing new in-memory state store" Mar 18 08:55:03.686858 kubelet[1897]: I0318 08:55:03.686833 1897 state_mem.go:75] "Updated machine memory state" Mar 18 08:55:03.690956 kubelet[1897]: I0318 08:55:03.690935 1897 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 08:55:03.691104 kubelet[1897]: I0318 08:55:03.691086 1897 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 08:55:03.691206 kubelet[1897]: I0318 08:55:03.691107 1897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 08:55:03.691642 kubelet[1897]: I0318 08:55:03.691628 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 08:55:03.735667 kubelet[1897]: W0318 08:55:03.734046 1897 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Mar 18 08:55:03.735667 kubelet[1897]: W0318 08:55:03.734314 1897 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 08:55:03.736602 kubelet[1897]: W0318 08:55:03.736584 1897 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 08:55:03.797335 kubelet[1897]: I0318 08:55:03.797193 1897 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.807332 kubelet[1897]: I0318 08:55:03.807280 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.807726 kubelet[1897]: I0318 08:55:03.807678 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.807938 kubelet[1897]: I0318 08:55:03.807903 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.808210 
kubelet[1897]: I0318 08:55:03.808172 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.808448 kubelet[1897]: I0318 08:55:03.808410 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.808724 kubelet[1897]: I0318 08:55:03.808688 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c1331696c6d62f5bb8750e491d2e678-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"7c1331696c6d62f5bb8750e491d2e678\") " pod="kube-system/kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.809005 kubelet[1897]: I0318 08:55:03.808969 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82cb68b656f2e528ffa6d990f9c83d62-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"82cb68b656f2e528ffa6d990f9c83d62\") " pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.809657 kubelet[1897]: I0318 08:55:03.809619 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.809907 kubelet[1897]: I0318 08:55:03.809871 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/598218f03fc7ad8bbb70a1bfbf03f074-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal\" (UID: \"598218f03fc7ad8bbb70a1bfbf03f074\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.810058 kubelet[1897]: I0318 08:55:03.809524 1897 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.810318 kubelet[1897]: I0318 08:55:03.810293 1897 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:03.979058 sudo[1927]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 18 08:55:03.979703 sudo[1927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 18 08:55:04.593767 kubelet[1897]: I0318 08:55:04.593723 1897 apiserver.go:52] "Watching apiserver" Mar 18 08:55:04.614783 kubelet[1897]: I0318 08:55:04.614733 1897 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 18 08:55:04.672141 sudo[1927]: pam_unix(sudo:session): session closed for user root Mar 18 08:55:04.685191 kubelet[1897]: W0318 08:55:04.685087 1897 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 08:55:04.685552 kubelet[1897]: E0318 08:55:04.685514 1897 kubelet.go:1915] "Failed creating a mirror pod for" err="pods 
\"kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:04.706431 kubelet[1897]: W0318 08:55:04.706385 1897 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 18 08:55:04.706764 kubelet[1897]: E0318 08:55:04.706492 1897 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" Mar 18 08:55:04.710517 kubelet[1897]: I0318 08:55:04.710426 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-7-00419dcf52.novalocal" podStartSLOduration=1.710397562 podStartE2EDuration="1.710397562s" podCreationTimestamp="2025-03-18 08:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:04.709856287 +0000 UTC m=+1.193327851" watchObservedRunningTime="2025-03-18 08:55:04.710397562 +0000 UTC m=+1.193869076" Mar 18 08:55:04.721745 kubelet[1897]: I0318 08:55:04.721673 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-7-00419dcf52.novalocal" podStartSLOduration=1.721653942 podStartE2EDuration="1.721653942s" podCreationTimestamp="2025-03-18 08:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:04.721643994 +0000 UTC m=+1.205115458" watchObservedRunningTime="2025-03-18 08:55:04.721653942 +0000 UTC m=+1.205125406" Mar 18 08:55:04.734207 kubelet[1897]: I0318 08:55:04.734161 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510-3-7-7-00419dcf52.novalocal" podStartSLOduration=1.734146129 podStartE2EDuration="1.734146129s" podCreationTimestamp="2025-03-18 08:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:04.733974718 +0000 UTC m=+1.217446182" watchObservedRunningTime="2025-03-18 08:55:04.734146129 +0000 UTC m=+1.217617593" Mar 18 08:55:06.977703 sudo[1298]: pam_unix(sudo:session): session closed for user root Mar 18 08:55:07.169480 sshd[1295]: pam_unix(sshd:session): session closed for user core Mar 18 08:55:07.175970 systemd[1]: sshd@6-172.24.4.149:22-172.24.4.1:38382.service: Deactivated successfully. Mar 18 08:55:07.178265 systemd[1]: session-7.scope: Deactivated successfully. Mar 18 08:55:07.178588 systemd[1]: session-7.scope: Consumed 6.892s CPU time. Mar 18 08:55:07.182222 systemd-logind[1148]: Session 7 logged out. Waiting for processes to exit. Mar 18 08:55:07.184538 systemd-logind[1148]: Removed session 7. Mar 18 08:55:07.815530 kubelet[1897]: I0318 08:55:07.815484 1897 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 18 08:55:07.816195 env[1162]: time="2025-03-18T08:55:07.816139280Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 18 08:55:07.816636 kubelet[1897]: I0318 08:55:07.816613 1897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 18 08:55:08.764280 systemd[1]: Created slice kubepods-besteffort-podcb38dc05_758b_4cf6_ad54_2017593030e8.slice. 
Mar 18 08:55:08.796647 kubelet[1897]: W0318 08:55:08.796599 1897 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-7-7-00419dcf52.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object Mar 18 08:55:08.796800 kubelet[1897]: E0318 08:55:08.796651 1897 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510-3-7-7-00419dcf52.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object" logger="UnhandledError" Mar 18 08:55:08.796800 kubelet[1897]: W0318 08:55:08.796695 1897 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-7-7-00419dcf52.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object Mar 18 08:55:08.796800 kubelet[1897]: E0318 08:55:08.796708 1897 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510-3-7-7-00419dcf52.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object" logger="UnhandledError" Mar 18 08:55:08.796800 kubelet[1897]: W0318 08:55:08.796750 1897 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list 
*v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-7-7-00419dcf52.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object Mar 18 08:55:08.796934 kubelet[1897]: E0318 08:55:08.796763 1897 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510-3-7-7-00419dcf52.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-7-7-00419dcf52.novalocal' and this object" logger="UnhandledError" Mar 18 08:55:08.800701 systemd[1]: Created slice kubepods-burstable-pod9fb2da52_b394_4eee_9638_3d8e36278947.slice. Mar 18 08:55:08.941279 systemd[1]: Created slice kubepods-besteffort-pod190f0beb_6039_4ec5_ba7d_f4198e5e0865.slice. 
Mar 18 08:55:08.944138 kubelet[1897]: I0318 08:55:08.944046 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb38dc05-758b-4cf6-ad54-2017593030e8-xtables-lock\") pod \"kube-proxy-85w8g\" (UID: \"cb38dc05-758b-4cf6-ad54-2017593030e8\") " pod="kube-system/kube-proxy-85w8g" Mar 18 08:55:08.944138 kubelet[1897]: I0318 08:55:08.944180 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-cgroup\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.944138 kubelet[1897]: I0318 08:55:08.944228 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-xtables-lock\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.944138 kubelet[1897]: I0318 08:55:08.944288 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-lib-modules\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.944138 kubelet[1897]: I0318 08:55:08.944334 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-kernel\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946508 kubelet[1897]: I0318 08:55:08.944390 1897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsx5p\" (UniqueName: \"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-kube-api-access-lsx5p\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946508 kubelet[1897]: I0318 08:55:08.944435 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb38dc05-758b-4cf6-ad54-2017593030e8-lib-modules\") pod \"kube-proxy-85w8g\" (UID: \"cb38dc05-758b-4cf6-ad54-2017593030e8\") " pod="kube-system/kube-proxy-85w8g" Mar 18 08:55:08.946508 kubelet[1897]: I0318 08:55:08.944477 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-etc-cni-netd\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946508 kubelet[1897]: I0318 08:55:08.944518 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb38dc05-758b-4cf6-ad54-2017593030e8-kube-proxy\") pod \"kube-proxy-85w8g\" (UID: \"cb38dc05-758b-4cf6-ad54-2017593030e8\") " pod="kube-system/kube-proxy-85w8g" Mar 18 08:55:08.946508 kubelet[1897]: I0318 08:55:08.944561 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944663 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzbz7\" 
(UniqueName: \"kubernetes.io/projected/cb38dc05-758b-4cf6-ad54-2017593030e8-kube-api-access-vzbz7\") pod \"kube-proxy-85w8g\" (UID: \"cb38dc05-758b-4cf6-ad54-2017593030e8\") " pod="kube-system/kube-proxy-85w8g" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944711 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-run\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944751 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-hubble-tls\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944801 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cni-path\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944843 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-net\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.946871 kubelet[1897]: I0318 08:55:08.944883 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-bpf-maps\") pod \"cilium-cq6mt\" (UID: 
\"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.947333 kubelet[1897]: I0318 08:55:08.944922 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-hostproc\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:08.947333 kubelet[1897]: I0318 08:55:08.944969 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path\") pod \"cilium-cq6mt\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " pod="kube-system/cilium-cq6mt" Mar 18 08:55:09.045905 kubelet[1897]: I0318 08:55:09.045804 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path\") pod \"cilium-operator-5d85765b45-fn8qn\" (UID: \"190f0beb-6039-4ec5-ba7d-f4198e5e0865\") " pod="kube-system/cilium-operator-5d85765b45-fn8qn" Mar 18 08:55:09.046035 kubelet[1897]: I0318 08:55:09.045925 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwzbr\" (UniqueName: \"kubernetes.io/projected/190f0beb-6039-4ec5-ba7d-f4198e5e0865-kube-api-access-cwzbr\") pod \"cilium-operator-5d85765b45-fn8qn\" (UID: \"190f0beb-6039-4ec5-ba7d-f4198e5e0865\") " pod="kube-system/cilium-operator-5d85765b45-fn8qn" Mar 18 08:55:09.055979 kubelet[1897]: I0318 08:55:09.055932 1897 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 08:55:09.080506 env[1162]: time="2025-03-18T08:55:09.079249154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85w8g,Uid:cb38dc05-758b-4cf6-ad54-2017593030e8,Namespace:kube-system,Attempt:0,}" Mar 18 08:55:09.114250 env[1162]: time="2025-03-18T08:55:09.114065829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:55:09.114250 env[1162]: time="2025-03-18T08:55:09.114138976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:55:09.114250 env[1162]: time="2025-03-18T08:55:09.114155517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:55:09.114661 env[1162]: time="2025-03-18T08:55:09.114382964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465 pid=1975 runtime=io.containerd.runc.v2 Mar 18 08:55:09.134130 systemd[1]: Started cri-containerd-8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465.scope. 
Mar 18 08:55:09.168023 env[1162]: time="2025-03-18T08:55:09.167973396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-85w8g,Uid:cb38dc05-758b-4cf6-ad54-2017593030e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465\"" Mar 18 08:55:09.173345 env[1162]: time="2025-03-18T08:55:09.173288134Z" level=info msg="CreateContainer within sandbox \"8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 18 08:55:09.195611 env[1162]: time="2025-03-18T08:55:09.195561765Z" level=info msg="CreateContainer within sandbox \"8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"618ff3e026bfce95bab878f4dc92fa01c24a9012b7cb27f7981abc5ba4e69eba\"" Mar 18 08:55:09.196528 env[1162]: time="2025-03-18T08:55:09.196502810Z" level=info msg="StartContainer for \"618ff3e026bfce95bab878f4dc92fa01c24a9012b7cb27f7981abc5ba4e69eba\"" Mar 18 08:55:09.215785 systemd[1]: Started cri-containerd-618ff3e026bfce95bab878f4dc92fa01c24a9012b7cb27f7981abc5ba4e69eba.scope. 
Mar 18 08:55:09.254861 env[1162]: time="2025-03-18T08:55:09.254294523Z" level=info msg="StartContainer for \"618ff3e026bfce95bab878f4dc92fa01c24a9012b7cb27f7981abc5ba4e69eba\" returns successfully" Mar 18 08:55:09.698786 kubelet[1897]: I0318 08:55:09.698727 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-85w8g" podStartSLOduration=1.6987052459999998 podStartE2EDuration="1.698705246s" podCreationTimestamp="2025-03-18 08:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:09.698074713 +0000 UTC m=+6.181546187" watchObservedRunningTime="2025-03-18 08:55:09.698705246 +0000 UTC m=+6.182176710" Mar 18 08:55:10.047533 kubelet[1897]: E0318 08:55:10.047463 1897 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 18 08:55:10.048218 kubelet[1897]: E0318 08:55:10.047666 1897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path podName:9fb2da52-b394-4eee-9638-3d8e36278947 nodeName:}" failed. No retries permitted until 2025-03-18 08:55:10.547564814 +0000 UTC m=+7.031036328 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path") pod "cilium-cq6mt" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947") : failed to sync configmap cache: timed out waiting for the condition Mar 18 08:55:10.048905 kubelet[1897]: E0318 08:55:10.048840 1897 secret.go:188] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Mar 18 08:55:10.049314 kubelet[1897]: E0318 08:55:10.049289 1897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets podName:9fb2da52-b394-4eee-9638-3d8e36278947 nodeName:}" failed. No retries permitted until 2025-03-18 08:55:10.549208226 +0000 UTC m=+7.032679730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets") pod "cilium-cq6mt" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947") : failed to sync secret cache: timed out waiting for the condition Mar 18 08:55:10.071305 systemd[1]: run-containerd-runc-k8s.io-8cc79df280ecdab6b1781ecfee10937ef125d059ff6fa7ecb00342af853c3465-runc.knocXx.mount: Deactivated successfully. Mar 18 08:55:10.147638 kubelet[1897]: E0318 08:55:10.147550 1897 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Mar 18 08:55:10.147858 kubelet[1897]: E0318 08:55:10.147678 1897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path podName:190f0beb-6039-4ec5-ba7d-f4198e5e0865 nodeName:}" failed. No retries permitted until 2025-03-18 08:55:10.647643131 +0000 UTC m=+7.131114645 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path") pod "cilium-operator-5d85765b45-fn8qn" (UID: "190f0beb-6039-4ec5-ba7d-f4198e5e0865") : failed to sync configmap cache: timed out waiting for the condition Mar 18 08:55:10.605209 env[1162]: time="2025-03-18T08:55:10.604754726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cq6mt,Uid:9fb2da52-b394-4eee-9638-3d8e36278947,Namespace:kube-system,Attempt:0,}" Mar 18 08:55:10.655373 env[1162]: time="2025-03-18T08:55:10.655253261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:55:10.656689 env[1162]: time="2025-03-18T08:55:10.656611097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:55:10.657052 env[1162]: time="2025-03-18T08:55:10.656992753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:55:10.664966 env[1162]: time="2025-03-18T08:55:10.664889302Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1 pid=2184 runtime=io.containerd.runc.v2 Mar 18 08:55:10.704989 systemd[1]: Started cri-containerd-31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1.scope. 
Mar 18 08:55:10.729542 env[1162]: time="2025-03-18T08:55:10.729505614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cq6mt,Uid:9fb2da52-b394-4eee-9638-3d8e36278947,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\"" Mar 18 08:55:10.731893 env[1162]: time="2025-03-18T08:55:10.731869997Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 18 08:55:10.753418 env[1162]: time="2025-03-18T08:55:10.753368376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fn8qn,Uid:190f0beb-6039-4ec5-ba7d-f4198e5e0865,Namespace:kube-system,Attempt:0,}" Mar 18 08:55:10.774075 env[1162]: time="2025-03-18T08:55:10.774011610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:55:10.774259 env[1162]: time="2025-03-18T08:55:10.774059570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:55:10.774259 env[1162]: time="2025-03-18T08:55:10.774073757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:55:10.774339 env[1162]: time="2025-03-18T08:55:10.774295953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c pid=2225 runtime=io.containerd.runc.v2 Mar 18 08:55:10.786341 systemd[1]: Started cri-containerd-34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c.scope. 
Mar 18 08:55:10.826483 env[1162]: time="2025-03-18T08:55:10.826431358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-fn8qn,Uid:190f0beb-6039-4ec5-ba7d-f4198e5e0865,Namespace:kube-system,Attempt:0,} returns sandbox id \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\"" Mar 18 08:55:17.601743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519472527.mount: Deactivated successfully. Mar 18 08:55:22.221322 env[1162]: time="2025-03-18T08:55:22.221244796Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:22.228699 env[1162]: time="2025-03-18T08:55:22.228592537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:22.235309 env[1162]: time="2025-03-18T08:55:22.235239013Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:22.237274 env[1162]: time="2025-03-18T08:55:22.237162580Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 18 08:55:22.244323 env[1162]: time="2025-03-18T08:55:22.244221548Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 18 08:55:22.245251 env[1162]: time="2025-03-18T08:55:22.245181008Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 18 08:55:22.278547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442499159.mount: Deactivated successfully. Mar 18 08:55:22.305497 env[1162]: time="2025-03-18T08:55:22.305410054Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\"" Mar 18 08:55:22.307767 env[1162]: time="2025-03-18T08:55:22.307674101Z" level=info msg="StartContainer for \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\"" Mar 18 08:55:22.349113 systemd[1]: Started cri-containerd-3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df.scope. Mar 18 08:55:22.385326 env[1162]: time="2025-03-18T08:55:22.385278815Z" level=info msg="StartContainer for \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\" returns successfully" Mar 18 08:55:22.389731 systemd[1]: cri-containerd-3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df.scope: Deactivated successfully. 
Mar 18 08:55:22.871292 env[1162]: time="2025-03-18T08:55:22.871207962Z" level=info msg="shim disconnected" id=3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df Mar 18 08:55:22.871774 env[1162]: time="2025-03-18T08:55:22.871690808Z" level=warning msg="cleaning up after shim disconnected" id=3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df namespace=k8s.io Mar 18 08:55:22.871942 env[1162]: time="2025-03-18T08:55:22.871907594Z" level=info msg="cleaning up dead shim" Mar 18 08:55:22.890570 env[1162]: time="2025-03-18T08:55:22.890494603Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:55:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2310 runtime=io.containerd.runc.v2\n" Mar 18 08:55:23.267358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df-rootfs.mount: Deactivated successfully. Mar 18 08:55:23.748465 env[1162]: time="2025-03-18T08:55:23.748170135Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 18 08:55:23.805545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008831845.mount: Deactivated successfully. Mar 18 08:55:23.816025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558844938.mount: Deactivated successfully. 
Mar 18 08:55:23.825363 env[1162]: time="2025-03-18T08:55:23.825312512Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\"" Mar 18 08:55:23.826148 env[1162]: time="2025-03-18T08:55:23.826106121Z" level=info msg="StartContainer for \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\"" Mar 18 08:55:23.843843 systemd[1]: Started cri-containerd-2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c.scope. Mar 18 08:55:23.884553 env[1162]: time="2025-03-18T08:55:23.884425766Z" level=info msg="StartContainer for \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\" returns successfully" Mar 18 08:55:23.890364 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 18 08:55:23.890590 systemd[1]: Stopped systemd-sysctl.service. Mar 18 08:55:23.891320 systemd[1]: Stopping systemd-sysctl.service... Mar 18 08:55:23.893038 systemd[1]: Starting systemd-sysctl.service... Mar 18 08:55:23.902585 systemd[1]: Finished systemd-sysctl.service. Mar 18 08:55:23.903819 systemd[1]: cri-containerd-2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c.scope: Deactivated successfully. 
Mar 18 08:55:23.929951 env[1162]: time="2025-03-18T08:55:23.929895723Z" level=info msg="shim disconnected" id=2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c Mar 18 08:55:23.930176 env[1162]: time="2025-03-18T08:55:23.930157283Z" level=warning msg="cleaning up after shim disconnected" id=2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c namespace=k8s.io Mar 18 08:55:23.930270 env[1162]: time="2025-03-18T08:55:23.930255107Z" level=info msg="cleaning up dead shim" Mar 18 08:55:23.938496 env[1162]: time="2025-03-18T08:55:23.938470615Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:55:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2378 runtime=io.containerd.runc.v2\n" Mar 18 08:55:24.634937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616694054.mount: Deactivated successfully. Mar 18 08:55:24.754413 env[1162]: time="2025-03-18T08:55:24.754302504Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 18 08:55:24.825928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140176199.mount: Deactivated successfully. Mar 18 08:55:24.839459 env[1162]: time="2025-03-18T08:55:24.839388779Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\"" Mar 18 08:55:24.844811 env[1162]: time="2025-03-18T08:55:24.843820020Z" level=info msg="StartContainer for \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\"" Mar 18 08:55:24.875696 systemd[1]: Started cri-containerd-54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170.scope. 
Mar 18 08:55:24.917525 systemd[1]: cri-containerd-54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170.scope: Deactivated successfully. Mar 18 08:55:24.923627 env[1162]: time="2025-03-18T08:55:24.923579646Z" level=info msg="StartContainer for \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\" returns successfully" Mar 18 08:55:24.925927 env[1162]: time="2025-03-18T08:55:24.920258988Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb2da52_b394_4eee_9638_3d8e36278947.slice/cri-containerd-54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170.scope/memory.events\": no such file or directory" Mar 18 08:55:25.008409 env[1162]: time="2025-03-18T08:55:25.008307569Z" level=info msg="shim disconnected" id=54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170 Mar 18 08:55:25.008409 env[1162]: time="2025-03-18T08:55:25.008395404Z" level=warning msg="cleaning up after shim disconnected" id=54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170 namespace=k8s.io Mar 18 08:55:25.008409 env[1162]: time="2025-03-18T08:55:25.008419659Z" level=info msg="cleaning up dead shim" Mar 18 08:55:25.033563 env[1162]: time="2025-03-18T08:55:25.033478316Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n" Mar 18 08:55:25.780246 env[1162]: time="2025-03-18T08:55:25.780032008Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 18 08:55:25.827315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454691533.mount: Deactivated successfully. 
Mar 18 08:55:25.837082 env[1162]: time="2025-03-18T08:55:25.836977007Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\"" Mar 18 08:55:25.842983 env[1162]: time="2025-03-18T08:55:25.842931274Z" level=info msg="StartContainer for \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\"" Mar 18 08:55:25.870214 systemd[1]: Started cri-containerd-7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955.scope. Mar 18 08:55:25.912731 systemd[1]: cri-containerd-7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955.scope: Deactivated successfully. Mar 18 08:55:25.914713 env[1162]: time="2025-03-18T08:55:25.914616587Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb2da52_b394_4eee_9638_3d8e36278947.slice/cri-containerd-7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955.scope/memory.events\": no such file or directory" Mar 18 08:55:25.920806 env[1162]: time="2025-03-18T08:55:25.920775948Z" level=info msg="StartContainer for \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\" returns successfully" Mar 18 08:55:25.963728 env[1162]: time="2025-03-18T08:55:25.963687467Z" level=info msg="shim disconnected" id=7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955 Mar 18 08:55:25.963923 env[1162]: time="2025-03-18T08:55:25.963903582Z" level=warning msg="cleaning up after shim disconnected" id=7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955 namespace=k8s.io Mar 18 08:55:25.963990 env[1162]: time="2025-03-18T08:55:25.963976479Z" level=info msg="cleaning up dead shim" Mar 18 08:55:25.977019 env[1162]: time="2025-03-18T08:55:25.976964667Z" level=warning 
msg="cleanup warnings time=\"2025-03-18T08:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2496 runtime=io.containerd.runc.v2\n" Mar 18 08:55:26.265056 systemd[1]: run-containerd-runc-k8s.io-7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955-runc.338f3d.mount: Deactivated successfully. Mar 18 08:55:26.265189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955-rootfs.mount: Deactivated successfully. Mar 18 08:55:26.805927 env[1162]: time="2025-03-18T08:55:26.805888163Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 18 08:55:26.828826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228897938.mount: Deactivated successfully. Mar 18 08:55:26.835643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470866808.mount: Deactivated successfully. Mar 18 08:55:26.848137 env[1162]: time="2025-03-18T08:55:26.848067418Z" level=info msg="CreateContainer within sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\"" Mar 18 08:55:26.852147 env[1162]: time="2025-03-18T08:55:26.851225000Z" level=info msg="StartContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\"" Mar 18 08:55:26.872778 systemd[1]: Started cri-containerd-8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58.scope. 
Mar 18 08:55:26.922722 env[1162]: time="2025-03-18T08:55:26.922671216Z" level=info msg="StartContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" returns successfully" Mar 18 08:55:26.996666 env[1162]: time="2025-03-18T08:55:26.996592612Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:26.997866 env[1162]: time="2025-03-18T08:55:26.997836435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:26.999969 env[1162]: time="2025-03-18T08:55:26.999944018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 18 08:55:27.000764 env[1162]: time="2025-03-18T08:55:27.000739430Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 18 08:55:27.004269 env[1162]: time="2025-03-18T08:55:27.004242860Z" level=info msg="CreateContainer within sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 18 08:55:27.034487 env[1162]: time="2025-03-18T08:55:27.034425557Z" level=info msg="CreateContainer within sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\"" Mar 18 08:55:27.035498 env[1162]: time="2025-03-18T08:55:27.035462151Z" level=info msg="StartContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\"" Mar 18 08:55:27.055099 kubelet[1897]: I0318 08:55:27.053878 1897 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 18 08:55:27.064933 systemd[1]: Started cri-containerd-b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74.scope. Mar 18 08:55:27.114269 systemd[1]: Created slice kubepods-burstable-pod912e2d98_c5f4_4a1c_aade_8ac6494f89ff.slice. Mar 18 08:55:27.128901 systemd[1]: Created slice kubepods-burstable-pod390bcc9b_0e76_4767_ad55_840150fedaee.slice. Mar 18 08:55:27.198725 env[1162]: time="2025-03-18T08:55:27.198687484Z" level=info msg="StartContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" returns successfully" Mar 18 08:55:27.293472 kubelet[1897]: I0318 08:55:27.293404 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsbcj\" (UniqueName: \"kubernetes.io/projected/390bcc9b-0e76-4767-ad55-840150fedaee-kube-api-access-zsbcj\") pod \"coredns-6f6b679f8f-dj4wp\" (UID: \"390bcc9b-0e76-4767-ad55-840150fedaee\") " pod="kube-system/coredns-6f6b679f8f-dj4wp" Mar 18 08:55:27.293616 kubelet[1897]: I0318 08:55:27.293535 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/912e2d98-c5f4-4a1c-aade-8ac6494f89ff-config-volume\") pod \"coredns-6f6b679f8f-dwctk\" (UID: \"912e2d98-c5f4-4a1c-aade-8ac6494f89ff\") " pod="kube-system/coredns-6f6b679f8f-dwctk" Mar 18 08:55:27.293616 kubelet[1897]: I0318 08:55:27.293601 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws6vp\" (UniqueName: 
\"kubernetes.io/projected/912e2d98-c5f4-4a1c-aade-8ac6494f89ff-kube-api-access-ws6vp\") pod \"coredns-6f6b679f8f-dwctk\" (UID: \"912e2d98-c5f4-4a1c-aade-8ac6494f89ff\") " pod="kube-system/coredns-6f6b679f8f-dwctk" Mar 18 08:55:27.293683 kubelet[1897]: I0318 08:55:27.293622 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/390bcc9b-0e76-4767-ad55-840150fedaee-config-volume\") pod \"coredns-6f6b679f8f-dj4wp\" (UID: \"390bcc9b-0e76-4767-ad55-840150fedaee\") " pod="kube-system/coredns-6f6b679f8f-dj4wp" Mar 18 08:55:27.718379 env[1162]: time="2025-03-18T08:55:27.718329765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dwctk,Uid:912e2d98-c5f4-4a1c-aade-8ac6494f89ff,Namespace:kube-system,Attempt:0,}" Mar 18 08:55:27.732850 env[1162]: time="2025-03-18T08:55:27.732809091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dj4wp,Uid:390bcc9b-0e76-4767-ad55-840150fedaee,Namespace:kube-system,Attempt:0,}" Mar 18 08:55:27.936246 kubelet[1897]: I0318 08:55:27.936182 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-fn8qn" podStartSLOduration=3.761740888 podStartE2EDuration="19.936159077s" podCreationTimestamp="2025-03-18 08:55:08 +0000 UTC" firstStartedPulling="2025-03-18 08:55:10.827981815 +0000 UTC m=+7.311453299" lastFinishedPulling="2025-03-18 08:55:27.002400024 +0000 UTC m=+23.485871488" observedRunningTime="2025-03-18 08:55:27.851001056 +0000 UTC m=+24.334472530" watchObservedRunningTime="2025-03-18 08:55:27.936159077 +0000 UTC m=+24.419630561" Mar 18 08:55:29.532086 systemd-networkd[982]: cilium_host: Link UP Mar 18 08:55:29.539950 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 18 08:55:29.540022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 18 08:55:29.539590 systemd-networkd[982]: 
cilium_net: Link UP Mar 18 08:55:29.539952 systemd-networkd[982]: cilium_net: Gained carrier Mar 18 08:55:29.541426 systemd-networkd[982]: cilium_host: Gained carrier Mar 18 08:55:29.665552 systemd-networkd[982]: cilium_vxlan: Link UP Mar 18 08:55:29.665562 systemd-networkd[982]: cilium_vxlan: Gained carrier Mar 18 08:55:29.844388 systemd-networkd[982]: cilium_net: Gained IPv6LL Mar 18 08:55:29.973144 kernel: NET: Registered PF_ALG protocol family Mar 18 08:55:30.420304 systemd-networkd[982]: cilium_host: Gained IPv6LL Mar 18 08:55:30.707565 systemd-networkd[982]: lxc_health: Link UP Mar 18 08:55:30.716149 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 18 08:55:30.716300 systemd-networkd[982]: lxc_health: Gained carrier Mar 18 08:55:30.932415 systemd-networkd[982]: cilium_vxlan: Gained IPv6LL Mar 18 08:55:31.274587 systemd-networkd[982]: lxcb3aa88d345aa: Link UP Mar 18 08:55:31.297687 kernel: eth0: renamed from tmp56766 Mar 18 08:55:31.307160 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb3aa88d345aa: link becomes ready Mar 18 08:55:31.307348 systemd-networkd[982]: lxcb3aa88d345aa: Gained carrier Mar 18 08:55:31.328500 systemd-networkd[982]: lxc757c711219ab: Link UP Mar 18 08:55:31.334226 kernel: eth0: renamed from tmpd38bf Mar 18 08:55:31.347157 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc757c711219ab: link becomes ready Mar 18 08:55:31.347394 systemd-networkd[982]: lxc757c711219ab: Gained carrier Mar 18 08:55:31.764308 systemd-networkd[982]: lxc_health: Gained IPv6LL Mar 18 08:55:32.644389 kubelet[1897]: I0318 08:55:32.644315 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cq6mt" podStartSLOduration=13.135163498 podStartE2EDuration="24.64430002s" podCreationTimestamp="2025-03-18 08:55:08 +0000 UTC" firstStartedPulling="2025-03-18 08:55:10.730912101 +0000 UTC m=+7.214383565" lastFinishedPulling="2025-03-18 08:55:22.240048573 +0000 UTC m=+18.723520087" observedRunningTime="2025-03-18 
08:55:27.93738151 +0000 UTC m=+24.420852974" watchObservedRunningTime="2025-03-18 08:55:32.64430002 +0000 UTC m=+29.127771484" Mar 18 08:55:33.108336 systemd-networkd[982]: lxcb3aa88d345aa: Gained IPv6LL Mar 18 08:55:33.364320 systemd-networkd[982]: lxc757c711219ab: Gained IPv6LL Mar 18 08:55:35.699013 env[1162]: time="2025-03-18T08:55:35.698918056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:55:35.699013 env[1162]: time="2025-03-18T08:55:35.698974372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:55:35.699490 env[1162]: time="2025-03-18T08:55:35.698989881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:55:35.699829 env[1162]: time="2025-03-18T08:55:35.699775674Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b pid=3067 runtime=io.containerd.runc.v2 Mar 18 08:55:35.720539 systemd[1]: Started cri-containerd-5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b.scope. Mar 18 08:55:35.728858 systemd[1]: run-containerd-runc-k8s.io-5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b-runc.7paItT.mount: Deactivated successfully. Mar 18 08:55:35.800381 env[1162]: time="2025-03-18T08:55:35.800305306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:55:35.800512 env[1162]: time="2025-03-18T08:55:35.800398460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:55:35.800512 env[1162]: time="2025-03-18T08:55:35.800429258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:55:35.800630 env[1162]: time="2025-03-18T08:55:35.800596702Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d38bf3b4af0fc2135b439b1a5d3c83fd66d45db20cbb8685fba97fec7938cf23 pid=3100 runtime=io.containerd.runc.v2 Mar 18 08:55:35.808136 env[1162]: time="2025-03-18T08:55:35.808070419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dwctk,Uid:912e2d98-c5f4-4a1c-aade-8ac6494f89ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b\"" Mar 18 08:55:35.812604 env[1162]: time="2025-03-18T08:55:35.812563626Z" level=info msg="CreateContainer within sandbox \"5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 18 08:55:35.833795 systemd[1]: Started cri-containerd-d38bf3b4af0fc2135b439b1a5d3c83fd66d45db20cbb8685fba97fec7938cf23.scope. Mar 18 08:55:35.836503 env[1162]: time="2025-03-18T08:55:35.836247135Z" level=info msg="CreateContainer within sandbox \"5676659e10add5cbb323b6fdb6d35b0066342e9588dff91cb7443777fceeb67b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9051c104ee88740f99d1d63a8070cae9e80f494c29255127ce24e8242b802642\"" Mar 18 08:55:35.836784 env[1162]: time="2025-03-18T08:55:35.836763703Z" level=info msg="StartContainer for \"9051c104ee88740f99d1d63a8070cae9e80f494c29255127ce24e8242b802642\"" Mar 18 08:55:35.857746 systemd[1]: Started cri-containerd-9051c104ee88740f99d1d63a8070cae9e80f494c29255127ce24e8242b802642.scope. 
Mar 18 08:55:35.906403 env[1162]: time="2025-03-18T08:55:35.906361592Z" level=info msg="StartContainer for \"9051c104ee88740f99d1d63a8070cae9e80f494c29255127ce24e8242b802642\" returns successfully" Mar 18 08:55:35.926435 env[1162]: time="2025-03-18T08:55:35.926395917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-dj4wp,Uid:390bcc9b-0e76-4767-ad55-840150fedaee,Namespace:kube-system,Attempt:0,} returns sandbox id \"d38bf3b4af0fc2135b439b1a5d3c83fd66d45db20cbb8685fba97fec7938cf23\"" Mar 18 08:55:35.928862 env[1162]: time="2025-03-18T08:55:35.928826886Z" level=info msg="CreateContainer within sandbox \"d38bf3b4af0fc2135b439b1a5d3c83fd66d45db20cbb8685fba97fec7938cf23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 18 08:55:35.948147 env[1162]: time="2025-03-18T08:55:35.948084673Z" level=info msg="CreateContainer within sandbox \"d38bf3b4af0fc2135b439b1a5d3c83fd66d45db20cbb8685fba97fec7938cf23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6be39fa537bc5207001e49717436303d4ae4b6cd8ddbf80e6ddf11060c1a34f9\"" Mar 18 08:55:35.948915 env[1162]: time="2025-03-18T08:55:35.948873583Z" level=info msg="StartContainer for \"6be39fa537bc5207001e49717436303d4ae4b6cd8ddbf80e6ddf11060c1a34f9\"" Mar 18 08:55:35.989761 systemd[1]: Started cri-containerd-6be39fa537bc5207001e49717436303d4ae4b6cd8ddbf80e6ddf11060c1a34f9.scope. Mar 18 08:55:36.049257 env[1162]: time="2025-03-18T08:55:36.049205623Z" level=info msg="StartContainer for \"6be39fa537bc5207001e49717436303d4ae4b6cd8ddbf80e6ddf11060c1a34f9\" returns successfully" Mar 18 08:55:36.710368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039408830.mount: Deactivated successfully. 
Mar 18 08:55:36.853706 kubelet[1897]: I0318 08:55:36.853531 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dwctk" podStartSLOduration=28.853468643 podStartE2EDuration="28.853468643s" podCreationTimestamp="2025-03-18 08:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:36.849881135 +0000 UTC m=+33.333352649" watchObservedRunningTime="2025-03-18 08:55:36.853468643 +0000 UTC m=+33.336940147" Mar 18 08:57:18.604926 systemd[1]: Started sshd@7-172.24.4.149:22-172.24.4.1:57834.service. Mar 18 08:57:20.013030 sshd[3247]: Accepted publickey for core from 172.24.4.1 port 57834 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:20.016467 sshd[3247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:20.034242 systemd[1]: Started session-8.scope. Mar 18 08:57:20.035609 systemd-logind[1148]: New session 8 of user core. Mar 18 08:57:20.704894 sshd[3247]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:20.709805 systemd[1]: sshd@7-172.24.4.149:22-172.24.4.1:57834.service: Deactivated successfully. Mar 18 08:57:20.711444 systemd[1]: session-8.scope: Deactivated successfully. Mar 18 08:57:20.712802 systemd-logind[1148]: Session 8 logged out. Waiting for processes to exit. Mar 18 08:57:20.714771 systemd-logind[1148]: Removed session 8. Mar 18 08:57:25.714309 systemd[1]: Started sshd@8-172.24.4.149:22-172.24.4.1:38304.service. Mar 18 08:57:26.845745 sshd[3260]: Accepted publickey for core from 172.24.4.1 port 38304 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:26.848431 sshd[3260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:26.858700 systemd-logind[1148]: New session 9 of user core. Mar 18 08:57:26.859483 systemd[1]: Started session-9.scope. 
Mar 18 08:57:27.620608 sshd[3260]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:27.627584 systemd[1]: sshd@8-172.24.4.149:22-172.24.4.1:38304.service: Deactivated successfully. Mar 18 08:57:27.629602 systemd[1]: session-9.scope: Deactivated successfully. Mar 18 08:57:27.631996 systemd-logind[1148]: Session 9 logged out. Waiting for processes to exit. Mar 18 08:57:27.635584 systemd-logind[1148]: Removed session 9. Mar 18 08:57:32.631275 systemd[1]: Started sshd@9-172.24.4.149:22-172.24.4.1:38308.service. Mar 18 08:57:33.991944 sshd[3273]: Accepted publickey for core from 172.24.4.1 port 38308 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:33.994682 sshd[3273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:34.005102 systemd-logind[1148]: New session 10 of user core. Mar 18 08:57:34.005947 systemd[1]: Started session-10.scope. Mar 18 08:57:34.810881 sshd[3273]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:34.816175 systemd-logind[1148]: Session 10 logged out. Waiting for processes to exit. Mar 18 08:57:34.816740 systemd[1]: sshd@9-172.24.4.149:22-172.24.4.1:38308.service: Deactivated successfully. Mar 18 08:57:34.818466 systemd[1]: session-10.scope: Deactivated successfully. Mar 18 08:57:34.820911 systemd-logind[1148]: Removed session 10. Mar 18 08:57:39.820292 systemd[1]: Started sshd@10-172.24.4.149:22-172.24.4.1:41908.service. Mar 18 08:57:40.936844 sshd[3287]: Accepted publickey for core from 172.24.4.1 port 41908 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:40.939622 sshd[3287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:40.950750 systemd-logind[1148]: New session 11 of user core. Mar 18 08:57:40.951362 systemd[1]: Started session-11.scope. 
Mar 18 08:57:41.668786 sshd[3287]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:41.676885 systemd[1]: Started sshd@11-172.24.4.149:22-172.24.4.1:41914.service. Mar 18 08:57:41.680305 systemd[1]: sshd@10-172.24.4.149:22-172.24.4.1:41908.service: Deactivated successfully. Mar 18 08:57:41.681901 systemd[1]: session-11.scope: Deactivated successfully. Mar 18 08:57:41.684561 systemd-logind[1148]: Session 11 logged out. Waiting for processes to exit. Mar 18 08:57:41.687194 systemd-logind[1148]: Removed session 11. Mar 18 08:57:43.183105 sshd[3299]: Accepted publickey for core from 172.24.4.1 port 41914 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:43.186499 sshd[3299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:43.197155 systemd-logind[1148]: New session 12 of user core. Mar 18 08:57:43.197953 systemd[1]: Started session-12.scope. Mar 18 08:57:44.052193 sshd[3299]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:44.060624 systemd[1]: sshd@11-172.24.4.149:22-172.24.4.1:41914.service: Deactivated successfully. Mar 18 08:57:44.062628 systemd[1]: session-12.scope: Deactivated successfully. Mar 18 08:57:44.064392 systemd-logind[1148]: Session 12 logged out. Waiting for processes to exit. Mar 18 08:57:44.068047 systemd[1]: Started sshd@12-172.24.4.149:22-172.24.4.1:57686.service. Mar 18 08:57:44.072435 systemd-logind[1148]: Removed session 12. Mar 18 08:57:45.297061 sshd[3309]: Accepted publickey for core from 172.24.4.1 port 57686 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:45.300411 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:45.310260 systemd-logind[1148]: New session 13 of user core. Mar 18 08:57:45.310976 systemd[1]: Started session-13.scope. 
Mar 18 08:57:46.158159 sshd[3309]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:46.163257 systemd[1]: sshd@12-172.24.4.149:22-172.24.4.1:57686.service: Deactivated successfully. Mar 18 08:57:46.164845 systemd[1]: session-13.scope: Deactivated successfully. Mar 18 08:57:46.166362 systemd-logind[1148]: Session 13 logged out. Waiting for processes to exit. Mar 18 08:57:46.168349 systemd-logind[1148]: Removed session 13. Mar 18 08:57:51.167575 systemd[1]: Started sshd@13-172.24.4.149:22-172.24.4.1:57692.service. Mar 18 08:57:52.322972 sshd[3321]: Accepted publickey for core from 172.24.4.1 port 57692 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:52.326256 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:52.336329 systemd-logind[1148]: New session 14 of user core. Mar 18 08:57:52.336901 systemd[1]: Started session-14.scope. Mar 18 08:57:53.086508 sshd[3321]: pam_unix(sshd:session): session closed for user core Mar 18 08:57:53.093606 systemd[1]: sshd@13-172.24.4.149:22-172.24.4.1:57692.service: Deactivated successfully. Mar 18 08:57:53.093738 systemd-logind[1148]: Session 14 logged out. Waiting for processes to exit. Mar 18 08:57:53.095783 systemd[1]: session-14.scope: Deactivated successfully. Mar 18 08:57:53.097747 systemd-logind[1148]: Removed session 14. Mar 18 08:57:58.098635 systemd[1]: Started sshd@14-172.24.4.149:22-172.24.4.1:51494.service. Mar 18 08:57:59.310944 sshd[3333]: Accepted publickey for core from 172.24.4.1 port 51494 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:57:59.316774 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:57:59.329806 systemd-logind[1148]: New session 15 of user core. Mar 18 08:57:59.331328 systemd[1]: Started session-15.scope. 
Mar 18 08:58:00.038343 sshd[3333]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:00.049575 systemd[1]: Started sshd@15-172.24.4.149:22-172.24.4.1:51496.service. Mar 18 08:58:00.059339 systemd[1]: sshd@14-172.24.4.149:22-172.24.4.1:51494.service: Deactivated successfully. Mar 18 08:58:00.063698 systemd[1]: session-15.scope: Deactivated successfully. Mar 18 08:58:00.068718 systemd-logind[1148]: Session 15 logged out. Waiting for processes to exit. Mar 18 08:58:00.071466 systemd-logind[1148]: Removed session 15. Mar 18 08:58:01.288187 sshd[3344]: Accepted publickey for core from 172.24.4.1 port 51496 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:01.291879 sshd[3344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:01.302294 systemd-logind[1148]: New session 16 of user core. Mar 18 08:58:01.302974 systemd[1]: Started session-16.scope. Mar 18 08:58:02.054403 sshd[3344]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:02.064668 systemd[1]: Started sshd@16-172.24.4.149:22-172.24.4.1:51498.service. Mar 18 08:58:02.066011 systemd[1]: sshd@15-172.24.4.149:22-172.24.4.1:51496.service: Deactivated successfully. Mar 18 08:58:02.067881 systemd[1]: session-16.scope: Deactivated successfully. Mar 18 08:58:02.075879 systemd-logind[1148]: Session 16 logged out. Waiting for processes to exit. Mar 18 08:58:02.081872 systemd-logind[1148]: Removed session 16. Mar 18 08:58:03.299314 sshd[3354]: Accepted publickey for core from 172.24.4.1 port 51498 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:03.301870 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:03.312244 systemd-logind[1148]: New session 17 of user core. Mar 18 08:58:03.314558 systemd[1]: Started session-17.scope. 
Mar 18 08:58:06.304973 sshd[3354]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:06.312608 systemd[1]: sshd@16-172.24.4.149:22-172.24.4.1:51498.service: Deactivated successfully. Mar 18 08:58:06.314054 systemd[1]: session-17.scope: Deactivated successfully. Mar 18 08:58:06.316018 systemd-logind[1148]: Session 17 logged out. Waiting for processes to exit. Mar 18 08:58:06.319545 systemd[1]: Started sshd@17-172.24.4.149:22-172.24.4.1:58624.service. Mar 18 08:58:06.323440 systemd-logind[1148]: Removed session 17. Mar 18 08:58:07.730682 sshd[3374]: Accepted publickey for core from 172.24.4.1 port 58624 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:07.736359 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:07.750102 systemd[1]: Started session-18.scope. Mar 18 08:58:07.751981 systemd-logind[1148]: New session 18 of user core. Mar 18 08:58:08.822985 sshd[3374]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:08.833446 systemd[1]: Started sshd@18-172.24.4.149:22-172.24.4.1:58634.service. Mar 18 08:58:08.834243 systemd[1]: sshd@17-172.24.4.149:22-172.24.4.1:58624.service: Deactivated successfully. Mar 18 08:58:08.835209 systemd[1]: session-18.scope: Deactivated successfully. Mar 18 08:58:08.837459 systemd-logind[1148]: Session 18 logged out. Waiting for processes to exit. Mar 18 08:58:08.839437 systemd-logind[1148]: Removed session 18. Mar 18 08:58:10.029188 sshd[3383]: Accepted publickey for core from 172.24.4.1 port 58634 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:10.032268 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:10.042180 systemd-logind[1148]: New session 19 of user core. Mar 18 08:58:10.042944 systemd[1]: Started session-19.scope. 
Mar 18 08:58:10.641216 sshd[3383]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:10.646389 systemd[1]: sshd@18-172.24.4.149:22-172.24.4.1:58634.service: Deactivated successfully. Mar 18 08:58:10.647986 systemd[1]: session-19.scope: Deactivated successfully. Mar 18 08:58:10.649637 systemd-logind[1148]: Session 19 logged out. Waiting for processes to exit. Mar 18 08:58:10.651785 systemd-logind[1148]: Removed session 19. Mar 18 08:58:15.651269 systemd[1]: Started sshd@19-172.24.4.149:22-172.24.4.1:46380.service. Mar 18 08:58:16.846282 sshd[3401]: Accepted publickey for core from 172.24.4.1 port 46380 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:16.849258 sshd[3401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:16.860736 systemd[1]: Started session-20.scope. Mar 18 08:58:16.861624 systemd-logind[1148]: New session 20 of user core. Mar 18 08:58:17.576604 sshd[3401]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:17.582269 systemd-logind[1148]: Session 20 logged out. Waiting for processes to exit. Mar 18 08:58:17.583061 systemd[1]: sshd@19-172.24.4.149:22-172.24.4.1:46380.service: Deactivated successfully. Mar 18 08:58:17.584670 systemd[1]: session-20.scope: Deactivated successfully. Mar 18 08:58:17.587205 systemd-logind[1148]: Removed session 20. Mar 18 08:58:22.585704 systemd[1]: Started sshd@20-172.24.4.149:22-172.24.4.1:46394.service. Mar 18 08:58:23.919182 sshd[3413]: Accepted publickey for core from 172.24.4.1 port 46394 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:23.921777 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:23.932314 systemd-logind[1148]: New session 21 of user core. Mar 18 08:58:23.933077 systemd[1]: Started session-21.scope. 
Mar 18 08:58:24.632958 sshd[3413]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:24.638441 systemd[1]: sshd@20-172.24.4.149:22-172.24.4.1:46394.service: Deactivated successfully. Mar 18 08:58:24.640097 systemd[1]: session-21.scope: Deactivated successfully. Mar 18 08:58:24.641500 systemd-logind[1148]: Session 21 logged out. Waiting for processes to exit. Mar 18 08:58:24.643675 systemd-logind[1148]: Removed session 21. Mar 18 08:58:29.642527 systemd[1]: Started sshd@21-172.24.4.149:22-172.24.4.1:33654.service. Mar 18 08:58:30.855181 sshd[3425]: Accepted publickey for core from 172.24.4.1 port 33654 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:30.858711 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:30.870908 systemd[1]: Started session-22.scope. Mar 18 08:58:30.871824 systemd-logind[1148]: New session 22 of user core. Mar 18 08:58:31.727289 sshd[3425]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:31.732778 systemd[1]: Started sshd@22-172.24.4.149:22-172.24.4.1:33664.service. Mar 18 08:58:31.737646 systemd[1]: sshd@21-172.24.4.149:22-172.24.4.1:33654.service: Deactivated successfully. Mar 18 08:58:31.739685 systemd[1]: session-22.scope: Deactivated successfully. Mar 18 08:58:31.745504 systemd-logind[1148]: Session 22 logged out. Waiting for processes to exit. Mar 18 08:58:31.749319 systemd-logind[1148]: Removed session 22. Mar 18 08:58:32.963386 sshd[3436]: Accepted publickey for core from 172.24.4.1 port 33664 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:32.966325 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:32.976567 systemd-logind[1148]: New session 23 of user core. Mar 18 08:58:32.976931 systemd[1]: Started session-23.scope. 
Mar 18 08:58:35.293661 kubelet[1897]: I0318 08:58:35.293497 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-dj4wp" podStartSLOduration=207.293466111 podStartE2EDuration="3m27.293466111s" podCreationTimestamp="2025-03-18 08:55:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:55:36.923983642 +0000 UTC m=+33.407455106" watchObservedRunningTime="2025-03-18 08:58:35.293466111 +0000 UTC m=+211.776937625" Mar 18 08:58:35.329776 systemd[1]: run-containerd-runc-k8s.io-8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58-runc.G1uIBT.mount: Deactivated successfully. Mar 18 08:58:35.331865 env[1162]: time="2025-03-18T08:58:35.331810471Z" level=info msg="StopContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" with timeout 30 (s)" Mar 18 08:58:35.333833 env[1162]: time="2025-03-18T08:58:35.333362177Z" level=info msg="Stop container \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" with signal terminated" Mar 18 08:58:35.348619 systemd[1]: cri-containerd-b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74.scope: Deactivated successfully. 
Mar 18 08:58:35.358762 env[1162]: time="2025-03-18T08:58:35.358708332Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 18 08:58:35.365006 env[1162]: time="2025-03-18T08:58:35.364973565Z" level=info msg="StopContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" with timeout 2 (s)" Mar 18 08:58:35.365381 env[1162]: time="2025-03-18T08:58:35.365361434Z" level=info msg="Stop container \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" with signal terminated" Mar 18 08:58:35.372254 systemd-networkd[982]: lxc_health: Link DOWN Mar 18 08:58:35.372263 systemd-networkd[982]: lxc_health: Lost carrier Mar 18 08:58:35.383896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74-rootfs.mount: Deactivated successfully. Mar 18 08:58:35.415190 env[1162]: time="2025-03-18T08:58:35.415140059Z" level=info msg="shim disconnected" id=b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74 Mar 18 08:58:35.415190 env[1162]: time="2025-03-18T08:58:35.415183360Z" level=warning msg="cleaning up after shim disconnected" id=b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74 namespace=k8s.io Mar 18 08:58:35.415190 env[1162]: time="2025-03-18T08:58:35.415195122Z" level=info msg="cleaning up dead shim" Mar 18 08:58:35.418457 systemd[1]: cri-containerd-8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58.scope: Deactivated successfully. Mar 18 08:58:35.418710 systemd[1]: cri-containerd-8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58.scope: Consumed 8.562s CPU time. 
Mar 18 08:58:35.430548 env[1162]: time="2025-03-18T08:58:35.430487707Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3490 runtime=io.containerd.runc.v2\n" Mar 18 08:58:35.435590 env[1162]: time="2025-03-18T08:58:35.435556752Z" level=info msg="StopContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" returns successfully" Mar 18 08:58:35.436695 env[1162]: time="2025-03-18T08:58:35.436621603Z" level=info msg="StopPodSandbox for \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\"" Mar 18 08:58:35.436695 env[1162]: time="2025-03-18T08:58:35.436679482Z" level=info msg="Container to stop \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.438699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c-shm.mount: Deactivated successfully. Mar 18 08:58:35.444746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58-rootfs.mount: Deactivated successfully. Mar 18 08:58:35.449949 systemd[1]: cri-containerd-34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c.scope: Deactivated successfully. 
Mar 18 08:58:35.457557 env[1162]: time="2025-03-18T08:58:35.457510854Z" level=info msg="shim disconnected" id=8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58 Mar 18 08:58:35.457784 env[1162]: time="2025-03-18T08:58:35.457764881Z" level=warning msg="cleaning up after shim disconnected" id=8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58 namespace=k8s.io Mar 18 08:58:35.457875 env[1162]: time="2025-03-18T08:58:35.457860091Z" level=info msg="cleaning up dead shim" Mar 18 08:58:35.466918 env[1162]: time="2025-03-18T08:58:35.466877845Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3526 runtime=io.containerd.runc.v2\n" Mar 18 08:58:35.484017 env[1162]: time="2025-03-18T08:58:35.483974150Z" level=info msg="StopContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" returns successfully" Mar 18 08:58:35.484721 env[1162]: time="2025-03-18T08:58:35.484693070Z" level=info msg="StopPodSandbox for \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\"" Mar 18 08:58:35.484859 env[1162]: time="2025-03-18T08:58:35.484835348Z" level=info msg="Container to stop \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.484999 env[1162]: time="2025-03-18T08:58:35.484979207Z" level=info msg="Container to stop \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.485134 env[1162]: time="2025-03-18T08:58:35.485091269Z" level=info msg="Container to stop \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.485226 env[1162]: time="2025-03-18T08:58:35.485203870Z" level=info msg="Container to stop 
\"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.485310 env[1162]: time="2025-03-18T08:58:35.485291594Z" level=info msg="Container to stop \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:35.491630 systemd[1]: cri-containerd-31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1.scope: Deactivated successfully. Mar 18 08:58:35.494386 env[1162]: time="2025-03-18T08:58:35.494351939Z" level=info msg="shim disconnected" id=34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c Mar 18 08:58:35.494963 env[1162]: time="2025-03-18T08:58:35.494944392Z" level=warning msg="cleaning up after shim disconnected" id=34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c namespace=k8s.io Mar 18 08:58:35.495038 env[1162]: time="2025-03-18T08:58:35.495023902Z" level=info msg="cleaning up dead shim" Mar 18 08:58:35.505661 env[1162]: time="2025-03-18T08:58:35.505624671Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3554 runtime=io.containerd.runc.v2\n" Mar 18 08:58:35.506096 env[1162]: time="2025-03-18T08:58:35.506071030Z" level=info msg="TearDown network for sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" successfully" Mar 18 08:58:35.506220 env[1162]: time="2025-03-18T08:58:35.506200062Z" level=info msg="StopPodSandbox for \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" returns successfully" Mar 18 08:58:35.539721 env[1162]: time="2025-03-18T08:58:35.539678338Z" level=info msg="shim disconnected" id=31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1 Mar 18 08:58:35.540095 env[1162]: time="2025-03-18T08:58:35.540076517Z" level=warning msg="cleaning up after shim disconnected" 
id=31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1 namespace=k8s.io Mar 18 08:58:35.540196 env[1162]: time="2025-03-18T08:58:35.540180292Z" level=info msg="cleaning up dead shim" Mar 18 08:58:35.550580 env[1162]: time="2025-03-18T08:58:35.548535371Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3580 runtime=io.containerd.runc.v2\n" Mar 18 08:58:35.550580 env[1162]: time="2025-03-18T08:58:35.549630138Z" level=info msg="TearDown network for sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" successfully" Mar 18 08:58:35.550580 env[1162]: time="2025-03-18T08:58:35.549710770Z" level=info msg="StopPodSandbox for \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" returns successfully" Mar 18 08:58:35.687420 kubelet[1897]: I0318 08:58:35.687220 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsx5p\" (UniqueName: \"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-kube-api-access-lsx5p\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.687883 kubelet[1897]: I0318 08:58:35.687848 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-net\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.688206 kubelet[1897]: I0318 08:58:35.688108 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.688510 kubelet[1897]: I0318 08:58:35.688438 1897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-hostproc\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.688795 kubelet[1897]: I0318 08:58:35.688731 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.689061 kubelet[1897]: I0318 08:58:35.689003 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-bpf-maps\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.689364 kubelet[1897]: I0318 08:58:35.689282 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-cgroup\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.689622 kubelet[1897]: I0318 08:58:35.689568 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-xtables-lock\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.689898 kubelet[1897]: I0318 08:58:35.689826 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwzbr\" (UniqueName: \"kubernetes.io/projected/190f0beb-6039-4ec5-ba7d-f4198e5e0865-kube-api-access-cwzbr\") pod \"190f0beb-6039-4ec5-ba7d-f4198e5e0865\" (UID: 
\"190f0beb-6039-4ec5-ba7d-f4198e5e0865\") " Mar 18 08:58:35.690188 kubelet[1897]: I0318 08:58:35.690102 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-kernel\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.690473 kubelet[1897]: I0318 08:58:35.690404 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-etc-cni-netd\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.690769 kubelet[1897]: I0318 08:58:35.690701 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-run\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.697159 kubelet[1897]: I0318 08:58:35.697068 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cni-path\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.697327 kubelet[1897]: I0318 08:58:35.697174 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-lib-modules\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.697327 kubelet[1897]: I0318 08:58:35.697230 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-hubble-tls\") pod \"9fb2da52-b394-4eee-9638-3d8e36278947\" (UID: \"9fb2da52-b394-4eee-9638-3d8e36278947\") " Mar 18 08:58:35.697327 kubelet[1897]: I0318 08:58:35.697281 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path\") pod \"190f0beb-6039-4ec5-ba7d-f4198e5e0865\" (UID: \"190f0beb-6039-4ec5-ba7d-f4198e5e0865\") " Mar 18 08:58:35.697833 kubelet[1897]: I0318 08:58:35.689875 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.698153 kubelet[1897]: I0318 08:58:35.689931 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.698369 kubelet[1897]: I0318 08:58:35.690985 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.698596 kubelet[1897]: I0318 08:58:35.692278 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.698793 kubelet[1897]: I0318 08:58:35.692367 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.699163 kubelet[1897]: I0318 08:58:35.692396 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.699355 kubelet[1897]: I0318 08:58:35.696935 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.699562 kubelet[1897]: I0318 08:58:35.696987 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.703799 kubelet[1897]: I0318 08:58:35.703614 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.704275 kubelet[1897]: I0318 08:58:35.704047 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:35.708037 kubelet[1897]: I0318 08:58:35.707992 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "190f0beb-6039-4ec5-ba7d-f4198e5e0865" (UID: "190f0beb-6039-4ec5-ba7d-f4198e5e0865"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:58:35.708505 kubelet[1897]: I0318 08:58:35.708465 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-kube-api-access-lsx5p" (OuterVolumeSpecName: "kube-api-access-lsx5p") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "kube-api-access-lsx5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:35.709005 kubelet[1897]: I0318 08:58:35.708929 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:58:35.712913 kubelet[1897]: I0318 08:58:35.712822 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:58:35.714409 kubelet[1897]: I0318 08:58:35.714363 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/190f0beb-6039-4ec5-ba7d-f4198e5e0865-kube-api-access-cwzbr" (OuterVolumeSpecName: "kube-api-access-cwzbr") pod "190f0beb-6039-4ec5-ba7d-f4198e5e0865" (UID: "190f0beb-6039-4ec5-ba7d-f4198e5e0865"). InnerVolumeSpecName "kube-api-access-cwzbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:35.720865 kubelet[1897]: I0318 08:58:35.720806 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fb2da52-b394-4eee-9638-3d8e36278947" (UID: "9fb2da52-b394-4eee-9638-3d8e36278947"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798548 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-cgroup\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798633 1897 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-xtables-lock\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798678 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-kernel\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798722 1897 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-etc-cni-netd\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798766 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-run\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: 
I0318 08:58:35.798815 1897 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-cni-path\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.805214 kubelet[1897]: I0318 08:58:35.798849 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cwzbr\" (UniqueName: \"kubernetes.io/projected/190f0beb-6039-4ec5-ba7d-f4198e5e0865-kube-api-access-cwzbr\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.798882 1897 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-lib-modules\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.798917 1897 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-hubble-tls\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.798945 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/190f0beb-6039-4ec5-ba7d-f4198e5e0865-cilium-config-path\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.798971 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lsx5p\" (UniqueName: \"kubernetes.io/projected/9fb2da52-b394-4eee-9638-3d8e36278947-kube-api-access-lsx5p\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.798997 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-host-proc-sys-net\") on node 
\"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.799020 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fb2da52-b394-4eee-9638-3d8e36278947-cilium-config-path\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.806111 kubelet[1897]: I0318 08:58:35.799043 1897 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-hostproc\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.807967 kubelet[1897]: I0318 08:58:35.799066 1897 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fb2da52-b394-4eee-9638-3d8e36278947-clustermesh-secrets\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:35.807967 kubelet[1897]: I0318 08:58:35.799088 1897 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fb2da52-b394-4eee-9638-3d8e36278947-bpf-maps\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:36.328185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c-rootfs.mount: Deactivated successfully. Mar 18 08:58:36.328401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1-rootfs.mount: Deactivated successfully. Mar 18 08:58:36.328540 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1-shm.mount: Deactivated successfully. Mar 18 08:58:36.328708 systemd[1]: var-lib-kubelet-pods-9fb2da52\x2db394\x2d4eee\x2d9638\x2d3d8e36278947-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 18 08:58:36.328857 systemd[1]: var-lib-kubelet-pods-9fb2da52\x2db394\x2d4eee\x2d9638\x2d3d8e36278947-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 18 08:58:36.328997 systemd[1]: var-lib-kubelet-pods-190f0beb\x2d6039\x2d4ec5\x2dba7d\x2df4198e5e0865-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcwzbr.mount: Deactivated successfully. Mar 18 08:58:36.329179 systemd[1]: var-lib-kubelet-pods-9fb2da52\x2db394\x2d4eee\x2d9638\x2d3d8e36278947-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlsx5p.mount: Deactivated successfully. Mar 18 08:58:36.452549 kubelet[1897]: I0318 08:58:36.452403 1897 scope.go:117] "RemoveContainer" containerID="b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74" Mar 18 08:58:36.458236 systemd[1]: Removed slice kubepods-besteffort-pod190f0beb_6039_4ec5_ba7d_f4198e5e0865.slice. Mar 18 08:58:36.459927 env[1162]: time="2025-03-18T08:58:36.458414755Z" level=info msg="RemoveContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\"" Mar 18 08:58:36.478363 systemd[1]: Removed slice kubepods-burstable-pod9fb2da52_b394_4eee_9638_3d8e36278947.slice. Mar 18 08:58:36.478609 systemd[1]: kubepods-burstable-pod9fb2da52_b394_4eee_9638_3d8e36278947.slice: Consumed 8.670s CPU time. 
Mar 18 08:58:36.487078 env[1162]: time="2025-03-18T08:58:36.487008723Z" level=info msg="RemoveContainer for \"b98928c328331139d3fc211cad1706d0678d30f32fa40ab59046ea5b655ded74\" returns successfully" Mar 18 08:58:36.487880 kubelet[1897]: I0318 08:58:36.487801 1897 scope.go:117] "RemoveContainer" containerID="8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58" Mar 18 08:58:36.491519 env[1162]: time="2025-03-18T08:58:36.491457963Z" level=info msg="RemoveContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\"" Mar 18 08:58:36.507959 env[1162]: time="2025-03-18T08:58:36.507881302Z" level=info msg="RemoveContainer for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" returns successfully" Mar 18 08:58:36.508704 kubelet[1897]: I0318 08:58:36.508646 1897 scope.go:117] "RemoveContainer" containerID="7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955" Mar 18 08:58:36.512920 env[1162]: time="2025-03-18T08:58:36.512849617Z" level=info msg="RemoveContainer for \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\"" Mar 18 08:58:36.530178 env[1162]: time="2025-03-18T08:58:36.527255285Z" level=info msg="RemoveContainer for \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\" returns successfully" Mar 18 08:58:36.530418 kubelet[1897]: I0318 08:58:36.528037 1897 scope.go:117] "RemoveContainer" containerID="54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170" Mar 18 08:58:36.531647 env[1162]: time="2025-03-18T08:58:36.531150515Z" level=info msg="RemoveContainer for \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\"" Mar 18 08:58:36.536811 env[1162]: time="2025-03-18T08:58:36.536714228Z" level=info msg="RemoveContainer for \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\" returns successfully" Mar 18 08:58:36.537061 kubelet[1897]: I0318 08:58:36.537041 1897 scope.go:117] "RemoveContainer" 
containerID="2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c" Mar 18 08:58:36.538964 env[1162]: time="2025-03-18T08:58:36.538935552Z" level=info msg="RemoveContainer for \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\"" Mar 18 08:58:36.542628 env[1162]: time="2025-03-18T08:58:36.542603565Z" level=info msg="RemoveContainer for \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\" returns successfully" Mar 18 08:58:36.542953 kubelet[1897]: I0318 08:58:36.542904 1897 scope.go:117] "RemoveContainer" containerID="3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df" Mar 18 08:58:36.544956 env[1162]: time="2025-03-18T08:58:36.544885112Z" level=info msg="RemoveContainer for \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\"" Mar 18 08:58:36.550836 env[1162]: time="2025-03-18T08:58:36.550769147Z" level=info msg="RemoveContainer for \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\" returns successfully" Mar 18 08:58:36.551037 kubelet[1897]: I0318 08:58:36.550997 1897 scope.go:117] "RemoveContainer" containerID="8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58" Mar 18 08:58:36.551527 env[1162]: time="2025-03-18T08:58:36.551420832Z" level=error msg="ContainerStatus for \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\": not found" Mar 18 08:58:36.551834 kubelet[1897]: E0318 08:58:36.551797 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\": not found" containerID="8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58" Mar 18 08:58:36.551979 kubelet[1897]: I0318 08:58:36.551906 1897 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58"} err="failed to get container status \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b106c43335b13c7970a229133c9757cd25ad00cb57b14a23c2313cba2971d58\": not found" Mar 18 08:58:36.552052 kubelet[1897]: I0318 08:58:36.552039 1897 scope.go:117] "RemoveContainer" containerID="7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955" Mar 18 08:58:36.552597 env[1162]: time="2025-03-18T08:58:36.552470875Z" level=error msg="ContainerStatus for \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\": not found" Mar 18 08:58:36.552991 kubelet[1897]: E0318 08:58:36.552931 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\": not found" containerID="7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955" Mar 18 08:58:36.553093 kubelet[1897]: I0318 08:58:36.553044 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955"} err="failed to get container status \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\": rpc error: code = NotFound desc = an error occurred when try to find container \"7046c49b6b313e69e2c61cc2c898cf6c7d03087ca94f1ce6edd433b0b17d3955\": not found" Mar 18 08:58:36.553197 kubelet[1897]: I0318 08:58:36.553167 1897 scope.go:117] "RemoveContainer" containerID="54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170" Mar 18 08:58:36.553533 
env[1162]: time="2025-03-18T08:58:36.553488547Z" level=error msg="ContainerStatus for \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\": not found" Mar 18 08:58:36.553749 kubelet[1897]: E0318 08:58:36.553733 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\": not found" containerID="54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170" Mar 18 08:58:36.553839 kubelet[1897]: I0318 08:58:36.553816 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170"} err="failed to get container status \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\": rpc error: code = NotFound desc = an error occurred when try to find container \"54cfb131b9c38fae7c87fdc471320796649aaa6d898efa8a220900e09bc47170\": not found" Mar 18 08:58:36.553907 kubelet[1897]: I0318 08:58:36.553896 1897 scope.go:117] "RemoveContainer" containerID="2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c" Mar 18 08:58:36.554161 env[1162]: time="2025-03-18T08:58:36.554101810Z" level=error msg="ContainerStatus for \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\": not found" Mar 18 08:58:36.554486 kubelet[1897]: E0318 08:58:36.554437 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\": 
not found" containerID="2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c" Mar 18 08:58:36.554601 kubelet[1897]: I0318 08:58:36.554561 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c"} err="failed to get container status \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a7c403706814d5984b1ec263cf5d6067babb38e1849d002ea7303bcf310229c\": not found" Mar 18 08:58:36.554675 kubelet[1897]: I0318 08:58:36.554645 1897 scope.go:117] "RemoveContainer" containerID="3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df" Mar 18 08:58:36.554943 env[1162]: time="2025-03-18T08:58:36.554899479Z" level=error msg="ContainerStatus for \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\": not found" Mar 18 08:58:36.555139 kubelet[1897]: E0318 08:58:36.555100 1897 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\": not found" containerID="3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df" Mar 18 08:58:36.555227 kubelet[1897]: I0318 08:58:36.555209 1897 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df"} err="failed to get container status \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\": rpc error: code = NotFound desc = an error occurred when try to find container \"3c3f8cf7f838199823b8ca322c56e1d49689e142ff9dd74cc9a4a9bf9ed439df\": not found" Mar 18 
08:58:37.346772 sshd[3436]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:37.352183 systemd[1]: sshd@22-172.24.4.149:22-172.24.4.1:33664.service: Deactivated successfully. Mar 18 08:58:37.354406 systemd[1]: session-23.scope: Deactivated successfully. Mar 18 08:58:37.354969 systemd[1]: session-23.scope: Consumed 1.320s CPU time. Mar 18 08:58:37.356356 systemd-logind[1148]: Session 23 logged out. Waiting for processes to exit. Mar 18 08:58:37.358747 systemd[1]: Started sshd@23-172.24.4.149:22-172.24.4.1:52622.service. Mar 18 08:58:37.363524 systemd-logind[1148]: Removed session 23. Mar 18 08:58:37.622865 kubelet[1897]: I0318 08:58:37.622668 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="190f0beb-6039-4ec5-ba7d-f4198e5e0865" path="/var/lib/kubelet/pods/190f0beb-6039-4ec5-ba7d-f4198e5e0865/volumes" Mar 18 08:58:37.624938 kubelet[1897]: I0318 08:58:37.624891 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" path="/var/lib/kubelet/pods/9fb2da52-b394-4eee-9638-3d8e36278947/volumes" Mar 18 08:58:38.760953 kubelet[1897]: E0318 08:58:38.760862 1897 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 18 08:58:38.923819 sshd[3603]: Accepted publickey for core from 172.24.4.1 port 52622 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:38.926347 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:38.937699 systemd[1]: Started session-24.scope. Mar 18 08:58:38.938442 systemd-logind[1148]: New session 24 of user core. 
Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429391 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="apply-sysctl-overwrites" Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429423 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="mount-bpf-fs" Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429431 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="cilium-agent" Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429438 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="mount-cgroup" Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429444 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="clean-cilium-state" Mar 18 08:58:40.429508 kubelet[1897]: E0318 08:58:40.429451 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="190f0beb-6039-4ec5-ba7d-f4198e5e0865" containerName="cilium-operator" Mar 18 08:58:40.429508 kubelet[1897]: I0318 08:58:40.429527 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fb2da52-b394-4eee-9638-3d8e36278947" containerName="cilium-agent" Mar 18 08:58:40.429508 kubelet[1897]: I0318 08:58:40.429537 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="190f0beb-6039-4ec5-ba7d-f4198e5e0865" containerName="cilium-operator" Mar 18 08:58:40.440105 systemd[1]: Created slice kubepods-burstable-podf7384662_639d_49eb_9027_83d56b2a5f58.slice. 
Mar 18 08:58:40.531356 kubelet[1897]: I0318 08:58:40.531317 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-cgroup\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.531584 kubelet[1897]: I0318 08:58:40.531557 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-run\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.531759 kubelet[1897]: I0318 08:58:40.531675 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-config-path\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.531881 kubelet[1897]: I0318 08:58:40.531866 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-lib-modules\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532003 kubelet[1897]: I0318 08:58:40.531989 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-kernel\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532155 kubelet[1897]: I0318 08:58:40.532104 1897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-hubble-tls\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532255 kubelet[1897]: I0318 08:58:40.532242 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-ipsec-secrets\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532375 kubelet[1897]: I0318 08:58:40.532361 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-net\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532492 kubelet[1897]: I0318 08:58:40.532477 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-bpf-maps\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532606 kubelet[1897]: I0318 08:58:40.532593 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-hostproc\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532717 kubelet[1897]: I0318 08:58:40.532704 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-xtables-lock\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532843 kubelet[1897]: I0318 08:58:40.532798 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cni-path\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.532961 kubelet[1897]: I0318 08:58:40.532947 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-clustermesh-secrets\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.533071 kubelet[1897]: I0318 08:58:40.533058 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-etc-cni-netd\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.533205 kubelet[1897]: I0318 08:58:40.533181 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj4fz\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-kube-api-access-cj4fz\") pod \"cilium-bq2rc\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " pod="kube-system/cilium-bq2rc" Mar 18 08:58:40.575423 sshd[3603]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:40.584243 systemd[1]: sshd@23-172.24.4.149:22-172.24.4.1:52622.service: Deactivated successfully. Mar 18 08:58:40.585839 systemd[1]: session-24.scope: Deactivated successfully. 
Mar 18 08:58:40.588373 systemd-logind[1148]: Session 24 logged out. Waiting for processes to exit. Mar 18 08:58:40.593007 systemd[1]: Started sshd@24-172.24.4.149:22-172.24.4.1:52624.service. Mar 18 08:58:40.597392 systemd-logind[1148]: Removed session 24. Mar 18 08:58:40.744292 env[1162]: time="2025-03-18T08:58:40.744007832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq2rc,Uid:f7384662-639d-49eb-9027-83d56b2a5f58,Namespace:kube-system,Attempt:0,}" Mar 18 08:58:40.774031 env[1162]: time="2025-03-18T08:58:40.773975458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:58:40.774250 env[1162]: time="2025-03-18T08:58:40.774223763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:58:40.774347 env[1162]: time="2025-03-18T08:58:40.774324313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:58:40.774647 env[1162]: time="2025-03-18T08:58:40.774602225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1 pid=3629 runtime=io.containerd.runc.v2 Mar 18 08:58:40.802882 systemd[1]: Started cri-containerd-4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1.scope. 
Mar 18 08:58:40.842587 env[1162]: time="2025-03-18T08:58:40.842516645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq2rc,Uid:f7384662-639d-49eb-9027-83d56b2a5f58,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\"" Mar 18 08:58:40.846325 env[1162]: time="2025-03-18T08:58:40.846284545Z" level=info msg="CreateContainer within sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 18 08:58:40.862743 env[1162]: time="2025-03-18T08:58:40.862684508Z" level=info msg="CreateContainer within sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\"" Mar 18 08:58:40.864929 env[1162]: time="2025-03-18T08:58:40.864891895Z" level=info msg="StartContainer for \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\"" Mar 18 08:58:40.879905 systemd[1]: Started cri-containerd-528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9.scope. Mar 18 08:58:40.891423 systemd[1]: cri-containerd-528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9.scope: Deactivated successfully. 
Mar 18 08:58:40.912880 env[1162]: time="2025-03-18T08:58:40.912820816Z" level=info msg="shim disconnected" id=528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9 Mar 18 08:58:40.912880 env[1162]: time="2025-03-18T08:58:40.912873886Z" level=warning msg="cleaning up after shim disconnected" id=528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9 namespace=k8s.io Mar 18 08:58:40.912880 env[1162]: time="2025-03-18T08:58:40.912885648Z" level=info msg="cleaning up dead shim" Mar 18 08:58:40.921182 env[1162]: time="2025-03-18T08:58:40.921143483Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3691 runtime=io.containerd.runc.v2\ntime=\"2025-03-18T08:58:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 18 08:58:40.921961 env[1162]: time="2025-03-18T08:58:40.921905274Z" level=error msg="Failed to pipe stdout of container \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\"" error="reading from a closed fifo" Mar 18 08:58:40.922138 env[1162]: time="2025-03-18T08:58:40.922077748Z" level=error msg="Failed to pipe stderr of container \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\"" error="reading from a closed fifo" Mar 18 08:58:40.922406 env[1162]: time="2025-03-18T08:58:40.921564544Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Mar 18 08:58:40.926263 env[1162]: time="2025-03-18T08:58:40.926218989Z" level=error msg="StartContainer for \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 18 08:58:40.926765 kubelet[1897]: E0318 08:58:40.926541 1897 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9" Mar 18 08:58:40.927465 kubelet[1897]: E0318 08:58:40.926719 1897 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 18 08:58:40.927465 kubelet[1897]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 18 08:58:40.927465 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 18 08:58:40.927465 kubelet[1897]: rm /hostbin/cilium-mount Mar 18 08:58:40.927726 kubelet[1897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cj4fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bq2rc_kube-system(f7384662-639d-49eb-9027-83d56b2a5f58): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 18 08:58:40.927726 kubelet[1897]: > logger="UnhandledError" Mar 18 08:58:40.928872 kubelet[1897]: E0318 08:58:40.928199 1897 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bq2rc" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" Mar 18 08:58:41.496435 env[1162]: time="2025-03-18T08:58:41.496328157Z" level=info msg="CreateContainer within sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Mar 18 08:58:41.530062 env[1162]: time="2025-03-18T08:58:41.529983770Z" level=info msg="CreateContainer within sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\"" Mar 18 08:58:41.541574 env[1162]: time="2025-03-18T08:58:41.541503073Z" level=info msg="StartContainer for \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\"" Mar 18 08:58:41.582276 systemd[1]: Started cri-containerd-ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389.scope. Mar 18 08:58:41.593638 systemd[1]: cri-containerd-ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389.scope: Deactivated successfully. Mar 18 08:58:41.593811 systemd[1]: Stopped cri-containerd-ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389.scope. 
Mar 18 08:58:41.605622 env[1162]: time="2025-03-18T08:58:41.605567037Z" level=info msg="shim disconnected" id=ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389 Mar 18 08:58:41.605804 env[1162]: time="2025-03-18T08:58:41.605626168Z" level=warning msg="cleaning up after shim disconnected" id=ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389 namespace=k8s.io Mar 18 08:58:41.605804 env[1162]: time="2025-03-18T08:58:41.605643330Z" level=info msg="cleaning up dead shim" Mar 18 08:58:41.614592 env[1162]: time="2025-03-18T08:58:41.614521260Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3729 runtime=io.containerd.runc.v2\ntime=\"2025-03-18T08:58:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 18 08:58:41.615080 env[1162]: time="2025-03-18T08:58:41.615022662Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Mar 18 08:58:41.615545 env[1162]: time="2025-03-18T08:58:41.615507282Z" level=error msg="Failed to pipe stdout of container \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\"" error="reading from a closed fifo" Mar 18 08:58:41.615779 env[1162]: time="2025-03-18T08:58:41.615638760Z" level=error msg="Failed to pipe stderr of container \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\"" error="reading from a closed fifo" Mar 18 08:58:41.620439 env[1162]: time="2025-03-18T08:58:41.620403201Z" level=error msg="StartContainer for \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Mar 18 08:58:41.621582 kubelet[1897]: E0318 08:58:41.620603 1897 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389" Mar 18 08:58:41.621582 kubelet[1897]: E0318 08:58:41.620710 1897 kuberuntime_manager.go:1272] "Unhandled Error" err=< Mar 18 08:58:41.621582 kubelet[1897]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 18 08:58:41.621582 kubelet[1897]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 18 08:58:41.621582 kubelet[1897]: rm /hostbin/cilium-mount Mar 18 08:58:41.621582 kubelet[1897]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cj4fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-bq2rc_kube-system(f7384662-639d-49eb-9027-83d56b2a5f58): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 18 08:58:41.621582 kubelet[1897]: > logger="UnhandledError" Mar 18 08:58:41.622468 kubelet[1897]: E0318 08:58:41.622408 1897 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-bq2rc" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" Mar 18 08:58:41.745151 sshd[3616]: Accepted publickey for core from 172.24.4.1 port 52624 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:41.747878 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:41.756189 systemd-logind[1148]: New session 25 of user core. Mar 18 08:58:41.757372 systemd[1]: Started session-25.scope. Mar 18 08:58:42.438100 sshd[3616]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:42.446706 systemd[1]: Started sshd@25-172.24.4.149:22-172.24.4.1:52638.service. Mar 18 08:58:42.448832 systemd[1]: sshd@24-172.24.4.149:22-172.24.4.1:52624.service: Deactivated successfully. Mar 18 08:58:42.450876 systemd[1]: session-25.scope: Deactivated successfully. Mar 18 08:58:42.454264 systemd-logind[1148]: Session 25 logged out. Waiting for processes to exit. Mar 18 08:58:42.459838 systemd-logind[1148]: Removed session 25. 
Mar 18 08:58:42.493856 kubelet[1897]: I0318 08:58:42.493790 1897 scope.go:117] "RemoveContainer" containerID="528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9" Mar 18 08:58:42.496163 env[1162]: time="2025-03-18T08:58:42.495023098Z" level=info msg="StopPodSandbox for \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\"" Mar 18 08:58:42.496163 env[1162]: time="2025-03-18T08:58:42.495276044Z" level=info msg="Container to stop \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:42.496163 env[1162]: time="2025-03-18T08:58:42.495459188Z" level=info msg="Container to stop \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 18 08:58:42.502663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1-shm.mount: Deactivated successfully. Mar 18 08:58:42.509680 env[1162]: time="2025-03-18T08:58:42.509597611Z" level=info msg="RemoveContainer for \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\"" Mar 18 08:58:42.525510 systemd[1]: cri-containerd-4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1.scope: Deactivated successfully. Mar 18 08:58:42.542187 env[1162]: time="2025-03-18T08:58:42.539965796Z" level=info msg="RemoveContainer for \"528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9\" returns successfully" Mar 18 08:58:42.569450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1-rootfs.mount: Deactivated successfully. 
Mar 18 08:58:42.584925 env[1162]: time="2025-03-18T08:58:42.584870552Z" level=info msg="shim disconnected" id=4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1 Mar 18 08:58:42.584925 env[1162]: time="2025-03-18T08:58:42.584921698Z" level=warning msg="cleaning up after shim disconnected" id=4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1 namespace=k8s.io Mar 18 08:58:42.585103 env[1162]: time="2025-03-18T08:58:42.584933299Z" level=info msg="cleaning up dead shim" Mar 18 08:58:42.593868 env[1162]: time="2025-03-18T08:58:42.593814425Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3772 runtime=io.containerd.runc.v2\n" Mar 18 08:58:42.594202 env[1162]: time="2025-03-18T08:58:42.594164844Z" level=info msg="TearDown network for sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" successfully" Mar 18 08:58:42.594202 env[1162]: time="2025-03-18T08:58:42.594195892Z" level=info msg="StopPodSandbox for \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" returns successfully" Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664285 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664328 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-cgroup\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664377 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-hubble-tls\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664397 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-clustermesh-secrets\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664521 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-etc-cni-netd\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664543 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-xtables-lock\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664562 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-bpf-maps\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664577 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-hostproc\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664628 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-kernel\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664645 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cni-path\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664664 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-config-path\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664706 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-lib-modules\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664754 1897 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-ipsec-secrets\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664792 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-run\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664809 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-net\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.665635 kubelet[1897]: I0318 08:58:42.664829 1897 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj4fz\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-kube-api-access-cj4fz\") pod \"f7384662-639d-49eb-9027-83d56b2a5f58\" (UID: \"f7384662-639d-49eb-9027-83d56b2a5f58\") " Mar 18 08:58:42.666423 kubelet[1897]: I0318 08:58:42.664882 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-cgroup\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666737 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cni-path" (OuterVolumeSpecName: "cni-path") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666766 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666806 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666824 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666838 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-hostproc" (OuterVolumeSpecName: "hostproc") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.667030 kubelet[1897]: I0318 08:58:42.666852 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.669443 kubelet[1897]: I0318 08:58:42.669419 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.671859 kubelet[1897]: I0318 08:58:42.671724 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:58:42.672281 kubelet[1897]: I0318 08:58:42.671763 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.672382 kubelet[1897]: I0318 08:58:42.671814 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:42.674387 systemd[1]: var-lib-kubelet-pods-f7384662\x2d639d\x2d49eb\x2d9027\x2d83d56b2a5f58-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 18 08:58:42.678893 systemd[1]: var-lib-kubelet-pods-f7384662\x2d639d\x2d49eb\x2d9027\x2d83d56b2a5f58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcj4fz.mount: Deactivated successfully. Mar 18 08:58:42.681086 kubelet[1897]: I0318 08:58:42.681024 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:42.681654 kubelet[1897]: I0318 08:58:42.681623 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-kube-api-access-cj4fz" (OuterVolumeSpecName: "kube-api-access-cj4fz") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "kube-api-access-cj4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:42.684596 systemd[1]: var-lib-kubelet-pods-f7384662\x2d639d\x2d49eb\x2d9027\x2d83d56b2a5f58-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 18 08:58:42.686163 kubelet[1897]: I0318 08:58:42.686139 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:58:42.688274 systemd[1]: var-lib-kubelet-pods-f7384662\x2d639d\x2d49eb\x2d9027\x2d83d56b2a5f58-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 18 08:58:42.690079 kubelet[1897]: I0318 08:58:42.690056 1897 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f7384662-639d-49eb-9027-83d56b2a5f58" (UID: "f7384662-639d-49eb-9027-83d56b2a5f58"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:58:42.765680 kubelet[1897]: I0318 08:58:42.765627 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-kernel\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.766018 kubelet[1897]: I0318 08:58:42.765958 1897 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cni-path\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.766309 kubelet[1897]: I0318 08:58:42.766250 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-config-path\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.766653 kubelet[1897]: I0318 08:58:42.766601 1897 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-lib-modules\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.766998 kubelet[1897]: I0318 08:58:42.766968 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-ipsec-secrets\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.767282 kubelet[1897]: I0318 08:58:42.767218 1897 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-cilium-run\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.767539 kubelet[1897]: I0318 08:58:42.767510 1897 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-host-proc-sys-net\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.767784 kubelet[1897]: I0318 08:58:42.767727 1897 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cj4fz\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-kube-api-access-cj4fz\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.768043 kubelet[1897]: I0318 08:58:42.768010 1897 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f7384662-639d-49eb-9027-83d56b2a5f58-hubble-tls\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.768278 kubelet[1897]: I0318 08:58:42.768249 1897 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f7384662-639d-49eb-9027-83d56b2a5f58-clustermesh-secrets\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.768628 kubelet[1897]: I0318 08:58:42.768598 1897 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-xtables-lock\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.768850 kubelet[1897]: I0318 08:58:42.768820 1897 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-etc-cni-netd\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.769068 kubelet[1897]: I0318 08:58:42.769039 1897 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-bpf-maps\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:42.769333 kubelet[1897]: I0318 08:58:42.769305 1897 
reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f7384662-639d-49eb-9027-83d56b2a5f58-hostproc\") on node \"ci-3510-3-7-7-00419dcf52.novalocal\" DevicePath \"\"" Mar 18 08:58:43.499999 kubelet[1897]: I0318 08:58:43.499934 1897 scope.go:117] "RemoveContainer" containerID="ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389" Mar 18 08:58:43.504825 env[1162]: time="2025-03-18T08:58:43.504255354Z" level=info msg="RemoveContainer for \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\"" Mar 18 08:58:43.510010 env[1162]: time="2025-03-18T08:58:43.509947979Z" level=info msg="RemoveContainer for \"ef4641ac59266d954dc5cb9a5a0c5c52eaf7e2da648a3bf606ebd86d7d490389\" returns successfully" Mar 18 08:58:43.516805 systemd[1]: Removed slice kubepods-burstable-podf7384662_639d_49eb_9027_83d56b2a5f58.slice. Mar 18 08:58:43.620525 kubelet[1897]: I0318 08:58:43.620483 1897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" path="/var/lib/kubelet/pods/f7384662-639d-49eb-9027-83d56b2a5f58/volumes" Mar 18 08:58:43.636735 sshd[3752]: Accepted publickey for core from 172.24.4.1 port 52638 ssh2: RSA SHA256:trCuDUD/nS6E66z3GvGn3KNpSa4/x72nw+QDrOahGb4 Mar 18 08:58:43.637875 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 18 08:58:43.642625 systemd[1]: Started session-26.scope. Mar 18 08:58:43.643065 systemd-logind[1148]: New session 26 of user core. 
Mar 18 08:58:43.687713 kubelet[1897]: E0318 08:58:43.687678 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" containerName="mount-cgroup" Mar 18 08:58:43.687713 kubelet[1897]: E0318 08:58:43.687705 1897 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" containerName="mount-cgroup" Mar 18 08:58:43.688071 kubelet[1897]: I0318 08:58:43.687732 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" containerName="mount-cgroup" Mar 18 08:58:43.688071 kubelet[1897]: I0318 08:58:43.687739 1897 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7384662-639d-49eb-9027-83d56b2a5f58" containerName="mount-cgroup" Mar 18 08:58:43.692482 systemd[1]: Created slice kubepods-burstable-podaec83931_7ccd_4a5d_acc2_f84e5ce97fbb.slice. Mar 18 08:58:43.761564 kubelet[1897]: E0318 08:58:43.761466 1897 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 18 08:58:43.777471 kubelet[1897]: I0318 08:58:43.777450 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-hubble-tls\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.777596 kubelet[1897]: I0318 08:58:43.777580 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-clustermesh-secrets\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.777687 kubelet[1897]: I0318 08:58:43.777671 1897 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-cilium-config-path\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.777767 kubelet[1897]: I0318 08:58:43.777754 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-cilium-ipsec-secrets\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.777850 kubelet[1897]: I0318 08:58:43.777835 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-host-proc-sys-kernel\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.777936 kubelet[1897]: I0318 08:58:43.777922 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-cilium-run\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778042 kubelet[1897]: I0318 08:58:43.778026 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-lib-modules\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778153 kubelet[1897]: I0318 08:58:43.778137 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-etc-cni-netd\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778242 kubelet[1897]: I0318 08:58:43.778227 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-hostproc\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778331 kubelet[1897]: I0318 08:58:43.778316 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmzz\" (UniqueName: \"kubernetes.io/projected/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-kube-api-access-jqmzz\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778412 kubelet[1897]: I0318 08:58:43.778398 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-bpf-maps\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778555 kubelet[1897]: I0318 08:58:43.778482 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-cilium-cgroup\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778647 kubelet[1897]: I0318 08:58:43.778633 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-cni-path\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") 
" pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778733 kubelet[1897]: I0318 08:58:43.778719 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-xtables-lock\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.778820 kubelet[1897]: I0318 08:58:43.778806 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aec83931-7ccd-4a5d-acc2-f84e5ce97fbb-host-proc-sys-net\") pod \"cilium-wqfcw\" (UID: \"aec83931-7ccd-4a5d-acc2-f84e5ce97fbb\") " pod="kube-system/cilium-wqfcw" Mar 18 08:58:43.996427 env[1162]: time="2025-03-18T08:58:43.996046755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfcw,Uid:aec83931-7ccd-4a5d-acc2-f84e5ce97fbb,Namespace:kube-system,Attempt:0,}" Mar 18 08:58:44.018232 env[1162]: time="2025-03-18T08:58:44.016430409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 18 08:58:44.018232 env[1162]: time="2025-03-18T08:58:44.016480162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 18 08:58:44.018232 env[1162]: time="2025-03-18T08:58:44.016495813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 18 08:58:44.018872 kubelet[1897]: W0318 08:58:44.018831 1897 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7384662_639d_49eb_9027_83d56b2a5f58.slice/cri-containerd-528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9.scope WatchSource:0}: container "528d230dcd184fa21e8d7a68af849ca2bea192857bac6401b252bf76fa32fcb9" in namespace "k8s.io": not found Mar 18 08:58:44.020513 env[1162]: time="2025-03-18T08:58:44.020448568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81 pid=3805 runtime=io.containerd.runc.v2 Mar 18 08:58:44.050663 systemd[1]: Started cri-containerd-5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81.scope. Mar 18 08:58:44.080322 env[1162]: time="2025-03-18T08:58:44.080285385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfcw,Uid:aec83931-7ccd-4a5d-acc2-f84e5ce97fbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\"" Mar 18 08:58:44.083876 env[1162]: time="2025-03-18T08:58:44.083847506Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 18 08:58:44.098283 env[1162]: time="2025-03-18T08:58:44.098242221Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d\"" Mar 18 08:58:44.099070 env[1162]: time="2025-03-18T08:58:44.099048735Z" level=info msg="StartContainer for \"874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d\"" Mar 
18 08:58:44.123023 systemd[1]: Started cri-containerd-874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d.scope. Mar 18 08:58:44.182781 systemd[1]: cri-containerd-874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d.scope: Deactivated successfully. Mar 18 08:58:44.335185 env[1162]: time="2025-03-18T08:58:44.334963090Z" level=info msg="StartContainer for \"874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d\" returns successfully" Mar 18 08:58:44.400253 env[1162]: time="2025-03-18T08:58:44.400211623Z" level=info msg="shim disconnected" id=874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d Mar 18 08:58:44.400512 env[1162]: time="2025-03-18T08:58:44.400493112Z" level=warning msg="cleaning up after shim disconnected" id=874b56cb42d46a2991307130042c1506762656bf54fb091bded8a2a3d03a634d namespace=k8s.io Mar 18 08:58:44.400606 env[1162]: time="2025-03-18T08:58:44.400590906Z" level=info msg="cleaning up dead shim" Mar 18 08:58:44.408884 env[1162]: time="2025-03-18T08:58:44.408841887Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3890 runtime=io.containerd.runc.v2\n" Mar 18 08:58:44.517853 env[1162]: time="2025-03-18T08:58:44.517767967Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 18 08:58:44.547564 env[1162]: time="2025-03-18T08:58:44.547428702Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a\"" Mar 18 08:58:44.548979 env[1162]: time="2025-03-18T08:58:44.548190924Z" level=info msg="StartContainer for \"ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a\"" Mar 18 
08:58:44.578911 systemd[1]: Started cri-containerd-ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a.scope. Mar 18 08:58:44.622574 systemd[1]: cri-containerd-ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a.scope: Deactivated successfully. Mar 18 08:58:44.624302 env[1162]: time="2025-03-18T08:58:44.624248414Z" level=info msg="StartContainer for \"ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a\" returns successfully" Mar 18 08:58:44.648178 env[1162]: time="2025-03-18T08:58:44.648102780Z" level=info msg="shim disconnected" id=ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a Mar 18 08:58:44.648402 env[1162]: time="2025-03-18T08:58:44.648382846Z" level=warning msg="cleaning up after shim disconnected" id=ab5bc2c5b550b9b28cec037f3dabf71b1729c8c6d4ac3022601406767b44e57a namespace=k8s.io Mar 18 08:58:44.648498 env[1162]: time="2025-03-18T08:58:44.648482903Z" level=info msg="cleaning up dead shim" Mar 18 08:58:44.657246 env[1162]: time="2025-03-18T08:58:44.657197646Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n" Mar 18 08:58:45.549173 env[1162]: time="2025-03-18T08:58:45.547261151Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 18 08:58:45.618637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49730834.mount: Deactivated successfully. 
Mar 18 08:58:45.631046 env[1162]: time="2025-03-18T08:58:45.630977839Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292\"" Mar 18 08:58:45.631613 env[1162]: time="2025-03-18T08:58:45.631583106Z" level=info msg="StartContainer for \"2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292\"" Mar 18 08:58:45.652232 systemd[1]: Started cri-containerd-2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292.scope. Mar 18 08:58:45.685289 systemd[1]: cri-containerd-2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292.scope: Deactivated successfully. Mar 18 08:58:45.689639 env[1162]: time="2025-03-18T08:58:45.689606423Z" level=info msg="StartContainer for \"2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292\" returns successfully" Mar 18 08:58:45.716603 env[1162]: time="2025-03-18T08:58:45.716556364Z" level=info msg="shim disconnected" id=2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292 Mar 18 08:58:45.716924 env[1162]: time="2025-03-18T08:58:45.716902644Z" level=warning msg="cleaning up after shim disconnected" id=2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292 namespace=k8s.io Mar 18 08:58:45.717017 env[1162]: time="2025-03-18T08:58:45.716997692Z" level=info msg="cleaning up dead shim" Mar 18 08:58:45.725397 env[1162]: time="2025-03-18T08:58:45.725357889Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4011 runtime=io.containerd.runc.v2\n" Mar 18 08:58:45.899169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d70b571133478a29c8d59b0275d0d47e5ea9b4384de3f7401266d82a53fd292-rootfs.mount: Deactivated successfully. 
Mar 18 08:58:46.556965 env[1162]: time="2025-03-18T08:58:46.556891577Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 18 08:58:46.601717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188296968.mount: Deactivated successfully. Mar 18 08:58:46.611636 env[1162]: time="2025-03-18T08:58:46.611536305Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283\"" Mar 18 08:58:46.615503 env[1162]: time="2025-03-18T08:58:46.615422276Z" level=info msg="StartContainer for \"dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283\"" Mar 18 08:58:46.643554 systemd[1]: Started cri-containerd-dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283.scope. Mar 18 08:58:46.674079 systemd[1]: cri-containerd-dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283.scope: Deactivated successfully. 
Mar 18 08:58:46.675645 env[1162]: time="2025-03-18T08:58:46.675422245Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec83931_7ccd_4a5d_acc2_f84e5ce97fbb.slice/cri-containerd-dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283.scope/memory.events\": no such file or directory" Mar 18 08:58:46.679941 env[1162]: time="2025-03-18T08:58:46.679907160Z" level=info msg="StartContainer for \"dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283\" returns successfully" Mar 18 08:58:46.703365 env[1162]: time="2025-03-18T08:58:46.703322970Z" level=info msg="shim disconnected" id=dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283 Mar 18 08:58:46.703579 env[1162]: time="2025-03-18T08:58:46.703560306Z" level=warning msg="cleaning up after shim disconnected" id=dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283 namespace=k8s.io Mar 18 08:58:46.704874 env[1162]: time="2025-03-18T08:58:46.704846032Z" level=info msg="cleaning up dead shim" Mar 18 08:58:46.712486 env[1162]: time="2025-03-18T08:58:46.712447883Z" level=warning msg="cleanup warnings time=\"2025-03-18T08:58:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" Mar 18 08:58:46.899941 systemd[1]: run-containerd-runc-k8s.io-dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283-runc.aQ8JYf.mount: Deactivated successfully. Mar 18 08:58:46.901322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dffc6ded88a2205e284b18211cb7a240b9993e5177e0d20ae17ad309d8cce283-rootfs.mount: Deactivated successfully. 
Mar 18 08:58:47.569085 env[1162]: time="2025-03-18T08:58:47.568998433Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 18 08:58:47.623175 env[1162]: time="2025-03-18T08:58:47.622542342Z" level=info msg="CreateContainer within sandbox \"5d6faf4db06d836410f39c31f4df41f123bb9a32122491cc800c797cd3262f81\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f\"" Mar 18 08:58:47.627500 env[1162]: time="2025-03-18T08:58:47.627426147Z" level=info msg="StartContainer for \"4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f\"" Mar 18 08:58:47.666615 systemd[1]: Started cri-containerd-4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f.scope. Mar 18 08:58:47.705324 env[1162]: time="2025-03-18T08:58:47.705280105Z" level=info msg="StartContainer for \"4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f\" returns successfully" Mar 18 08:58:48.160215 kernel: cryptd: max_cpu_qlen set to 1000 Mar 18 08:58:48.216135 kubelet[1897]: I0318 08:58:48.216031 1897 setters.go:600] "Node became not ready" node="ci-3510-3-7-7-00419dcf52.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-18T08:58:48Z","lastTransitionTime":"2025-03-18T08:58:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 18 08:58:48.225158 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Mar 18 08:58:51.332249 systemd-networkd[982]: lxc_health: Link UP Mar 18 08:58:51.349510 systemd-networkd[982]: lxc_health: Gained carrier Mar 18 08:58:51.350398 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 18 
08:58:52.043495 kubelet[1897]: I0318 08:58:52.043443 1897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqfcw" podStartSLOduration=9.043426775 podStartE2EDuration="9.043426775s" podCreationTimestamp="2025-03-18 08:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-18 08:58:48.627492641 +0000 UTC m=+225.110964165" watchObservedRunningTime="2025-03-18 08:58:52.043426775 +0000 UTC m=+228.526898239" Mar 18 08:58:52.798393 systemd-networkd[982]: lxc_health: Gained IPv6LL Mar 18 08:58:54.824947 systemd[1]: run-containerd-runc-k8s.io-4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f-runc.eHUUXt.mount: Deactivated successfully. Mar 18 08:58:57.044684 systemd[1]: run-containerd-runc-k8s.io-4c60995d4cd85a067e536116e62b62a99d76a62bcbfd2e191a6ac1b02ecd890f-runc.t97QdY.mount: Deactivated successfully. Mar 18 08:58:57.439974 sshd[3752]: pam_unix(sshd:session): session closed for user core Mar 18 08:58:57.446350 systemd[1]: sshd@25-172.24.4.149:22-172.24.4.1:52638.service: Deactivated successfully. Mar 18 08:58:57.447974 systemd[1]: session-26.scope: Deactivated successfully. Mar 18 08:58:57.449428 systemd-logind[1148]: Session 26 logged out. Waiting for processes to exit. Mar 18 08:58:57.451379 systemd-logind[1148]: Removed session 26. 
Mar 18 08:59:03.659488 env[1162]: time="2025-03-18T08:59:03.659371576Z" level=info msg="StopPodSandbox for \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\"" Mar 18 08:59:03.662351 env[1162]: time="2025-03-18T08:59:03.659557524Z" level=info msg="TearDown network for sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" successfully" Mar 18 08:59:03.662351 env[1162]: time="2025-03-18T08:59:03.659634710Z" level=info msg="StopPodSandbox for \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" returns successfully" Mar 18 08:59:03.662351 env[1162]: time="2025-03-18T08:59:03.660811871Z" level=info msg="RemovePodSandbox for \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\"" Mar 18 08:59:03.662351 env[1162]: time="2025-03-18T08:59:03.660870411Z" level=info msg="Forcibly stopping sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\"" Mar 18 08:59:03.662351 env[1162]: time="2025-03-18T08:59:03.661162469Z" level=info msg="TearDown network for sandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" successfully" Mar 18 08:59:03.672430 env[1162]: time="2025-03-18T08:59:03.672265643Z" level=info msg="RemovePodSandbox \"31f3101795338014e20b50589001961dad7d70a3f1707b4f5c5c752268c1cdd1\" returns successfully" Mar 18 08:59:03.673378 env[1162]: time="2025-03-18T08:59:03.673295517Z" level=info msg="StopPodSandbox for \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\"" Mar 18 08:59:03.673556 env[1162]: time="2025-03-18T08:59:03.673452382Z" level=info msg="TearDown network for sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" successfully" Mar 18 08:59:03.673556 env[1162]: time="2025-03-18T08:59:03.673534516Z" level=info msg="StopPodSandbox for \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" returns successfully" Mar 18 08:59:03.677225 env[1162]: time="2025-03-18T08:59:03.674308289Z" level=info 
msg="RemovePodSandbox for \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\"" Mar 18 08:59:03.677225 env[1162]: time="2025-03-18T08:59:03.674383410Z" level=info msg="Forcibly stopping sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\"" Mar 18 08:59:03.677225 env[1162]: time="2025-03-18T08:59:03.674598885Z" level=info msg="TearDown network for sandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" successfully" Mar 18 08:59:03.680595 env[1162]: time="2025-03-18T08:59:03.680537399Z" level=info msg="RemovePodSandbox \"4f7637f3c387e0a795589f760f67999ee3ad1f6e8c743be58e8991f71f631cf1\" returns successfully" Mar 18 08:59:03.681410 env[1162]: time="2025-03-18T08:59:03.681337502Z" level=info msg="StopPodSandbox for \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\"" Mar 18 08:59:03.681569 env[1162]: time="2025-03-18T08:59:03.681489006Z" level=info msg="TearDown network for sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" successfully" Mar 18 08:59:03.681673 env[1162]: time="2025-03-18T08:59:03.681566602Z" level=info msg="StopPodSandbox for \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" returns successfully" Mar 18 08:59:03.682213 env[1162]: time="2025-03-18T08:59:03.682108510Z" level=info msg="RemovePodSandbox for \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\"" Mar 18 08:59:03.682464 env[1162]: time="2025-03-18T08:59:03.682383005Z" level=info msg="Forcibly stopping sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\"" Mar 18 08:59:03.682776 env[1162]: time="2025-03-18T08:59:03.682729186Z" level=info msg="TearDown network for sandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" successfully" Mar 18 08:59:03.688433 env[1162]: time="2025-03-18T08:59:03.688378626Z" level=info msg="RemovePodSandbox \"34284a9020414329ccc7bb5c6f98d24792d8d302c632615fcbce0562ec642f6c\" returns 
successfully"