Mar 17 20:21:30.021106 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 20:21:30.021162 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 20:21:30.021184 kernel: BIOS-provided physical RAM map:
Mar 17 20:21:30.021203 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 20:21:30.021216 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 20:21:30.021229 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 20:21:30.021243 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 17 20:21:30.021256 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 17 20:21:30.021269 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 20:21:30.021281 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 20:21:30.021294 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 17 20:21:30.021306 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 20:21:30.021397 kernel: NX (Execute Disable) protection: active
Mar 17 20:21:30.021410 kernel: SMBIOS 3.0.0 present.
Mar 17 20:21:30.021426 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 17 20:21:30.021439 kernel: Hypervisor detected: KVM
Mar 17 20:21:30.021452 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 20:21:30.021465 kernel: kvm-clock: cpu 0, msr 8019a001, primary cpu clock
Mar 17 20:21:30.021481 kernel: kvm-clock: using sched offset of 4184234235 cycles
Mar 17 20:21:30.021496 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 20:21:30.021510 kernel: tsc: Detected 1996.249 MHz processor
Mar 17 20:21:30.021524 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 20:21:30.021539 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 20:21:30.021553 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 17 20:21:30.021567 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 20:21:30.021580 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 17 20:21:30.021594 kernel: ACPI: Early table checksum verification disabled
Mar 17 20:21:30.021611 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 17 20:21:30.021625 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:21:30.021639 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:21:30.021653 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:21:30.021667 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 17 20:21:30.021681 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:21:30.021695 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 20:21:30.021709 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 17 20:21:30.021725 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 17 20:21:30.021739 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 17 20:21:30.021753 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 17 20:21:30.021767 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 17 20:21:30.021781 kernel: No NUMA configuration found
Mar 17 20:21:30.021800 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 17 20:21:30.021814 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 17 20:21:30.021830 kernel: Zone ranges:
Mar 17 20:21:30.021845 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 20:21:30.021859 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 17 20:21:30.021874 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 20:21:30.021888 kernel: Movable zone start for each node
Mar 17 20:21:30.021902 kernel: Early memory node ranges
Mar 17 20:21:30.021916 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 20:21:30.021930 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 17 20:21:30.021947 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 17 20:21:30.021961 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 17 20:21:30.021975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 20:21:30.021989 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 20:21:30.022004 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 17 20:21:30.022018 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 20:21:30.022032 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 20:21:30.022046 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 20:21:30.022060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 20:21:30.022077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 20:21:30.022092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 20:21:30.022106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 20:21:30.022120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 20:21:30.022135 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 20:21:30.022149 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 20:21:30.022163 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 17 20:21:30.022178 kernel: Booting paravirtualized kernel on KVM
Mar 17 20:21:30.022192 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 20:21:30.022209 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Mar 17 20:21:30.022224 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Mar 17 20:21:30.022238 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Mar 17 20:21:30.022252 kernel: pcpu-alloc: [0] 0 1
Mar 17 20:21:30.022266 kernel: kvm-guest: stealtime: cpu 0, msr 13bc1c0c0
Mar 17 20:21:30.022280 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 20:21:30.022294 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 17 20:21:30.022308 kernel: Policy zone: Normal
Mar 17 20:21:30.022346 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 20:21:30.022365 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 20:21:30.022379 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 20:21:30.022394 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 20:21:30.022408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 20:21:30.022423 kernel: Memory: 3968276K/4193772K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 225236K reserved, 0K cma-reserved)
Mar 17 20:21:30.022438 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 20:21:30.022452 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 20:21:30.022466 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 20:21:30.022483 kernel: rcu: Hierarchical RCU implementation.
Mar 17 20:21:30.022499 kernel: rcu: RCU event tracing is enabled.
Mar 17 20:21:30.022514 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 20:21:30.022528 kernel: Rude variant of Tasks RCU enabled.
Mar 17 20:21:30.022543 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 20:21:30.022557 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 20:21:30.022572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 20:21:30.022586 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 20:21:30.022600 kernel: Console: colour VGA+ 80x25
Mar 17 20:21:30.022617 kernel: printk: console [tty0] enabled
Mar 17 20:21:30.022631 kernel: printk: console [ttyS0] enabled
Mar 17 20:21:30.022645 kernel: ACPI: Core revision 20210730
Mar 17 20:21:30.022659 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 20:21:30.022823 kernel: x2apic enabled
Mar 17 20:21:30.022840 kernel: Switched APIC routing to physical x2apic.
Mar 17 20:21:30.022856 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 20:21:30.022872 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 20:21:30.022888 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 17 20:21:30.022908 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 20:21:30.022924 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 20:21:30.022941 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 20:21:30.022957 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 20:21:30.022973 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 20:21:30.022989 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 20:21:30.023005 kernel: Speculative Store Bypass: Vulnerable
Mar 17 20:21:30.023021 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 17 20:21:30.023037 kernel: Freeing SMP alternatives memory: 32K
Mar 17 20:21:30.023059 kernel: pid_max: default: 32768 minimum: 301
Mar 17 20:21:30.023081 kernel: LSM: Security Framework initializing
Mar 17 20:21:30.023102 kernel: SELinux: Initializing.
Mar 17 20:21:30.023124 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 20:21:30.023147 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 20:21:30.023164 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 17 20:21:30.023192 kernel: Performance Events: AMD PMU driver.
Mar 17 20:21:30.023212 kernel: ... version: 0
Mar 17 20:21:30.023229 kernel: ... bit width: 48
Mar 17 20:21:30.023245 kernel: ... generic registers: 4
Mar 17 20:21:30.023262 kernel: ... value mask: 0000ffffffffffff
Mar 17 20:21:30.023278 kernel: ... max period: 00007fffffffffff
Mar 17 20:21:30.023298 kernel: ... fixed-purpose events: 0
Mar 17 20:21:30.023337 kernel: ... event mask: 000000000000000f
Mar 17 20:21:30.023355 kernel: signal: max sigframe size: 1440
Mar 17 20:21:30.023372 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 20:21:30.023388 kernel: smp: Bringing up secondary CPUs ...
Mar 17 20:21:30.023408 kernel: x86: Booting SMP configuration:
Mar 17 20:21:30.023425 kernel: .... node #0, CPUs: #1
Mar 17 20:21:30.023442 kernel: kvm-clock: cpu 1, msr 8019a041, secondary cpu clock
Mar 17 20:21:30.023459 kernel: kvm-guest: stealtime: cpu 1, msr 13bd1c0c0
Mar 17 20:21:30.023475 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 20:21:30.023492 kernel: smpboot: Max logical packages: 2
Mar 17 20:21:30.023508 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 17 20:21:30.023525 kernel: devtmpfs: initialized
Mar 17 20:21:30.023541 kernel: x86/mm: Memory block size: 128MB
Mar 17 20:21:30.023556 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 20:21:30.023568 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 20:21:30.023579 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 20:21:30.023590 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 20:21:30.023601 kernel: audit: initializing netlink subsys (disabled)
Mar 17 20:21:30.023612 kernel: audit: type=2000 audit(1742242889.744:1): state=initialized audit_enabled=0 res=1
Mar 17 20:21:30.023621 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 20:21:30.023630 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 20:21:30.023639 kernel: cpuidle: using governor menu
Mar 17 20:21:30.023649 kernel: ACPI: bus type PCI registered
Mar 17 20:21:30.023658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 20:21:30.023668 kernel: dca service started, version 1.12.1
Mar 17 20:21:30.023677 kernel: PCI: Using configuration type 1 for base access
Mar 17 20:21:30.023686 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 20:21:30.023695 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 20:21:30.023704 kernel: ACPI: Added _OSI(Module Device)
Mar 17 20:21:30.023713 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 20:21:30.023722 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 20:21:30.023732 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 20:21:30.023741 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 20:21:30.023750 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 20:21:30.023759 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 20:21:30.023768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 20:21:30.023776 kernel: ACPI: Interpreter enabled
Mar 17 20:21:30.023785 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 20:21:30.023794 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 20:21:30.023803 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 20:21:30.023814 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 20:21:30.023823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 20:21:30.023972 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 20:21:30.024078 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Mar 17 20:21:30.024093 kernel: acpiphp: Slot [3] registered
Mar 17 20:21:30.024102 kernel: acpiphp: Slot [4] registered
Mar 17 20:21:30.024111 kernel: acpiphp: Slot [5] registered
Mar 17 20:21:30.024120 kernel: acpiphp: Slot [6] registered
Mar 17 20:21:30.024133 kernel: acpiphp: Slot [7] registered
Mar 17 20:21:30.024142 kernel: acpiphp: Slot [8] registered
Mar 17 20:21:30.024151 kernel: acpiphp: Slot [9] registered
Mar 17 20:21:30.024160 kernel: acpiphp: Slot [10] registered
Mar 17 20:21:30.024169 kernel: acpiphp: Slot [11] registered
Mar 17 20:21:30.024178 kernel: acpiphp: Slot [12] registered
Mar 17 20:21:30.024187 kernel: acpiphp: Slot [13] registered
Mar 17 20:21:30.024196 kernel: acpiphp: Slot [14] registered
Mar 17 20:21:30.024204 kernel: acpiphp: Slot [15] registered
Mar 17 20:21:30.024215 kernel: acpiphp: Slot [16] registered
Mar 17 20:21:30.024224 kernel: acpiphp: Slot [17] registered
Mar 17 20:21:30.024233 kernel: acpiphp: Slot [18] registered
Mar 17 20:21:30.024242 kernel: acpiphp: Slot [19] registered
Mar 17 20:21:30.024250 kernel: acpiphp: Slot [20] registered
Mar 17 20:21:30.024259 kernel: acpiphp: Slot [21] registered
Mar 17 20:21:30.024268 kernel: acpiphp: Slot [22] registered
Mar 17 20:21:30.024277 kernel: acpiphp: Slot [23] registered
Mar 17 20:21:30.024285 kernel: acpiphp: Slot [24] registered
Mar 17 20:21:30.024294 kernel: acpiphp: Slot [25] registered
Mar 17 20:21:30.024305 kernel: acpiphp: Slot [26] registered
Mar 17 20:21:30.027876 kernel: acpiphp: Slot [27] registered
Mar 17 20:21:30.027892 kernel: acpiphp: Slot [28] registered
Mar 17 20:21:30.027902 kernel: acpiphp: Slot [29] registered
Mar 17 20:21:30.027911 kernel: acpiphp: Slot [30] registered
Mar 17 20:21:30.027920 kernel: acpiphp: Slot [31] registered
Mar 17 20:21:30.027929 kernel: PCI host bridge to bus 0000:00
Mar 17 20:21:30.028044 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 20:21:30.028131 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 20:21:30.028235 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 20:21:30.028338 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 20:21:30.028419 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 17 20:21:30.028496 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 20:21:30.028598 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 20:21:30.028716 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 20:21:30.028839 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 20:21:30.028932 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 17 20:21:30.029019 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 20:21:30.029104 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 20:21:30.029189 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 20:21:30.029273 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 20:21:30.029399 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 20:21:30.029489 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 20:21:30.029574 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 20:21:30.029667 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 20:21:30.029771 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 20:21:30.029900 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 17 20:21:30.029999 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 17 20:21:30.030091 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 17 20:21:30.030177 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 20:21:30.030272 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 20:21:30.030380 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 17 20:21:30.030470 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 17 20:21:30.030557 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 17 20:21:30.030645 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 17 20:21:30.030762 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 17 20:21:30.030847 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 20:21:30.030929 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 17 20:21:30.031010 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 17 20:21:30.031097 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 20:21:30.031179 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 17 20:21:30.031260 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 17 20:21:30.031368 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 20:21:30.031453 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 17 20:21:30.031534 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 17 20:21:30.031615 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 17 20:21:30.031628 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 20:21:30.031636 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 20:21:30.031645 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 20:21:30.031656 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 20:21:30.031664 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 20:21:30.031672 kernel: iommu: Default domain type: Translated
Mar 17 20:21:30.031681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 20:21:30.031760 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 20:21:30.031842 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 20:21:30.031925 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 20:21:30.031944 kernel: vgaarb: loaded
Mar 17 20:21:30.031952 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 20:21:30.031963 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 20:21:30.031971 kernel: PTP clock support registered
Mar 17 20:21:30.031980 kernel: PCI: Using ACPI for IRQ routing
Mar 17 20:21:30.031988 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 20:21:30.031996 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 20:21:30.032005 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 17 20:21:30.032013 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 20:21:30.032021 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 20:21:30.032029 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 20:21:30.032039 kernel: pnp: PnP ACPI init
Mar 17 20:21:30.032120 kernel: pnp 00:03: [dma 2]
Mar 17 20:21:30.032133 kernel: pnp: PnP ACPI: found 5 devices
Mar 17 20:21:30.032141 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 20:21:30.032149 kernel: NET: Registered PF_INET protocol family
Mar 17 20:21:30.032157 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 20:21:30.032166 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 20:21:30.032174 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 20:21:30.032185 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 20:21:30.032193 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 20:21:30.032201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 20:21:30.032210 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 20:21:30.032218 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 20:21:30.032226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 20:21:30.032234 kernel: NET: Registered PF_XDP protocol family
Mar 17 20:21:30.032306 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 20:21:30.032402 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 20:21:30.032477 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 20:21:30.032547 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 17 20:21:30.032618 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 17 20:21:30.032700 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 20:21:30.032780 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 20:21:30.032863 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Mar 17 20:21:30.032874 kernel: PCI: CLS 0 bytes, default 64
Mar 17 20:21:30.032883 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 17 20:21:30.032894 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 17 20:21:30.032902 kernel: Initialise system trusted keyrings
Mar 17 20:21:30.032911 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 20:21:30.032919 kernel: Key type asymmetric registered
Mar 17 20:21:30.032927 kernel: Asymmetric key parser 'x509' registered
Mar 17 20:21:30.032935 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 20:21:30.032943 kernel: io scheduler mq-deadline registered
Mar 17 20:21:30.032951 kernel: io scheduler kyber registered
Mar 17 20:21:30.032959 kernel: io scheduler bfq registered
Mar 17 20:21:30.032969 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 20:21:30.032978 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 20:21:30.032986 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 20:21:30.032995 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 20:21:30.033003 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 20:21:30.033011 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 20:21:30.033019 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 20:21:30.033027 kernel: random: crng init done
Mar 17 20:21:30.033035 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 20:21:30.033045 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 20:21:30.033053 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 20:21:30.033061 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 20:21:30.033142 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 20:21:30.033219 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 20:21:30.033292 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T20:21:29 UTC (1742242889)
Mar 17 20:21:30.033390 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 20:21:30.033403 kernel: NET: Registered PF_INET6 protocol family
Mar 17 20:21:30.033414 kernel: Segment Routing with IPv6
Mar 17 20:21:30.033423 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 20:21:30.033431 kernel: NET: Registered PF_PACKET protocol family
Mar 17 20:21:30.033439 kernel: Key type dns_resolver registered
Mar 17 20:21:30.033447 kernel: IPI shorthand broadcast: enabled
Mar 17 20:21:30.033455 kernel: sched_clock: Marking stable (833672613, 166709361)->(1073947798, -73565824)
Mar 17 20:21:30.033464 kernel: registered taskstats version 1
Mar 17 20:21:30.033472 kernel: Loading compiled-in X.509 certificates
Mar 17 20:21:30.033480 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 20:21:30.033490 kernel: Key type .fscrypt registered
Mar 17 20:21:30.033498 kernel: Key type fscrypt-provisioning registered
Mar 17 20:21:30.033506 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 20:21:30.033514 kernel: ima: Allocated hash algorithm: sha1
Mar 17 20:21:30.033523 kernel: ima: No architecture policies found
Mar 17 20:21:30.033531 kernel: clk: Disabling unused clocks
Mar 17 20:21:30.033539 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 20:21:30.033547 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 20:21:30.033555 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 20:21:30.033859 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 20:21:30.033868 kernel: Run /init as init process
Mar 17 20:21:30.033876 kernel: with arguments:
Mar 17 20:21:30.033884 kernel: /init
Mar 17 20:21:30.033892 kernel: with environment:
Mar 17 20:21:30.033900 kernel: HOME=/
Mar 17 20:21:30.033907 kernel: TERM=linux
Mar 17 20:21:30.033915 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 20:21:30.033927 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 20:21:30.033940 systemd[1]: Detected virtualization kvm.
Mar 17 20:21:30.033949 systemd[1]: Detected architecture x86-64.
Mar 17 20:21:30.033958 systemd[1]: Running in initrd.
Mar 17 20:21:30.033966 systemd[1]: No hostname configured, using default hostname.
Mar 17 20:21:30.033975 systemd[1]: Hostname set to .
Mar 17 20:21:30.033984 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 20:21:30.033995 systemd[1]: Queued start job for default target initrd.target.
Mar 17 20:21:30.034003 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 20:21:30.034012 systemd[1]: Reached target cryptsetup.target.
Mar 17 20:21:30.034020 systemd[1]: Reached target paths.target.
Mar 17 20:21:30.034029 systemd[1]: Reached target slices.target.
Mar 17 20:21:30.034037 systemd[1]: Reached target swap.target.
Mar 17 20:21:30.034046 systemd[1]: Reached target timers.target.
Mar 17 20:21:30.034055 systemd[1]: Listening on iscsid.socket.
Mar 17 20:21:30.034066 systemd[1]: Listening on iscsiuio.socket.
Mar 17 20:21:30.034080 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 20:21:30.034091 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 20:21:30.034100 systemd[1]: Listening on systemd-journald.socket.
Mar 17 20:21:30.034109 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 20:21:30.034118 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 20:21:30.034128 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 20:21:30.034137 systemd[1]: Reached target sockets.target.
Mar 17 20:21:30.034146 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 20:21:30.034155 systemd[1]: Finished network-cleanup.service.
Mar 17 20:21:30.034164 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 20:21:30.034173 systemd[1]: Starting systemd-journald.service...
Mar 17 20:21:30.034182 systemd[1]: Starting systemd-modules-load.service...
Mar 17 20:21:30.034191 systemd[1]: Starting systemd-resolved.service...
Mar 17 20:21:30.034200 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 20:21:30.034210 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 20:21:30.034219 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 20:21:30.034231 systemd-journald[185]: Journal started
Mar 17 20:21:30.034280 systemd-journald[185]: Runtime Journal (/run/log/journal/ca1000e9b273422fb8693e30e0f9ff2d) is 8.0M, max 78.4M, 70.4M free.
Mar 17 20:21:30.024757 systemd-modules-load[186]: Inserted module 'overlay'
Mar 17 20:21:30.054675 systemd[1]: Started systemd-resolved.service.
Mar 17 20:21:30.054704 kernel: audit: type=1130 audit(1742242890.047:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.030755 systemd-resolved[187]: Positive Trust Anchors:
Mar 17 20:21:30.063351 systemd[1]: Started systemd-journald.service.
Mar 17 20:21:30.063376 kernel: audit: type=1130 audit(1742242890.054:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.030768 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 20:21:30.069162 kernel: audit: type=1130 audit(1742242890.063:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.030806 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 20:21:30.079612 kernel: audit: type=1130 audit(1742242890.069:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.079636 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 20:21:30.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.033749 systemd-resolved[187]: Defaulting to hostname 'linux'.
Mar 17 20:21:30.064186 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 20:21:30.069878 systemd[1]: Reached target nss-lookup.target.
Mar 17 20:21:30.080933 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 20:21:30.083171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 20:21:30.092332 kernel: Bridge firewalling registered
Mar 17 20:21:30.089881 systemd-modules-load[186]: Inserted module 'br_netfilter'
Mar 17 20:21:30.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.095360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 20:21:30.101218 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 20:21:30.102232 kernel: audit: type=1130 audit(1742242890.095:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.102559 systemd[1]: Starting dracut-cmdline.service...
Mar 17 20:21:30.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.110332 kernel: audit: type=1130 audit(1742242890.101:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:30.121345 kernel: SCSI subsystem initialized
Mar 17 20:21:30.121390 dracut-cmdline[203]: dracut-dracut-053
Mar 17 20:21:30.123327 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 20:21:30.141347 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 20:21:30.145135 kernel: device-mapper: uevent: version 1.0.3
Mar 17 20:21:30.145162 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 20:21:30.151594 systemd-modules-load[186]: Inserted module 'dm_multipath'
Mar 17 20:21:30.152469 systemd[1]: Finished systemd-modules-load.service.
Mar 17 20:21:30.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.153729 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:21:30.159429 kernel: audit: type=1130 audit(1742242890.152:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.165249 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:21:30.170810 kernel: audit: type=1130 audit(1742242890.165:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.212434 kernel: Loading iSCSI transport class v2.0-870. Mar 17 20:21:30.234388 kernel: iscsi: registered transport (tcp) Mar 17 20:21:30.261720 kernel: iscsi: registered transport (qla4xxx) Mar 17 20:21:30.261792 kernel: QLogic iSCSI HBA Driver Mar 17 20:21:30.317220 systemd[1]: Finished dracut-cmdline.service. Mar 17 20:21:30.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.320458 systemd[1]: Starting dracut-pre-udev.service... Mar 17 20:21:30.325368 kernel: audit: type=1130 audit(1742242890.318:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
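The `SERVICE_START` audit records repeated throughout this log share one flat key=value shape before the quoted `msg='...'` payload. A minimal sketch of pulling fields out of such a record (the record text below is abridged from the log; `auid=4294967295` is `(uint32)-1`, i.e. "no login session"):

```python
import re

# One audit record from the log above, with the quoted msg=... payload
# trimmed for brevity.
record = ("audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 "
          "ses=4294967295 subj=kernel")

# Each field is a bare key=value token; tokens without '=' are skipped.
fields = dict(re.findall(r"(\w+)=(\S+)", record))
print(fields["pid"], fields["auid"])  # 1 4294967295
```

This is an illustrative parser for the records as printed here, not the format auditd tooling like `ausearch` uses internally.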
res=success' Mar 17 20:21:30.386442 kernel: raid6: sse2x4 gen() 12890 MB/s Mar 17 20:21:30.404422 kernel: raid6: sse2x4 xor() 6631 MB/s Mar 17 20:21:30.422416 kernel: raid6: sse2x2 gen() 13809 MB/s Mar 17 20:21:30.440416 kernel: raid6: sse2x2 xor() 8277 MB/s Mar 17 20:21:30.458418 kernel: raid6: sse2x1 gen() 11061 MB/s Mar 17 20:21:30.480839 kernel: raid6: sse2x1 xor() 6626 MB/s Mar 17 20:21:30.480901 kernel: raid6: using algorithm sse2x2 gen() 13809 MB/s Mar 17 20:21:30.480940 kernel: raid6: .... xor() 8277 MB/s, rmw enabled Mar 17 20:21:30.482141 kernel: raid6: using ssse3x2 recovery algorithm Mar 17 20:21:30.498713 kernel: xor: measuring software checksum speed Mar 17 20:21:30.498775 kernel: prefetch64-sse : 18354 MB/sec Mar 17 20:21:30.499961 kernel: generic_sse : 13518 MB/sec Mar 17 20:21:30.500021 kernel: xor: using function: prefetch64-sse (18354 MB/sec) Mar 17 20:21:30.619385 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 20:21:30.635266 systemd[1]: Finished dracut-pre-udev.service. Mar 17 20:21:30.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.635000 audit: BPF prog-id=7 op=LOAD Mar 17 20:21:30.636000 audit: BPF prog-id=8 op=LOAD Mar 17 20:21:30.636816 systemd[1]: Starting systemd-udevd.service... Mar 17 20:21:30.650091 systemd-udevd[385]: Using default interface naming scheme 'v252'. Mar 17 20:21:30.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.654800 systemd[1]: Started systemd-udevd.service. Mar 17 20:21:30.660350 systemd[1]: Starting dracut-pre-trigger.service... 
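The raid6 lines above show the kernel benchmarking each available `gen()` routine and keeping the fastest, which is why it settles on `sse2x2` despite `sse2x4` being tried first. The selection step amounts to a max-by-throughput pick over the measured speeds (numbers taken from the log above):

```python
# gen() throughputs measured by the kernel's raid6 benchmark above, in MB/s.
gen_speeds = {"sse2x4": 12890, "sse2x2": 13809, "sse2x1": 11061}

# The kernel keeps whichever routine generated parity fastest.
best = max(gen_speeds, key=gen_speeds.get)
print(best, gen_speeds[best])  # sse2x2 13809
```

This matches the log's own conclusion, "raid6: using algorithm sse2x2 gen() 13809 MB/s".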
Mar 17 20:21:30.688817 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Mar 17 20:21:30.746517 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 20:21:30.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.749917 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 20:21:30.810112 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 20:21:30.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:30.888745 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Mar 17 20:21:30.907482 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 20:21:30.907506 kernel: GPT:17805311 != 20971519 Mar 17 20:21:30.907518 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 20:21:30.907531 kernel: GPT:17805311 != 20971519 Mar 17 20:21:30.907543 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 20:21:30.907560 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:21:30.907573 kernel: libata version 3.00 loaded. Mar 17 20:21:30.912409 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 20:21:30.927206 kernel: scsi host0: ata_piix Mar 17 20:21:30.927349 kernel: scsi host1: ata_piix Mar 17 20:21:30.927487 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Mar 17 20:21:30.927515 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Mar 17 20:21:30.935348 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448) Mar 17 20:21:30.945092 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. 
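The GPT warnings above arise because the disk image was built for a smaller disk than the one it was written to: GPT keeps a backup header at the last LBA, but the primary header here still records it at the old, smaller disk's end. A sketch of the check the kernel is making, using the figures printed above (20971520 512-byte sectors on `vda`, backup header recorded at LBA 17805311):

```python
# Figures from the virtio_blk / GPT lines above.
disk_sectors = 20971520          # "[vda] 20971520 512-byte logical blocks"
alt_header_lba = 17805311        # where the primary header says the backup lives

# Per the GPT layout, the backup header belongs at the disk's last LBA.
expected_lba = disk_sectors - 1
print(alt_header_lba == expected_lba, expected_lba)  # False 20971519
```

The mismatch (`17805311 != 20971519`) is exactly what the kernel prints; the suggested fix is to rewrite the backup structures at the true end of the disk, e.g. with GNU Parted as the log says.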
Mar 17 20:21:30.990839 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 20:21:30.998857 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 20:21:31.004618 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 20:21:31.005475 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 20:21:31.008647 systemd[1]: Starting disk-uuid.service... Mar 17 20:21:31.025588 disk-uuid[471]: Primary Header is updated. Mar 17 20:21:31.025588 disk-uuid[471]: Secondary Entries is updated. Mar 17 20:21:31.025588 disk-uuid[471]: Secondary Header is updated. Mar 17 20:21:31.034371 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:21:31.040366 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:21:32.065363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 20:21:32.065598 disk-uuid[472]: The operation has completed successfully. Mar 17 20:21:32.140470 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 20:21:32.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.140699 systemd[1]: Finished disk-uuid.service. Mar 17 20:21:32.155704 systemd[1]: Starting verity-setup.service... Mar 17 20:21:32.175381 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Mar 17 20:21:32.284901 systemd[1]: Found device dev-mapper-usr.device. Mar 17 20:21:32.288269 systemd[1]: Mounting sysusr-usr.mount... Mar 17 20:21:32.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 20:21:32.290554 systemd[1]: Finished verity-setup.service. Mar 17 20:21:32.432363 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 20:21:32.434094 systemd[1]: Mounted sysusr-usr.mount. Mar 17 20:21:32.436778 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 20:21:32.440386 systemd[1]: Starting ignition-setup.service... Mar 17 20:21:32.445108 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 20:21:32.460997 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:21:32.461060 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:21:32.461076 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:21:32.483485 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 20:21:32.501810 systemd[1]: Finished ignition-setup.service. Mar 17 20:21:32.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.503303 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 20:21:32.578054 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 20:21:32.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.579000 audit: BPF prog-id=9 op=LOAD Mar 17 20:21:32.580841 systemd[1]: Starting systemd-networkd.service... Mar 17 20:21:32.609379 systemd-networkd[642]: lo: Link UP Mar 17 20:21:32.609390 systemd-networkd[642]: lo: Gained carrier Mar 17 20:21:32.610441 systemd-networkd[642]: Enumeration completed Mar 17 20:21:32.610889 systemd-networkd[642]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 17 20:21:32.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.612464 systemd[1]: Started systemd-networkd.service. Mar 17 20:21:32.612717 systemd-networkd[642]: eth0: Link UP Mar 17 20:21:32.612722 systemd-networkd[642]: eth0: Gained carrier Mar 17 20:21:32.613382 systemd[1]: Reached target network.target. Mar 17 20:21:32.615254 systemd[1]: Starting iscsiuio.service... Mar 17 20:21:32.626922 systemd[1]: Started iscsiuio.service. Mar 17 20:21:32.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.628878 systemd[1]: Starting iscsid.service... Mar 17 20:21:32.632845 iscsid[651]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:21:32.632845 iscsid[651]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 20:21:32.632845 iscsid[651]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 20:21:32.632845 iscsid[651]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 20:21:32.632845 iscsid[651]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 20:21:32.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Mar 17 20:21:32.641215 iscsid[651]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 20:21:32.634380 systemd[1]: Started iscsid.service. Mar 17 20:21:32.636451 systemd-networkd[642]: eth0: DHCPv4 address 172.24.4.115/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 17 20:21:32.639973 systemd[1]: Starting dracut-initqueue.service... Mar 17 20:21:32.655234 systemd[1]: Finished dracut-initqueue.service. Mar 17 20:21:32.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.655912 systemd[1]: Reached target remote-fs-pre.target. Mar 17 20:21:32.657884 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 20:21:32.658948 systemd[1]: Reached target remote-fs.target. Mar 17 20:21:32.661728 systemd[1]: Starting dracut-pre-mount.service... Mar 17 20:21:32.671492 systemd[1]: Finished dracut-pre-mount.service. Mar 17 20:21:32.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:32.782582 ignition[563]: Ignition 2.14.0 Mar 17 20:21:32.782612 ignition[563]: Stage: fetch-offline Mar 17 20:21:32.782795 ignition[563]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:32.782847 ignition[563]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:32.787958 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 20:21:32.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
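The iscsid warning above spells out the shape an initiator name must take: `InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]`. A loose sketch of validating such a line, using the example iscsid itself prints in the log; the regex is an approximation for illustration, not the parser iscsid actually uses:

```python
import re

# Rough approximation of the IQN line format described in the iscsid warning:
# a yyyy-mm date, a reversed domain name, and an optional :identifier suffix.
IQN_LINE = re.compile(r"^InitiatorName=iqn\.\d{4}-\d{2}\.[A-Za-z0-9.-]+(:\S+)?$")

# The example value printed by iscsid in the log above.
line = "InitiatorName=iqn.2001-04.com.redhat:fc6"
print(bool(IQN_LINE.match(line)))  # True
```

On a real host this line would live in /etc/iscsi/initiatorname.iscsi, the file iscsid reports as missing above.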
res=success' Mar 17 20:21:32.785229 ignition[563]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:32.790051 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.115 Mar 17 20:21:32.785504 ignition[563]: parsed url from cmdline: "" Mar 17 20:21:32.790071 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Mar 17 20:21:32.785514 ignition[563]: no config URL provided Mar 17 20:21:32.791799 systemd[1]: Starting ignition-fetch.service... Mar 17 20:21:32.785528 ignition[563]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:21:32.785549 ignition[563]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:21:32.785572 ignition[563]: failed to fetch config: resource requires networking Mar 17 20:21:32.786228 ignition[563]: Ignition finished successfully Mar 17 20:21:32.810753 ignition[665]: Ignition 2.14.0 Mar 17 20:21:32.810780 ignition[665]: Stage: fetch Mar 17 20:21:32.811067 ignition[665]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:32.811112 ignition[665]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:32.813304 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:32.813545 ignition[665]: parsed url from cmdline: "" Mar 17 20:21:32.813555 ignition[665]: no config URL provided Mar 17 20:21:32.813569 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 20:21:32.813589 ignition[665]: no config at "/usr/lib/ignition/user.ign" Mar 17 20:21:32.813853 ignition[665]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 17 20:21:32.813902 ignition[665]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 17 20:21:32.816116 ignition[665]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 17 20:21:33.110299 ignition[665]: GET result: OK Mar 17 20:21:33.110629 ignition[665]: parsing config with SHA512: e43417530164297d27521921c60ffd9da7da1dd1b06c88f2e73fc66524683863414803aef9c117f8c2d9283904f35c6a2230009d5194af6ba9224ea60b4a65ab Mar 17 20:21:33.130465 unknown[665]: fetched base config from "system" Mar 17 20:21:33.130499 unknown[665]: fetched base config from "system" Mar 17 20:21:33.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.132144 ignition[665]: fetch: fetch complete Mar 17 20:21:33.130514 unknown[665]: fetched user config from "openstack" Mar 17 20:21:33.132158 ignition[665]: fetch: fetch passed Mar 17 20:21:33.137024 systemd[1]: Finished ignition-fetch.service. Mar 17 20:21:33.132242 ignition[665]: Ignition finished successfully Mar 17 20:21:33.148159 systemd[1]: Starting ignition-kargs.service... Mar 17 20:21:33.168951 ignition[671]: Ignition 2.14.0 Mar 17 20:21:33.168980 ignition[671]: Stage: kargs Mar 17 20:21:33.169219 ignition[671]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:33.169263 ignition[671]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:33.171704 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:33.174642 ignition[671]: kargs: kargs passed Mar 17 20:21:33.174767 ignition[671]: Ignition finished successfully Mar 17 20:21:33.176782 systemd[1]: Finished ignition-kargs.service. Mar 17 20:21:33.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
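Each Ignition stage above logs "parsing config with SHA512:" followed by a long hex digest of the config it read. A sketch of producing a digest of the same kind with SHA-512; the config body below is a made-up stand-in, so its digest will not match any digest in the log:

```python
import hashlib

# Hypothetical stand-in config body; Ignition hashes the raw config bytes.
config = b'{"ignition": {"version": "2.14.0"}}'
digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, the width of the digests above
```

SHA-512 always yields 64 bytes, hence the 128-character hex strings repeated through the Ignition log lines.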
terminal=? res=success' Mar 17 20:21:33.180226 systemd[1]: Starting ignition-disks.service... Mar 17 20:21:33.197715 ignition[677]: Ignition 2.14.0 Mar 17 20:21:33.197747 ignition[677]: Stage: disks Mar 17 20:21:33.198072 ignition[677]: reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:33.198124 ignition[677]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:33.200846 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:33.204141 ignition[677]: disks: disks passed Mar 17 20:21:33.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.205855 systemd[1]: Finished ignition-disks.service. Mar 17 20:21:33.204247 ignition[677]: Ignition finished successfully Mar 17 20:21:33.207394 systemd[1]: Reached target initrd-root-device.target. Mar 17 20:21:33.209652 systemd[1]: Reached target local-fs-pre.target. Mar 17 20:21:33.212268 systemd[1]: Reached target local-fs.target. Mar 17 20:21:33.214762 systemd[1]: Reached target sysinit.target. Mar 17 20:21:33.217195 systemd[1]: Reached target basic.target. Mar 17 20:21:33.221428 systemd[1]: Starting systemd-fsck-root.service... Mar 17 20:21:33.283229 systemd-fsck[685]: ROOT: clean, 623/1628000 files, 124059/1617920 blocks Mar 17 20:21:33.294644 systemd[1]: Finished systemd-fsck-root.service. Mar 17 20:21:33.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.298752 systemd[1]: Mounting sysroot.mount... Mar 17 20:21:33.325373 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Mar 17 20:21:33.326918 systemd[1]: Mounted sysroot.mount. Mar 17 20:21:33.329664 systemd[1]: Reached target initrd-root-fs.target. Mar 17 20:21:33.334879 systemd[1]: Mounting sysroot-usr.mount... Mar 17 20:21:33.339577 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 20:21:33.343281 systemd[1]: Starting flatcar-openstack-hostname.service... Mar 17 20:21:33.344879 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 20:21:33.344944 systemd[1]: Reached target ignition-diskful.target. Mar 17 20:21:33.348213 systemd[1]: Mounted sysroot-usr.mount. Mar 17 20:21:33.359550 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 20:21:33.365236 systemd[1]: Starting initrd-setup-root.service... Mar 17 20:21:33.391372 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (692) Mar 17 20:21:33.391702 initrd-setup-root[697]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 20:21:33.452397 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:21:33.452498 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:21:33.452527 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:21:33.464755 initrd-setup-root[721]: cut: /sysroot/etc/group: No such file or directory Mar 17 20:21:33.472886 initrd-setup-root[729]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 20:21:33.508245 initrd-setup-root[737]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 20:21:33.642295 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 20:21:33.647516 systemd-networkd[642]: eth0: Gained IPv6LL Mar 17 20:21:33.714378 systemd[1]: Finished initrd-setup-root.service. 
Mar 17 20:21:33.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.717666 systemd[1]: Starting ignition-mount.service... Mar 17 20:21:33.720434 systemd[1]: Starting sysroot-boot.service... Mar 17 20:21:33.744395 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Mar 17 20:21:33.744654 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Mar 17 20:21:33.784256 ignition[760]: INFO : Ignition 2.14.0 Mar 17 20:21:33.784256 ignition[760]: INFO : Stage: mount Mar 17 20:21:33.785515 ignition[760]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:33.785515 ignition[760]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:33.785515 ignition[760]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:33.788113 ignition[760]: INFO : mount: mount passed Mar 17 20:21:33.788113 ignition[760]: INFO : Ignition finished successfully Mar 17 20:21:33.790062 systemd[1]: Finished ignition-mount.service. Mar 17 20:21:33.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.796779 systemd[1]: Finished sysroot-boot.service. Mar 17 20:21:33.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:33.819495 coreos-metadata[691]: Mar 17 20:21:33.819 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 17 20:21:33.838609 coreos-metadata[691]: Mar 17 20:21:33.838 INFO Fetch successful Mar 17 20:21:33.839386 coreos-metadata[691]: Mar 17 20:21:33.839 INFO wrote hostname ci-3510-3-7-8-ce231ec735.novalocal to /sysroot/etc/hostname Mar 17 20:21:33.842611 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 17 20:21:33.842729 systemd[1]: Finished flatcar-openstack-hostname.service. Mar 17 20:21:33.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:33.844871 systemd[1]: Starting ignition-files.service... Mar 17 20:21:33.852637 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 20:21:33.863345 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (768) Mar 17 20:21:33.866338 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 20:21:33.866362 kernel: BTRFS info (device vda6): using free space tree Mar 17 20:21:33.869123 kernel: BTRFS info (device vda6): has skinny extents Mar 17 20:21:33.883133 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Mar 17 20:21:33.905280 ignition[787]: INFO : Ignition 2.14.0 Mar 17 20:21:33.907056 ignition[787]: INFO : Stage: files Mar 17 20:21:33.908594 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Mar 17 20:21:33.910406 ignition[787]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Mar 17 20:21:33.913020 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 17 20:21:33.915162 ignition[787]: DEBUG : files: compiled without relabeling support, skipping Mar 17 20:21:33.917010 ignition[787]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 20:21:33.917010 ignition[787]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 20:21:33.922920 ignition[787]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 20:21:33.925018 ignition[787]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 20:21:33.927345 unknown[787]: wrote ssh authorized keys file for user: core Mar 17 20:21:33.929064 ignition[787]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 20:21:33.929064 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 20:21:33.929064 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 20:21:33.929064 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 17 20:21:33.929064 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 17 20:21:33.994058 ignition[787]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 17 20:21:34.462709 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 20:21:34.465262 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 20:21:34.465262 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 17 20:21:35.028818 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 20:21:35.483468 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 20:21:35.510045 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 20:21:35.972388 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Mar 17 20:21:37.690402 ignition[787]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 20:21:37.692517 ignition[787]: INFO : files: op(d): [started] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 20:21:37.693389 ignition[787]: INFO : files: op(d): [finished] processing unit "coreos-metadata-sshkeys@.service"
Mar 17 20:21:37.694299 ignition[787]: INFO : files: op(e): [started] processing unit "containerd.service"
Mar 17 20:21:37.696131 ignition[787]: INFO : files: op(e): op(f): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(e): [finished] processing unit "containerd.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(12): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 20:21:37.697427 ignition[787]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 20:21:37.713253 ignition[787]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 20:21:37.713253 ignition[787]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 20:21:37.713253 ignition[787]: INFO : files: files passed
Mar 17 20:21:37.713253 ignition[787]: INFO : Ignition finished successfully
Mar 17 20:21:37.725574 kernel: kauditd_printk_skb: 27 callbacks suppressed
Mar 17 20:21:37.725600 kernel: audit: type=1130 audit(1742242897.714:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.712800 systemd[1]: Finished ignition-files.service.
Mar 17 20:21:37.749395 kernel: audit: type=1130 audit(1742242897.727:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.749424 kernel: audit: type=1130 audit(1742242897.733:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.749437 kernel: audit: type=1131 audit(1742242897.733:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.716065 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 20:21:37.724335 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 20:21:37.751472 initrd-setup-root-after-ignition[811]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 20:21:37.725888 systemd[1]: Starting ignition-quench.service...
Mar 17 20:21:37.727036 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 20:21:37.729543 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 20:21:37.729625 systemd[1]: Finished ignition-quench.service.
Mar 17 20:21:37.734172 systemd[1]: Reached target ignition-complete.target.
Mar 17 20:21:37.748776 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 20:21:37.769565 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 20:21:37.771053 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 20:21:37.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.778460 systemd[1]: Reached target initrd-fs.target.
Mar 17 20:21:37.792846 kernel: audit: type=1130 audit(1742242897.772:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.792884 kernel: audit: type=1131 audit(1742242897.778:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.792000 systemd[1]: Reached target initrd.target.
Mar 17 20:21:37.794116 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 20:21:37.795951 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 20:21:37.815456 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 20:21:37.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.816957 systemd[1]: Starting initrd-cleanup.service...
Mar 17 20:21:37.823340 kernel: audit: type=1130 audit(1742242897.815:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.828182 systemd[1]: Stopped target nss-lookup.target.
Mar 17 20:21:37.828831 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 20:21:37.829789 systemd[1]: Stopped target timers.target.
Mar 17 20:21:37.830725 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 20:21:37.830853 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 20:21:37.838490 kernel: audit: type=1131 audit(1742242897.831:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.831884 systemd[1]: Stopped target initrd.target.
Mar 17 20:21:37.837935 systemd[1]: Stopped target basic.target.
Mar 17 20:21:37.839154 systemd[1]: Stopped target ignition-complete.target.
Mar 17 20:21:37.840861 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 20:21:37.842118 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 20:21:37.843211 systemd[1]: Stopped target remote-fs.target.
Mar 17 20:21:37.844363 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 20:21:37.845553 systemd[1]: Stopped target sysinit.target.
Mar 17 20:21:37.846598 systemd[1]: Stopped target local-fs.target.
Mar 17 20:21:37.847695 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 20:21:37.848755 systemd[1]: Stopped target swap.target.
Mar 17 20:21:37.857137 kernel: audit: type=1131 audit(1742242897.850:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.849741 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 20:21:37.849867 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 20:21:37.871016 kernel: audit: type=1131 audit(1742242897.858:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.850923 systemd[1]: Stopped target cryptsetup.target.
Mar 17 20:21:37.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.857761 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 20:21:37.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.857893 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 20:21:37.859205 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 20:21:37.859426 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 20:21:37.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.872012 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 20:21:37.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.872175 systemd[1]: Stopped ignition-files.service.
Mar 17 20:21:37.874598 systemd[1]: Stopping ignition-mount.service...
Mar 17 20:21:37.876110 systemd[1]: Stopping sysroot-boot.service...
Mar 17 20:21:37.879894 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 20:21:37.892037 ignition[825]: INFO : Ignition 2.14.0
Mar 17 20:21:37.892037 ignition[825]: INFO : Stage: umount
Mar 17 20:21:37.892037 ignition[825]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Mar 17 20:21:37.892037 ignition[825]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Mar 17 20:21:37.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.880192 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 20:21:37.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.902710 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 17 20:21:37.902710 ignition[825]: INFO : umount: umount passed
Mar 17 20:21:37.902710 ignition[825]: INFO : Ignition finished successfully
Mar 17 20:21:37.882079 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 20:21:37.882273 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 20:21:37.890883 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 20:21:37.890996 systemd[1]: Finished initrd-cleanup.service.
Mar 17 20:21:37.898117 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 20:21:37.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.898213 systemd[1]: Stopped ignition-mount.service.
Mar 17 20:21:37.899015 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 20:21:37.899061 systemd[1]: Stopped ignition-disks.service.
Mar 17 20:21:37.899591 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 20:21:37.899638 systemd[1]: Stopped ignition-kargs.service.
Mar 17 20:21:37.900210 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 17 20:21:37.900255 systemd[1]: Stopped ignition-fetch.service.
Mar 17 20:21:37.900834 systemd[1]: Stopped target network.target.
Mar 17 20:21:37.901382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 20:21:37.901427 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 20:21:37.901986 systemd[1]: Stopped target paths.target.
Mar 17 20:21:37.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.902904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 20:21:37.906426 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 20:21:37.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.907151 systemd[1]: Stopped target slices.target.
Mar 17 20:21:37.907876 systemd[1]: Stopped target sockets.target.
Mar 17 20:21:37.908720 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 20:21:37.908768 systemd[1]: Closed iscsid.socket.
Mar 17 20:21:37.909898 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 20:21:37.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.909934 systemd[1]: Closed iscsiuio.socket.
Mar 17 20:21:37.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.911053 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 20:21:37.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.911113 systemd[1]: Stopped ignition-setup.service.
Mar 17 20:21:37.912693 systemd[1]: Stopping systemd-networkd.service...
Mar 17 20:21:37.914063 systemd[1]: Stopping systemd-resolved.service...
Mar 17 20:21:37.917627 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 20:21:37.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.919412 systemd-networkd[642]: eth0: DHCPv6 lease lost
Mar 17 20:21:37.943000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 20:21:37.921765 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 20:21:37.946000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 20:21:37.922085 systemd[1]: Stopped systemd-networkd.service.
Mar 17 20:21:37.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.924200 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 20:21:37.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.924300 systemd[1]: Stopped sysroot-boot.service.
Mar 17 20:21:37.924971 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 20:21:37.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.925009 systemd[1]: Closed systemd-networkd.socket.
Mar 17 20:21:37.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.928205 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 20:21:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.928257 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 20:21:37.930585 systemd[1]: Stopping network-cleanup.service...
Mar 17 20:21:37.933017 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 20:21:37.933073 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 20:21:37.934409 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 20:21:37.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.934464 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 20:21:37.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.936855 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 20:21:37.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.936947 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 20:21:37.938158 systemd[1]: Stopping systemd-udevd.service...
Mar 17 20:21:37.940907 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 20:21:37.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 20:21:37.941564 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 20:21:37.941680 systemd[1]: Stopped systemd-resolved.service.
Mar 17 20:21:37.945710 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 20:21:37.945857 systemd[1]: Stopped systemd-udevd.service.
Mar 17 20:21:37.947834 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 20:21:37.947937 systemd[1]: Stopped network-cleanup.service.
Mar 17 20:21:37.948749 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 20:21:37.948794 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 20:21:37.949836 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 20:21:37.949881 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 20:21:37.952762 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 20:21:37.984000 audit: BPF prog-id=8 op=UNLOAD
Mar 17 20:21:37.984000 audit: BPF prog-id=7 op=UNLOAD
Mar 17 20:21:37.952812 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 20:21:37.985000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 20:21:37.985000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 20:21:37.985000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 20:21:37.953483 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 20:21:37.953533 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 20:21:37.955243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 20:21:37.955287 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 20:21:37.957251 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 20:21:37.958041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 20:21:37.958110 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 20:21:37.965192 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 20:21:37.965253 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 20:21:37.965993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 20:21:37.966035 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 20:21:37.968148 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 20:21:37.968662 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 20:21:37.968757 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 20:21:37.969774 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 20:21:37.971653 systemd[1]: Starting initrd-switch-root.service...
Mar 17 20:21:37.983027 systemd[1]: Switching root.
Mar 17 20:21:38.011000 iscsid[651]: iscsid shutting down.
Mar 17 20:21:38.011753 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Mar 17 20:21:38.011829 systemd-journald[185]: Journal stopped
Mar 17 20:21:43.046489 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 20:21:43.046577 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 20:21:43.046591 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 20:21:43.046605 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 20:21:43.046617 kernel: SELinux: policy capability open_perms=1
Mar 17 20:21:43.046640 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 20:21:43.046653 kernel: SELinux: policy capability always_check_network=0
Mar 17 20:21:43.046665 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 20:21:43.046676 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 20:21:43.046693 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 20:21:43.046706 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 20:21:43.046722 systemd[1]: Successfully loaded SELinux policy in 114.204ms.
Mar 17 20:21:43.046742 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.789ms.
Mar 17 20:21:43.046756 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 20:21:43.046769 systemd[1]: Detected virtualization kvm.
Mar 17 20:21:43.046781 systemd[1]: Detected architecture x86-64.
Mar 17 20:21:43.046794 systemd[1]: Detected first boot.
Mar 17 20:21:43.046807 systemd[1]: Hostname set to .
Mar 17 20:21:43.046821 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 20:21:43.046834 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 20:21:43.046847 systemd[1]: Populated /etc with preset unit settings.
Mar 17 20:21:43.046859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 20:21:43.046873 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 20:21:43.046887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 20:21:43.046901 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 20:21:43.046916 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Mar 17 20:21:43.046929 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 20:21:43.046943 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 20:21:43.046957 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Mar 17 20:21:43.046969 systemd[1]: Created slice system-getty.slice.
Mar 17 20:21:43.046982 systemd[1]: Created slice system-modprobe.slice.
Mar 17 20:21:43.046994 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 20:21:43.047007 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 20:21:43.047019 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 20:21:43.047034 systemd[1]: Created slice user.slice.
Mar 17 20:21:43.047047 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 20:21:43.047059 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 20:21:43.047071 systemd[1]: Set up automount boot.automount.
Mar 17 20:21:43.047084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 20:21:43.047096 systemd[1]: Reached target integritysetup.target.
Mar 17 20:21:43.047109 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 20:21:43.047123 systemd[1]: Reached target remote-fs.target.
Mar 17 20:21:43.047136 systemd[1]: Reached target slices.target.
Mar 17 20:21:43.047148 systemd[1]: Reached target swap.target.
Mar 17 20:21:43.047160 systemd[1]: Reached target torcx.target.
Mar 17 20:21:43.047175 systemd[1]: Reached target veritysetup.target.
Mar 17 20:21:43.047187 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 20:21:43.047200 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 20:21:43.047212 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 20:21:43.047227 kernel: kauditd_printk_skb: 47 callbacks suppressed
Mar 17 20:21:43.047244 kernel: audit: type=1400 audit(1742242902.844:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 20:21:43.047256 kernel: audit: type=1335 audit(1742242902.844:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Mar 17 20:21:43.047268 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 20:21:43.047280 systemd[1]: Listening on systemd-journald.socket.
Mar 17 20:21:43.047293 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 20:21:43.047305 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 20:21:43.051802 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 20:21:43.051821 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 20:21:43.051837 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 20:21:43.051850 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 20:21:43.051861 systemd[1]: Mounting media.mount...
Mar 17 20:21:43.051873 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 20:21:43.051888 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 20:21:43.051900 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 20:21:43.051912 systemd[1]: Mounting tmp.mount...
Mar 17 20:21:43.051923 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 20:21:43.051935 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 20:21:43.051948 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 20:21:43.051960 systemd[1]: Starting modprobe@configfs.service...
Mar 17 20:21:43.051971 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 20:21:43.051983 systemd[1]: Starting modprobe@drm.service...
Mar 17 20:21:43.051995 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 20:21:43.052007 systemd[1]: Starting modprobe@fuse.service...
Mar 17 20:21:43.052018 systemd[1]: Starting modprobe@loop.service...
Mar 17 20:21:43.052033 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 20:21:43.052045 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 17 20:21:43.052059 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Mar 17 20:21:43.052071 kernel: loop: module loaded
Mar 17 20:21:43.052082 systemd[1]: Starting systemd-journald.service...
Mar 17 20:21:43.052094 systemd[1]: Starting systemd-modules-load.service...
Mar 17 20:21:43.052105 systemd[1]: Starting systemd-network-generator.service...
Mar 17 20:21:43.052116 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 20:21:43.052128 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 20:21:43.052140 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:43.052151 systemd[1]: Mounted dev-hugepages.mount. Mar 17 20:21:43.052164 systemd[1]: Mounted dev-mqueue.mount. Mar 17 20:21:43.052176 systemd[1]: Mounted media.mount. Mar 17 20:21:43.052187 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 20:21:43.052199 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 20:21:43.052210 systemd[1]: Mounted tmp.mount. Mar 17 20:21:43.052221 kernel: audit: type=1305 audit(1742242903.033:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 20:21:43.052234 kernel: audit: type=1300 audit(1742242903.033:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff354724b0 a2=4000 a3=7fff3547254c items=0 ppid=1 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:21:43.052245 systemd[1]: Finished kmod-static-nodes.service. Mar 17 20:21:43.052257 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 20:21:43.052269 kernel: audit: type=1327 audit(1742242903.033:90): proctitle="/usr/lib/systemd/systemd-journald" Mar 17 20:21:43.052282 systemd[1]: Finished modprobe@configfs.service. Mar 17 20:21:43.052296 systemd-journald[965]: Journal started Mar 17 20:21:43.052354 systemd-journald[965]: Runtime Journal (/run/log/journal/ca1000e9b273422fb8693e30e0f9ff2d) is 8.0M, max 78.4M, 70.4M free. 
Mar 17 20:21:42.844000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 20:21:42.844000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 20:21:43.033000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 20:21:43.033000 audit[965]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff354724b0 a2=4000 a3=7fff3547254c items=0 ppid=1 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:21:43.033000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 20:21:43.061340 kernel: fuse: init (API version 7.34) Mar 17 20:21:43.061376 kernel: audit: type=1130 audit(1742242903.047:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.061393 systemd[1]: Started systemd-journald.service. Mar 17 20:21:43.061409 kernel: audit: type=1130 audit(1742242903.059:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.061423 kernel: audit: type=1131 audit(1742242903.059:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:43.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.081021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:21:43.081188 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:21:43.086134 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:21:43.091183 kernel: audit: type=1130 audit(1742242903.079:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.091220 kernel: audit: type=1130 audit(1742242903.085:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:43.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.086281 systemd[1]: Finished modprobe@drm.service. Mar 17 20:21:43.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.092037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:21:43.092249 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:21:43.093023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 20:21:43.093179 systemd[1]: Finished modprobe@fuse.service. Mar 17 20:21:43.093863 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:21:43.094016 systemd[1]: Finished modprobe@loop.service. Mar 17 20:21:43.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:43.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.094796 systemd[1]: Finished systemd-modules-load.service. Mar 17 20:21:43.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.095627 systemd[1]: Finished systemd-network-generator.service. Mar 17 20:21:43.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.096821 systemd[1]: Finished systemd-remount-fs.service. Mar 17 20:21:43.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.097847 systemd[1]: Reached target network-pre.target. Mar 17 20:21:43.100717 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 20:21:43.103630 systemd[1]: Mounting sys-kernel-config.mount... 
Mar 17 20:21:43.104176 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 20:21:43.112738 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 20:21:43.117268 systemd[1]: Starting systemd-journal-flush.service... Mar 17 20:21:43.117847 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:21:43.119054 systemd[1]: Starting systemd-random-seed.service... Mar 17 20:21:43.120004 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:21:43.121503 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:21:43.123900 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 20:21:43.125726 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 20:21:43.136003 systemd-journald[965]: Time spent on flushing to /var/log/journal/ca1000e9b273422fb8693e30e0f9ff2d is 33.361ms for 1052 entries. Mar 17 20:21:43.136003 systemd-journald[965]: System Journal (/var/log/journal/ca1000e9b273422fb8693e30e0f9ff2d) is 8.0M, max 584.8M, 576.8M free. Mar 17 20:21:43.227033 systemd-journald[965]: Received client request to flush runtime journal. Mar 17 20:21:43.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:43.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.144200 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 20:21:43.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.146302 systemd[1]: Starting systemd-sysusers.service... Mar 17 20:21:43.229254 udevadm[1020]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 20:21:43.154550 systemd[1]: Finished systemd-random-seed.service. Mar 17 20:21:43.155190 systemd[1]: Reached target first-boot-complete.target. Mar 17 20:21:43.164235 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:21:43.182546 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 20:21:43.184381 systemd[1]: Starting systemd-udev-settle.service... Mar 17 20:21:43.228383 systemd[1]: Finished systemd-journal-flush.service. Mar 17 20:21:43.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.234792 systemd[1]: Finished systemd-sysusers.service. Mar 17 20:21:43.236696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 20:21:43.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.281296 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Mar 17 20:21:43.833200 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 20:21:43.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.841002 systemd[1]: Starting systemd-udevd.service... Mar 17 20:21:43.886672 systemd-udevd[1028]: Using default interface naming scheme 'v252'. Mar 17 20:21:43.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:43.951202 systemd[1]: Started systemd-udevd.service. Mar 17 20:21:43.958187 systemd[1]: Starting systemd-networkd.service... Mar 17 20:21:43.991901 systemd[1]: Starting systemd-userdbd.service... Mar 17 20:21:44.019175 systemd[1]: Found device dev-ttyS0.device. Mar 17 20:21:44.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:44.091265 systemd[1]: Started systemd-userdbd.service. Mar 17 20:21:44.117876 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 20:21:44.117198 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Mar 17 20:21:44.132349 kernel: ACPI: button: Power Button [PWRF] Mar 17 20:21:44.148000 audit[1041]: AVC avc: denied { confidentiality } for pid=1041 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 20:21:44.148000 audit[1041]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ee645d1500 a1=338ac a2=7f7a659bbbc5 a3=5 items=110 ppid=1028 pid=1041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:21:44.148000 audit: CWD cwd="/" Mar 17 20:21:44.148000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=1 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=2 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=3 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=4 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=5 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Mar 17 20:21:44.148000 audit: PATH item=6 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=7 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=8 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=9 name=(null) inode=14524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=10 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=11 name=(null) inode=14525 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=12 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=13 name=(null) inode=14526 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=14 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=15 
name=(null) inode=14527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=16 name=(null) inode=14523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=17 name=(null) inode=14528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=18 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=19 name=(null) inode=14529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=20 name=(null) inode=14529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=21 name=(null) inode=14530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=22 name=(null) inode=14529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=23 name=(null) inode=14531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=24 name=(null) inode=14529 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=25 name=(null) inode=14532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=26 name=(null) inode=14529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=27 name=(null) inode=14533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=28 name=(null) inode=14529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=29 name=(null) inode=14534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=30 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=31 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=32 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=33 name=(null) inode=14536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=34 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=35 name=(null) inode=14537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=36 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=37 name=(null) inode=14538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=38 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=39 name=(null) inode=14539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=40 name=(null) inode=14535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=41 name=(null) inode=14540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=42 name=(null) inode=14520 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=43 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=44 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=45 name=(null) inode=14542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=46 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=47 name=(null) inode=14543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=48 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=49 name=(null) inode=14544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=50 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=51 name=(null) inode=14545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=52 name=(null) inode=14541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=53 name=(null) inode=14546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=55 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=56 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=57 name=(null) inode=14548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=58 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=59 name=(null) inode=14549 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=60 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 
20:21:44.148000 audit: PATH item=61 name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=62 name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=63 name=(null) inode=14551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=64 name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=65 name=(null) inode=14552 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=66 name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=67 name=(null) inode=14553 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=68 name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=69 name=(null) inode=14554 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=70 
name=(null) inode=14550 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=71 name=(null) inode=14555 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=72 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=73 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=74 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=75 name=(null) inode=14557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=76 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=77 name=(null) inode=14558 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=78 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=79 name=(null) inode=14559 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=80 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=81 name=(null) inode=14560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=82 name=(null) inode=14556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=83 name=(null) inode=14561 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=84 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=85 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=86 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=87 name=(null) inode=14563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=88 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=89 name=(null) inode=14564 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=90 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=91 name=(null) inode=14565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=92 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=93 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=94 name=(null) inode=14562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=95 name=(null) inode=14567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=96 name=(null) inode=14547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=97 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=98 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=99 name=(null) inode=14569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=100 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=101 name=(null) inode=14570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=102 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=103 name=(null) inode=14571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=104 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=105 name=(null) inode=14572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=106 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=107 name=(null) inode=14573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PATH item=109 name=(null) inode=14575 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 20:21:44.148000 audit: PROCTITLE proctitle="(udev-worker)" Mar 17 20:21:44.184273 systemd-networkd[1038]: lo: Link UP Mar 17 20:21:44.184284 systemd-networkd[1038]: lo: Gained carrier Mar 17 20:21:44.185338 systemd-networkd[1038]: Enumeration completed Mar 17 20:21:44.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:44.185453 systemd-networkd[1038]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 20:21:44.185501 systemd[1]: Started systemd-networkd.service. 
Mar 17 20:21:44.187272 systemd-networkd[1038]: eth0: Link UP Mar 17 20:21:44.187282 systemd-networkd[1038]: eth0: Gained carrier Mar 17 20:21:44.193339 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 17 20:21:44.199495 systemd-networkd[1038]: eth0: DHCPv4 address 172.24.4.115/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 17 20:21:44.215334 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 20:21:44.219348 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 20:21:44.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:44.273849 systemd[1]: Finished systemd-udev-settle.service. Mar 17 20:21:44.276036 systemd[1]: Starting lvm2-activation-early.service... Mar 17 20:21:44.309549 lvm[1061]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:21:44.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:44.353979 systemd[1]: Finished lvm2-activation-early.service. Mar 17 20:21:44.355614 systemd[1]: Reached target cryptsetup.target. Mar 17 20:21:44.359640 systemd[1]: Starting lvm2-activation.service... Mar 17 20:21:44.372547 lvm[1064]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 20:21:44.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:44.412219 systemd[1]: Finished lvm2-activation.service. Mar 17 20:21:44.413794 systemd[1]: Reached target local-fs-pre.target. 
Mar 17 20:21:44.415057 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 20:21:44.415126 systemd[1]: Reached target local-fs.target. Mar 17 20:21:44.416464 systemd[1]: Reached target machines.target. Mar 17 20:21:44.420809 systemd[1]: Starting ldconfig.service... Mar 17 20:21:44.423709 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:21:44.423805 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:44.426578 systemd[1]: Starting systemd-boot-update.service... Mar 17 20:21:44.432479 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 20:21:44.437677 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 20:21:44.443202 systemd[1]: Starting systemd-sysext.service... Mar 17 20:21:44.471301 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1067 (bootctl) Mar 17 20:21:44.472881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 20:21:44.479881 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 20:21:44.484010 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 20:21:44.484256 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 20:21:44.522405 kernel: loop0: detected capacity change from 0 to 210664 Mar 17 20:21:44.549260 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 20:21:44.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.343840 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Mar 17 20:21:45.345754 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 20:21:45.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.381442 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 20:21:45.494484 kernel: loop1: detected capacity change from 0 to 210664 Mar 17 20:21:45.614207 (sd-sysext)[1085]: Using extensions 'kubernetes'. Mar 17 20:21:45.618256 (sd-sysext)[1085]: Merged extensions into '/usr'. Mar 17 20:21:45.656645 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Mar 17 20:21:45.656645 systemd-fsck[1081]: /dev/vda1: 789 files, 119299/258078 clusters Mar 17 20:21:45.673550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 20:21:45.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.679750 systemd[1]: Mounting boot.mount... Mar 17 20:21:45.683543 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:45.687462 systemd[1]: Mounting usr-share-oem.mount... Mar 17 20:21:45.688166 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:21:45.689406 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:21:45.691483 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:21:45.697127 systemd[1]: Starting modprobe@loop.service... Mar 17 20:21:45.697684 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Mar 17 20:21:45.697921 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:45.698141 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:45.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.700551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:21:45.700740 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:21:45.704180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:21:45.705423 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:21:45.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.715353 systemd[1]: Mounted boot.mount. Mar 17 20:21:45.716700 systemd[1]: Mounted usr-share-oem.mount. Mar 17 20:21:45.718394 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:21:45.718818 systemd[1]: Finished modprobe@loop.service. 
Mar 17 20:21:45.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.720746 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:21:45.720798 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:21:45.724118 systemd[1]: Finished systemd-sysext.service. Mar 17 20:21:45.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:45.726928 systemd[1]: Starting ensure-sysext.service... Mar 17 20:21:45.729164 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 20:21:45.743528 systemd-networkd[1038]: eth0: Gained IPv6LL Mar 17 20:21:45.749466 systemd[1]: Reloading. Mar 17 20:21:45.750680 systemd-tmpfiles[1103]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 20:21:45.751792 systemd-tmpfiles[1103]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 20:21:45.754988 systemd-tmpfiles[1103]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Mar 17 20:21:45.850037 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-03-17T20:21:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:21:45.850492 /usr/lib/systemd/system-generators/torcx-generator[1123]: time="2025-03-17T20:21:45Z" level=info msg="torcx already run" Mar 17 20:21:46.008489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:21:46.008516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:21:46.047165 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:21:46.115723 ldconfig[1066]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 20:21:46.118950 systemd[1]: Finished systemd-boot-update.service. Mar 17 20:21:46.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.121181 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 20:21:46.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.124720 systemd[1]: Starting audit-rules.service... 
Mar 17 20:21:46.126531 systemd[1]: Starting clean-ca-certificates.service... Mar 17 20:21:46.128447 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 20:21:46.131268 systemd[1]: Starting systemd-resolved.service... Mar 17 20:21:46.133881 systemd[1]: Starting systemd-timesyncd.service... Mar 17 20:21:46.135597 systemd[1]: Starting systemd-update-utmp.service... Mar 17 20:21:46.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.146653 systemd[1]: Finished clean-ca-certificates.service. Mar 17 20:21:46.147518 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:21:46.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.156040 systemd[1]: Finished ldconfig.service. Mar 17 20:21:46.158704 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.158951 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.160332 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:21:46.163137 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:21:46.163000 audit[1183]: SYSTEM_BOOT pid=1183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.165560 systemd[1]: Starting modprobe@loop.service... 
Mar 17 20:21:46.166428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.166556 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:46.166698 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:21:46.166791 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.167871 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:21:46.168041 systemd[1]: Finished modprobe@loop.service. Mar 17 20:21:46.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.171848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:21:46.172006 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 20:21:46.174758 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.175030 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.178380 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:21:46.180022 systemd[1]: Starting modprobe@loop.service... Mar 17 20:21:46.183182 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.183457 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:46.183651 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:21:46.183794 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.187900 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:21:46.188082 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:21:46.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:46.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.192434 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:21:46.192587 systemd[1]: Finished modprobe@loop.service. Mar 17 20:21:46.193829 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:21:46.194151 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:21:46.205054 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.205449 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.209086 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 20:21:46.212402 systemd[1]: Starting modprobe@drm.service... Mar 17 20:21:46.214115 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 20:21:46.218468 systemd[1]: Starting modprobe@loop.service... Mar 17 20:21:46.219229 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.219400 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:46.220780 systemd[1]: Starting systemd-networkd-wait-online.service... 
Mar 17 20:21:46.223430 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 20:21:46.223584 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 20:21:46.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.230703 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 20:21:46.231952 systemd[1]: Finished systemd-update-utmp.service. Mar 17 20:21:46.232979 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 20:21:46.233122 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 20:21:46.233987 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 20:21:46.234120 systemd[1]: Finished modprobe@drm.service. Mar 17 20:21:46.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:46.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.239631 systemd[1]: Starting systemd-update-done.service... Mar 17 20:21:46.240627 systemd[1]: Finished ensure-sysext.service. Mar 17 20:21:46.241541 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 20:21:46.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.254917 systemd[1]: Finished systemd-update-done.service. Mar 17 20:21:46.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 20:21:46.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 20:21:46.255899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 20:21:46.256040 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 20:21:46.256728 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 20:21:46.256862 systemd[1]: Finished modprobe@loop.service. Mar 17 20:21:46.257395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 20:21:46.257430 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.305000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 20:21:46.305000 audit[1227]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb07285d0 a2=420 a3=0 items=0 ppid=1178 pid=1227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 20:21:46.305000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 20:21:46.305893 augenrules[1227]: No rules Mar 17 20:21:46.306429 systemd[1]: Finished audit-rules.service. Mar 17 20:21:46.317066 systemd[1]: Started systemd-timesyncd.service. Mar 17 20:21:46.317760 systemd[1]: Reached target time-set.target. Mar 17 20:21:46.325584 systemd-resolved[1181]: Positive Trust Anchors: Mar 17 20:21:46.325600 systemd-resolved[1181]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 20:21:46.325639 systemd-resolved[1181]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 20:21:46.333228 systemd-resolved[1181]: Using system hostname 'ci-3510-3-7-8-ce231ec735.novalocal'. Mar 17 20:21:46.335020 systemd[1]: Started systemd-resolved.service. Mar 17 20:21:46.335628 systemd[1]: Reached target network.target. Mar 17 20:21:46.336061 systemd[1]: Reached target network-online.target. Mar 17 20:21:46.336528 systemd[1]: Reached target nss-lookup.target. Mar 17 20:21:46.336971 systemd[1]: Reached target sysinit.target. Mar 17 20:21:46.337513 systemd[1]: Started motdgen.path. Mar 17 20:21:46.337951 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 20:21:46.338595 systemd[1]: Started logrotate.timer. Mar 17 20:21:46.339136 systemd[1]: Started mdadm.timer. Mar 17 20:21:46.339569 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 20:21:46.340015 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 20:21:46.340042 systemd[1]: Reached target paths.target. Mar 17 20:21:46.340474 systemd[1]: Reached target timers.target. Mar 17 20:21:46.341146 systemd[1]: Listening on dbus.socket. Mar 17 20:21:46.342732 systemd[1]: Starting docker.socket... Mar 17 20:21:46.345155 systemd[1]: Listening on sshd.socket. 
Mar 17 20:21:46.345718 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 20:21:46.346032 systemd[1]: Listening on docker.socket. Mar 17 20:21:46.346535 systemd[1]: Reached target sockets.target. Mar 17 20:21:46.347027 systemd[1]: Reached target basic.target. Mar 17 20:21:46.347693 systemd[1]: System is tainted: cgroupsv1 Mar 17 20:21:46.347741 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.347766 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 20:21:46.348938 systemd[1]: Starting containerd.service... Mar 17 20:21:46.350244 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Mar 17 20:21:46.351927 systemd[1]: Starting dbus.service... Mar 17 20:21:46.353349 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 20:21:46.354946 systemd[1]: Starting extend-filesystems.service... Mar 17 20:21:46.358298 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 20:21:46.360269 systemd[1]: Starting kubelet.service... Mar 17 20:21:46.362100 systemd[1]: Starting motdgen.service... Mar 17 20:21:46.368078 systemd[1]: Starting prepare-helm.service... Mar 17 20:21:46.369906 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 20:21:46.373490 systemd[1]: Starting sshd-keygen.service... Mar 17 20:21:46.385117 systemd[1]: Starting systemd-logind.service... Mar 17 20:21:46.387532 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 20:21:46.387602 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 20:21:46.393502 systemd[1]: Starting update-engine.service... Mar 17 20:21:46.395579 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 20:21:46.403292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 20:21:46.403619 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 20:21:46.413379 jq[1257]: true Mar 17 20:21:46.418178 jq[1239]: false Mar 17 20:21:46.418958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 20:21:46.419264 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 20:21:46.431813 extend-filesystems[1240]: Found loop1 Mar 17 20:21:46.432740 extend-filesystems[1240]: Found vda Mar 17 20:21:46.433277 extend-filesystems[1240]: Found vda1 Mar 17 20:21:46.433851 extend-filesystems[1240]: Found vda2 Mar 17 20:21:46.434744 extend-filesystems[1240]: Found vda3 Mar 17 20:21:46.434744 extend-filesystems[1240]: Found usr Mar 17 20:21:46.434744 extend-filesystems[1240]: Found vda4 Mar 17 20:21:46.434744 extend-filesystems[1240]: Found vda6 Mar 17 20:21:46.434744 extend-filesystems[1240]: Found vda7 Mar 17 20:21:46.434744 extend-filesystems[1240]: Found vda9 Mar 17 20:21:46.434744 extend-filesystems[1240]: Checking size of /dev/vda9 Mar 17 20:21:46.441890 jq[1264]: true Mar 17 20:21:46.478461 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 20:21:46.478760 systemd[1]: Finished motdgen.service. Mar 17 20:21:46.557565 tar[1262]: linux-amd64/helm Mar 17 20:21:46.578868 extend-filesystems[1240]: Resized partition /dev/vda9 Mar 17 20:21:46.587512 systemd-logind[1254]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 20:21:46.587536 systemd-logind[1254]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 20:21:46.589421 systemd-logind[1254]: New seat seat0. 
Mar 17 20:21:46.596803 env[1292]: time="2025-03-17T20:21:46.596741685Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 20:21:46.648250 extend-filesystems[1303]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 20:21:46.672185 env[1292]: time="2025-03-17T20:21:46.649127895Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.672901142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674258017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674288123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674675961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674697261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674712329Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674726265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.674809531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675079 env[1292]: time="2025-03-17T20:21:46.675046516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675574 env[1292]: time="2025-03-17T20:21:46.675553286Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 20:21:46.675639 env[1292]: time="2025-03-17T20:21:46.675625832Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 20:21:46.675742 env[1292]: time="2025-03-17T20:21:46.675725139Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 20:21:46.675818 env[1292]: time="2025-03-17T20:21:46.675804267Z" level=info msg="metadata content store policy set" policy=shared Mar 17 20:21:46.733341 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Mar 17 20:21:46.742333 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Mar 17 20:21:46.755745 dbus-daemon[1238]: [system] SELinux support is enabled Mar 17 20:21:46.755975 systemd[1]: Started dbus.service. Mar 17 20:21:46.763129 dbus-daemon[1238]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 20:21:46.762428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 20:21:46.762453 systemd[1]: Reached target system-config.target. 
Mar 17 20:21:46.764486 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 20:21:46.764513 systemd[1]: Reached target user-config.target. Mar 17 20:21:46.765647 systemd[1]: Started systemd-logind.service. Mar 17 20:21:46.805508 extend-filesystems[1303]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 20:21:46.805508 extend-filesystems[1303]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 20:21:46.805508 extend-filesystems[1303]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Mar 17 20:21:46.814187 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 20:21:46.839597 extend-filesystems[1240]: Resized filesystem in /dev/vda9 Mar 17 20:21:46.842485 update_engine[1256]: I0317 20:21:46.812924 1256 main.cc:92] Flatcar Update Engine starting Mar 17 20:21:46.842732 bash[1286]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.828653646Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.828848451Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.828925065Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829043908Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829131161Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829247640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829407670Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829452965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829530571Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829612043Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829650996Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.829778385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.830173977Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 20:21:46.842818 env[1292]: time="2025-03-17T20:21:46.830695846Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 20:21:46.814785 systemd[1]: Finished extend-filesystems.service. Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836074884Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836194098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836275641Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836664019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836749580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836789685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836862962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836935759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.836974582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837044333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837078827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837155040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837780163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837879519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.843266 env[1292]: time="2025-03-17T20:21:46.837918502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.832407 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 20:21:46.843691 env[1292]: time="2025-03-17T20:21:46.837949340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 20:21:46.843691 env[1292]: time="2025-03-17T20:21:46.838028248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 20:21:46.843691 env[1292]: time="2025-03-17T20:21:46.838060348Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 20:21:46.843691 env[1292]: time="2025-03-17T20:21:46.838107426Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 20:21:46.843691 env[1292]: time="2025-03-17T20:21:46.838717701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 20:21:46.839308 systemd[1]: Started update-engine.service. Mar 17 20:21:46.845247 systemd[1]: Started locksmithd.service. 
Mar 17 20:21:46.847406 update_engine[1256]: I0317 20:21:46.846925 1256 update_check_scheduler.cc:74] Next update check in 6m50s Mar 17 20:21:46.852055 env[1292]: time="2025-03-17T20:21:46.850862489Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 20:21:46.852055 env[1292]: time="2025-03-17T20:21:46.851087290Z" level=info msg="Connect containerd service" Mar 17 20:21:46.852055 env[1292]: time="2025-03-17T20:21:46.851171588Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 20:21:46.861590 env[1292]: time="2025-03-17T20:21:46.860044119Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:21:46.861590 env[1292]: time="2025-03-17T20:21:46.860352357Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 20:21:46.861590 env[1292]: time="2025-03-17T20:21:46.860400738Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 20:21:46.861590 env[1292]: time="2025-03-17T20:21:46.860451383Z" level=info msg="containerd successfully booted in 0.293834s" Mar 17 20:21:46.860598 systemd[1]: Started containerd.service. 
Mar 17 20:21:46.861792 env[1292]: time="2025-03-17T20:21:46.861665920Z" level=info msg="Start subscribing containerd event" Mar 17 20:21:46.861825 env[1292]: time="2025-03-17T20:21:46.861791025Z" level=info msg="Start recovering state" Mar 17 20:21:46.861921 env[1292]: time="2025-03-17T20:21:46.861895571Z" level=info msg="Start event monitor" Mar 17 20:21:46.861921 env[1292]: time="2025-03-17T20:21:46.861917843Z" level=info msg="Start snapshots syncer" Mar 17 20:21:46.861981 env[1292]: time="2025-03-17T20:21:46.861931338Z" level=info msg="Start cni network conf syncer for default" Mar 17 20:21:46.861981 env[1292]: time="2025-03-17T20:21:46.861959872Z" level=info msg="Start streaming server" Mar 17 20:21:47.271870 tar[1262]: linux-amd64/LICENSE Mar 17 20:21:47.271870 tar[1262]: linux-amd64/README.md Mar 17 20:21:47.281017 systemd[1]: Finished prepare-helm.service. Mar 17 20:21:47.471165 locksmithd[1311]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 20:21:48.171088 systemd[1]: Started kubelet.service. Mar 17 20:21:49.330999 kubelet[1325]: E0317 20:21:49.330960 1325 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:21:49.332730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:21:49.332929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:21:50.465811 sshd_keygen[1269]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 20:21:50.493032 systemd[1]: Finished sshd-keygen.service. Mar 17 20:21:50.496285 systemd[1]: Starting issuegen.service... Mar 17 20:21:50.502767 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 20:21:50.502993 systemd[1]: Finished issuegen.service. 
Mar 17 20:21:50.505047 systemd[1]: Starting systemd-user-sessions.service... Mar 17 20:21:50.513754 systemd[1]: Finished systemd-user-sessions.service. Mar 17 20:21:50.516049 systemd[1]: Started getty@tty1.service. Mar 17 20:21:50.517835 systemd[1]: Started serial-getty@ttyS0.service. Mar 17 20:21:50.518744 systemd[1]: Reached target getty.target. Mar 17 20:21:53.456904 coreos-metadata[1237]: Mar 17 20:21:53.456 WARN failed to locate config-drive, using the metadata service API instead Mar 17 20:21:53.566499 coreos-metadata[1237]: Mar 17 20:21:53.566 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 17 20:21:53.753174 coreos-metadata[1237]: Mar 17 20:21:53.752 INFO Fetch successful Mar 17 20:21:53.753588 coreos-metadata[1237]: Mar 17 20:21:53.753 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 17 20:21:53.771499 coreos-metadata[1237]: Mar 17 20:21:53.771 INFO Fetch successful Mar 17 20:21:53.776402 unknown[1237]: wrote ssh authorized keys file for user: core Mar 17 20:21:53.807122 update-ssh-keys[1353]: Updated "/home/core/.ssh/authorized_keys" Mar 17 20:21:53.808087 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Mar 17 20:21:53.810128 systemd[1]: Reached target multi-user.target. Mar 17 20:21:53.814932 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 20:21:53.834770 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 20:21:53.835283 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 20:21:53.847300 systemd[1]: Startup finished in 9.609s (kernel) + 15.556s (userspace) = 25.165s. Mar 17 20:21:56.020096 systemd[1]: Created slice system-sshd.slice. Mar 17 20:21:56.022701 systemd[1]: Started sshd@0-172.24.4.115:22-172.24.4.1:55034.service. Mar 17 20:21:56.344704 systemd-timesyncd[1182]: Timed out waiting for reply from 69.30.247.121:123 (0.flatcar.pool.ntp.org). 
Mar 17 20:21:57.491134 systemd-timesyncd[1182]: Contacted time server 74.208.25.46:123 (0.flatcar.pool.ntp.org). Mar 17 20:21:57.491255 systemd-timesyncd[1182]: Initial clock synchronization to Mon 2025-03-17 20:21:57.490796 UTC. Mar 17 20:21:57.491909 systemd-resolved[1181]: Clock change detected. Flushing caches. Mar 17 20:21:58.582442 sshd[1358]: Accepted publickey for core from 172.24.4.1 port 55034 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:21:58.587592 sshd[1358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:21:58.614302 systemd[1]: Created slice user-500.slice. Mar 17 20:21:58.617243 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 20:21:58.623529 systemd-logind[1254]: New session 1 of user core. Mar 17 20:21:58.645108 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 20:21:58.650207 systemd[1]: Starting user@500.service... Mar 17 20:21:58.661751 (systemd)[1363]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:21:58.765956 systemd[1363]: Queued start job for default target default.target. Mar 17 20:21:58.766652 systemd[1363]: Reached target paths.target. Mar 17 20:21:58.766671 systemd[1363]: Reached target sockets.target. Mar 17 20:21:58.766686 systemd[1363]: Reached target timers.target. Mar 17 20:21:58.766699 systemd[1363]: Reached target basic.target. Mar 17 20:21:58.766743 systemd[1363]: Reached target default.target. Mar 17 20:21:58.766769 systemd[1363]: Startup finished in 91ms. Mar 17 20:21:58.767720 systemd[1]: Started user@500.service. Mar 17 20:21:58.770191 systemd[1]: Started session-1.scope. Mar 17 20:21:59.238471 systemd[1]: Started sshd@1-172.24.4.115:22-172.24.4.1:55046.service. 
Mar 17 20:22:00.369199 sshd[1372]: Accepted publickey for core from 172.24.4.1 port 55046 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:22:00.375068 sshd[1372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:22:00.376963 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 20:22:00.377577 systemd[1]: Stopped kubelet.service. Mar 17 20:22:00.380529 systemd[1]: Starting kubelet.service... Mar 17 20:22:00.397994 systemd[1]: Started session-2.scope. Mar 17 20:22:00.398767 systemd-logind[1254]: New session 2 of user core. Mar 17 20:22:00.658334 systemd[1]: Started kubelet.service. Mar 17 20:22:00.878007 kubelet[1384]: E0317 20:22:00.877866 1384 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:22:00.885377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:22:00.885761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:22:00.980477 sshd[1372]: pam_unix(sshd:session): session closed for user core Mar 17 20:22:00.984986 systemd[1]: Started sshd@2-172.24.4.115:22-172.24.4.1:55052.service. Mar 17 20:22:00.992286 systemd[1]: sshd@1-172.24.4.115:22-172.24.4.1:55046.service: Deactivated successfully. Mar 17 20:22:00.997049 systemd-logind[1254]: Session 2 logged out. Waiting for processes to exit. Mar 17 20:22:00.997217 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 20:22:01.003764 systemd-logind[1254]: Removed session 2. 
Mar 17 20:22:02.114231 sshd[1393]: Accepted publickey for core from 172.24.4.1 port 55052 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:22:02.117789 sshd[1393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:22:02.128067 systemd-logind[1254]: New session 3 of user core. Mar 17 20:22:02.128891 systemd[1]: Started session-3.scope. Mar 17 20:22:02.755140 sshd[1393]: pam_unix(sshd:session): session closed for user core Mar 17 20:22:02.756169 systemd[1]: Started sshd@3-172.24.4.115:22-172.24.4.1:55060.service. Mar 17 20:22:02.765101 systemd[1]: sshd@2-172.24.4.115:22-172.24.4.1:55052.service: Deactivated successfully. Mar 17 20:22:02.769878 systemd-logind[1254]: Session 3 logged out. Waiting for processes to exit. Mar 17 20:22:02.769990 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 20:22:02.774659 systemd-logind[1254]: Removed session 3. Mar 17 20:22:03.964901 sshd[1400]: Accepted publickey for core from 172.24.4.1 port 55060 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:22:03.968366 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:22:03.979252 systemd-logind[1254]: New session 4 of user core. Mar 17 20:22:03.980715 systemd[1]: Started session-4.scope. Mar 17 20:22:04.757867 sshd[1400]: pam_unix(sshd:session): session closed for user core Mar 17 20:22:04.759224 systemd[1]: Started sshd@4-172.24.4.115:22-172.24.4.1:52748.service. Mar 17 20:22:04.769176 systemd[1]: sshd@3-172.24.4.115:22-172.24.4.1:55060.service: Deactivated successfully. Mar 17 20:22:04.773872 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 20:22:04.775123 systemd-logind[1254]: Session 4 logged out. Waiting for processes to exit. Mar 17 20:22:04.778817 systemd-logind[1254]: Removed session 4. 
Mar 17 20:22:06.101124 sshd[1407]: Accepted publickey for core from 172.24.4.1 port 52748 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:22:06.103792 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:22:06.113507 systemd-logind[1254]: New session 5 of user core. Mar 17 20:22:06.114810 systemd[1]: Started session-5.scope. Mar 17 20:22:06.731533 sudo[1413]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 20:22:06.732146 sudo[1413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 20:22:06.787534 systemd[1]: Starting docker.service... Mar 17 20:22:06.841503 env[1423]: time="2025-03-17T20:22:06.841442061Z" level=info msg="Starting up" Mar 17 20:22:06.843477 env[1423]: time="2025-03-17T20:22:06.843435029Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 20:22:06.843692 env[1423]: time="2025-03-17T20:22:06.843620867Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 20:22:06.843958 env[1423]: time="2025-03-17T20:22:06.843893118Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 20:22:06.844129 env[1423]: time="2025-03-17T20:22:06.844097812Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 20:22:06.847985 env[1423]: time="2025-03-17T20:22:06.847926703Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 20:22:06.848198 env[1423]: time="2025-03-17T20:22:06.848165220Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 20:22:06.848367 env[1423]: time="2025-03-17T20:22:06.848329949Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 20:22:06.848568 env[1423]: time="2025-03-17T20:22:06.848531738Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 20:22:06.865921 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport407427186-merged.mount: Deactivated successfully. Mar 17 20:22:07.600628 env[1423]: time="2025-03-17T20:22:07.600559726Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 20:22:07.600977 env[1423]: time="2025-03-17T20:22:07.600942263Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 20:22:07.601534 env[1423]: time="2025-03-17T20:22:07.601492225Z" level=info msg="Loading containers: start." Mar 17 20:22:07.915522 kernel: Initializing XFRM netlink socket Mar 17 20:22:07.977377 env[1423]: time="2025-03-17T20:22:07.977326111Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 20:22:08.072522 systemd-networkd[1038]: docker0: Link UP Mar 17 20:22:08.097654 env[1423]: time="2025-03-17T20:22:08.097607379Z" level=info msg="Loading containers: done." Mar 17 20:22:08.115333 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2978255839-merged.mount: Deactivated successfully. Mar 17 20:22:08.122754 env[1423]: time="2025-03-17T20:22:08.122683619Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 20:22:08.122915 env[1423]: time="2025-03-17T20:22:08.122857716Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 20:22:08.122985 env[1423]: time="2025-03-17T20:22:08.122951001Z" level=info msg="Daemon has completed initialization" Mar 17 20:22:08.161165 systemd[1]: Started docker.service. 
Mar 17 20:22:08.181818 env[1423]: time="2025-03-17T20:22:08.181602271Z" level=info msg="API listen on /run/docker.sock" Mar 17 20:22:10.832451 env[1292]: time="2025-03-17T20:22:10.832328718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 20:22:11.136724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 20:22:11.137160 systemd[1]: Stopped kubelet.service. Mar 17 20:22:11.141037 systemd[1]: Starting kubelet.service... Mar 17 20:22:11.447956 systemd[1]: Started kubelet.service. Mar 17 20:22:11.567200 kubelet[1560]: E0317 20:22:11.567162 1560 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:22:11.571879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:22:11.572031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:22:11.979044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1774930834.mount: Deactivated successfully. 
Mar 17 20:22:14.566909 env[1292]: time="2025-03-17T20:22:14.566802673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:14.569859 env[1292]: time="2025-03-17T20:22:14.569807338Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:14.574095 env[1292]: time="2025-03-17T20:22:14.574042942Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:14.577668 env[1292]: time="2025-03-17T20:22:14.577619259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:14.580042 env[1292]: time="2025-03-17T20:22:14.579981900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 20:22:14.593765 env[1292]: time="2025-03-17T20:22:14.593680702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 20:22:17.402275 env[1292]: time="2025-03-17T20:22:17.402203386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:17.405376 env[1292]: time="2025-03-17T20:22:17.405328968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 20:22:17.408539 env[1292]: time="2025-03-17T20:22:17.408492912Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:17.413843 env[1292]: time="2025-03-17T20:22:17.413819061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:17.415763 env[1292]: time="2025-03-17T20:22:17.415696362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 20:22:17.432197 env[1292]: time="2025-03-17T20:22:17.432135824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 20:22:19.351883 env[1292]: time="2025-03-17T20:22:19.351766892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:19.354883 env[1292]: time="2025-03-17T20:22:19.354838002Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:19.357637 env[1292]: time="2025-03-17T20:22:19.357598108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:19.361196 env[1292]: time="2025-03-17T20:22:19.361152114Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:19.363444 env[1292]: time="2025-03-17T20:22:19.363351860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 20:22:19.377435 env[1292]: time="2025-03-17T20:22:19.377322672Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 20:22:20.920252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144929335.mount: Deactivated successfully. Mar 17 20:22:21.636027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 20:22:21.636246 systemd[1]: Stopped kubelet.service. Mar 17 20:22:21.637768 systemd[1]: Starting kubelet.service... Mar 17 20:22:21.804032 systemd[1]: Started kubelet.service. Mar 17 20:22:22.474225 env[1292]: time="2025-03-17T20:22:22.474137587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:22.477736 env[1292]: time="2025-03-17T20:22:22.477685371Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:22.481717 env[1292]: time="2025-03-17T20:22:22.481623818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:22.485319 env[1292]: time="2025-03-17T20:22:22.485259987Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:22.486561 env[1292]: time="2025-03-17T20:22:22.486440822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 20:22:22.526388 env[1292]: time="2025-03-17T20:22:22.526306390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 20:22:22.551040 kubelet[1593]: E0317 20:22:22.550930 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:22:22.554855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:22:22.555018 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 20:22:23.157271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351296767.mount: Deactivated successfully. 
Mar 17 20:22:25.592011 env[1292]: time="2025-03-17T20:22:25.591801405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:25.597945 env[1292]: time="2025-03-17T20:22:25.597838367Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:25.603094 env[1292]: time="2025-03-17T20:22:25.603019434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:25.607929 env[1292]: time="2025-03-17T20:22:25.607875362Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:25.610267 env[1292]: time="2025-03-17T20:22:25.610205312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 20:22:25.636742 env[1292]: time="2025-03-17T20:22:25.636639830Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 20:22:27.245916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918474006.mount: Deactivated successfully. 
Mar 17 20:22:27.260310 env[1292]: time="2025-03-17T20:22:27.260231716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:27.277043 env[1292]: time="2025-03-17T20:22:27.276957515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:27.284247 env[1292]: time="2025-03-17T20:22:27.284169792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:27.287585 env[1292]: time="2025-03-17T20:22:27.287515256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:27.289233 env[1292]: time="2025-03-17T20:22:27.289168627Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 20:22:27.313563 env[1292]: time="2025-03-17T20:22:27.313505050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 20:22:28.557269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474828906.mount: Deactivated successfully. Mar 17 20:22:32.636333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 17 20:22:32.636571 systemd[1]: Stopped kubelet.service. Mar 17 20:22:32.638843 systemd[1]: Starting kubelet.service... 
Mar 17 20:22:32.719094 env[1292]: time="2025-03-17T20:22:32.719035069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:32.747833 update_engine[1256]: I0317 20:22:32.747505 1256 update_attempter.cc:509] Updating boot flags... Mar 17 20:22:32.910624 env[1292]: time="2025-03-17T20:22:32.910168210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:32.942164 systemd[1]: Started kubelet.service. Mar 17 20:22:32.964996 env[1292]: time="2025-03-17T20:22:32.964910563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:32.969169 env[1292]: time="2025-03-17T20:22:32.967704527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:32.969169 env[1292]: time="2025-03-17T20:22:32.969001492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 20:22:33.144650 kubelet[1641]: E0317 20:22:33.144519 1641 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 20:22:33.146463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 20:22:33.146629 systemd[1]: 
kubelet.service: Failed with result 'exit-code'. Mar 17 20:22:38.143579 systemd[1]: Stopped kubelet.service. Mar 17 20:22:38.145975 systemd[1]: Starting kubelet.service... Mar 17 20:22:38.184873 systemd[1]: Reloading. Mar 17 20:22:38.351570 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2025-03-17T20:22:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:22:38.351600 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2025-03-17T20:22:38Z" level=info msg="torcx already run" Mar 17 20:22:38.492120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:22:38.492140 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:22:38.517493 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:22:38.609188 systemd[1]: Started kubelet.service. Mar 17 20:22:38.621446 systemd[1]: Stopping kubelet.service... Mar 17 20:22:38.626565 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:22:38.626807 systemd[1]: Stopped kubelet.service. Mar 17 20:22:38.628392 systemd[1]: Starting kubelet.service... Mar 17 20:22:38.739660 systemd[1]: Started kubelet.service. Mar 17 20:22:38.821375 kubelet[1814]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 20:22:38.821816 kubelet[1814]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 20:22:38.821895 kubelet[1814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:22:38.822045 kubelet[1814]: I0317 20:22:38.822001 1814 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:22:39.582062 kubelet[1814]: I0317 20:22:39.582006 1814 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 20:22:39.582225 kubelet[1814]: I0317 20:22:39.582214 1814 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:22:39.582905 kubelet[1814]: I0317 20:22:39.582887 1814 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 20:22:39.656917 kubelet[1814]: I0317 20:22:39.656869 1814 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:22:39.658945 kubelet[1814]: E0317 20:22:39.658859 1814 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.692619 kubelet[1814]: I0317 20:22:39.692559 1814 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:22:39.701063 kubelet[1814]: I0317 20:22:39.700932 1814 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:22:39.702069 kubelet[1814]: I0317 20:22:39.701320 1814 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-8-ce231ec735.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 20:22:39.704080 kubelet[1814]: I0317 20:22:39.704026 1814 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 20:22:39.704264 kubelet[1814]: I0317 20:22:39.704241 1814 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 20:22:39.704762 kubelet[1814]: I0317 20:22:39.704732 1814 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:22:39.707275 kubelet[1814]: I0317 20:22:39.707244 1814 kubelet.go:400] "Attempting to sync node with API server" Mar 17 20:22:39.707511 kubelet[1814]: I0317 20:22:39.707484 1814 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:22:39.707772 kubelet[1814]: I0317 20:22:39.707746 1814 kubelet.go:312] "Adding apiserver pod source" Mar 17 20:22:39.708040 kubelet[1814]: I0317 20:22:39.708003 1814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:22:39.726176 kubelet[1814]: W0317 20:22:39.725254 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-8-ce231ec735.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.726176 kubelet[1814]: E0317 20:22:39.725492 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-8-ce231ec735.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.739273 kubelet[1814]: I0317 20:22:39.739222 1814 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:22:39.746665 kubelet[1814]: W0317 20:22:39.746507 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.746665 
kubelet[1814]: E0317 20:22:39.746656 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.749987 kubelet[1814]: I0317 20:22:39.749930 1814 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:22:39.750329 kubelet[1814]: W0317 20:22:39.750292 1814 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 20:22:39.753778 kubelet[1814]: I0317 20:22:39.753740 1814 server.go:1264] "Started kubelet" Mar 17 20:22:39.754498 kubelet[1814]: I0317 20:22:39.754392 1814 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:22:39.759593 kubelet[1814]: I0317 20:22:39.759559 1814 server.go:455] "Adding debug handlers to kubelet server" Mar 17 20:22:39.773946 kubelet[1814]: I0317 20:22:39.773869 1814 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:22:39.774288 kubelet[1814]: I0317 20:22:39.774273 1814 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:22:39.774944 kubelet[1814]: E0317 20:22:39.774765 1814 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.115:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-7-8-ce231ec735.novalocal.182db0bdb8897cd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-8-ce231ec735.novalocal,UID:ci-3510-3-7-8-ce231ec735.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-8-ce231ec735.novalocal,},FirstTimestamp:2025-03-17 20:22:39.753575639 +0000 UTC m=+1.002521514,LastTimestamp:2025-03-17 20:22:39.753575639 +0000 UTC m=+1.002521514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-8-ce231ec735.novalocal,}" Mar 17 20:22:39.779153 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Mar 17 20:22:39.780117 kubelet[1814]: I0317 20:22:39.779904 1814 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:22:39.784462 kubelet[1814]: E0317 20:22:39.784425 1814 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 20:22:39.784972 kubelet[1814]: E0317 20:22:39.784953 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:39.785113 kubelet[1814]: I0317 20:22:39.785098 1814 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 20:22:39.785383 kubelet[1814]: I0317 20:22:39.785367 1814 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:22:39.785562 kubelet[1814]: I0317 20:22:39.785540 1814 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:22:39.786129 kubelet[1814]: W0317 20:22:39.786088 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.786246 kubelet[1814]: E0317 20:22:39.786230 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.787699 kubelet[1814]: E0317 20:22:39.787664 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-8-ce231ec735.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="200ms" Mar 17 20:22:39.789258 kubelet[1814]: I0317 20:22:39.789239 1814 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:22:39.789373 kubelet[1814]: I0317 20:22:39.789359 1814 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:22:39.789592 kubelet[1814]: I0317 20:22:39.789568 1814 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:22:39.803919 kubelet[1814]: I0317 20:22:39.803874 1814 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:22:39.804894 kubelet[1814]: I0317 20:22:39.804862 1814 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 20:22:39.804994 kubelet[1814]: I0317 20:22:39.804913 1814 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 20:22:39.804994 kubelet[1814]: I0317 20:22:39.804933 1814 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 20:22:39.804994 kubelet[1814]: E0317 20:22:39.804971 1814 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:22:39.810939 kubelet[1814]: W0317 20:22:39.810607 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.810939 kubelet[1814]: E0317 20:22:39.810654 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:39.842480 kubelet[1814]: I0317 20:22:39.842407 1814 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 20:22:39.842833 kubelet[1814]: I0317 20:22:39.842820 1814 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 20:22:39.842909 kubelet[1814]: I0317 20:22:39.842900 1814 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:22:39.849788 kubelet[1814]: I0317 20:22:39.849773 1814 policy_none.go:49] "None policy: Start" Mar 17 20:22:39.850585 kubelet[1814]: I0317 20:22:39.850573 1814 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 20:22:39.850689 kubelet[1814]: I0317 20:22:39.850678 1814 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:22:39.857050 kubelet[1814]: I0317 20:22:39.857002 1814 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:22:39.857352 kubelet[1814]: I0317 20:22:39.857311 1814 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:22:39.857526 kubelet[1814]: I0317 20:22:39.857515 1814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:22:39.862636 kubelet[1814]: E0317 20:22:39.862614 1814 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:39.887758 kubelet[1814]: I0317 20:22:39.887692 1814 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:39.888302 kubelet[1814]: E0317 20:22:39.888266 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:39.905703 kubelet[1814]: I0317 20:22:39.905651 1814 topology_manager.go:215] "Topology Admit Handler" podUID="ca8bfb58d20b25f74dad0abf7517b0ab" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:39.907972 kubelet[1814]: I0317 20:22:39.907933 1814 topology_manager.go:215] "Topology Admit Handler" podUID="f895f2cfcc8cde9cd9d606f98a9a3e5b" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:39.911010 kubelet[1814]: I0317 20:22:39.910987 1814 topology_manager.go:215] "Topology Admit Handler" podUID="125a069321936458976fa1b99d98e328" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:39.988991 kubelet[1814]: E0317 20:22:39.988932 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-8-ce231ec735.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="400ms" Mar 17 20:22:40.086417 kubelet[1814]: I0317 20:22:40.086367 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.086610 kubelet[1814]: I0317 20:22:40.086593 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f895f2cfcc8cde9cd9d606f98a9a3e5b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"f895f2cfcc8cde9cd9d606f98a9a3e5b\") " pod="kube-system/kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.086738 kubelet[1814]: I0317 20:22:40.086724 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.086857 kubelet[1814]: I0317 20:22:40.086842 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.087006 kubelet[1814]: I0317 
20:22:40.086989 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.087147 kubelet[1814]: I0317 20:22:40.087133 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.087262 kubelet[1814]: I0317 20:22:40.087249 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.087371 kubelet[1814]: I0317 20:22:40.087358 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.087505 kubelet[1814]: I0317 20:22:40.087489 1814 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.090634 kubelet[1814]: I0317 20:22:40.090617 1814 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.091073 kubelet[1814]: E0317 20:22:40.091028 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.217808 env[1292]: time="2025-03-17T20:22:40.217661758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal,Uid:ca8bfb58d20b25f74dad0abf7517b0ab,Namespace:kube-system,Attempt:0,}" Mar 17 20:22:40.225242 env[1292]: time="2025-03-17T20:22:40.225168176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal,Uid:f895f2cfcc8cde9cd9d606f98a9a3e5b,Namespace:kube-system,Attempt:0,}" Mar 17 20:22:40.228755 env[1292]: time="2025-03-17T20:22:40.227844369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal,Uid:125a069321936458976fa1b99d98e328,Namespace:kube-system,Attempt:0,}" Mar 17 20:22:40.390129 kubelet[1814]: E0317 20:22:40.389966 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-8-ce231ec735.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="800ms" Mar 17 20:22:40.495021 kubelet[1814]: I0317 20:22:40.494224 1814 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.495608 kubelet[1814]: E0317 20:22:40.495511 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:40.732800 kubelet[1814]: W0317 20:22:40.732546 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:40.732800 kubelet[1814]: E0317 20:22:40.732734 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:40.823294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346765012.mount: Deactivated successfully. 
Mar 17 20:22:40.840502 env[1292]: time="2025-03-17T20:22:40.840382765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.846756 env[1292]: time="2025-03-17T20:22:40.846703145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.850027 env[1292]: time="2025-03-17T20:22:40.849947474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.857351 env[1292]: time="2025-03-17T20:22:40.857286936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.863278 env[1292]: time="2025-03-17T20:22:40.863223639Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.870591 env[1292]: time="2025-03-17T20:22:40.870490594Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.872586 env[1292]: time="2025-03-17T20:22:40.872534919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.881288 env[1292]: time="2025-03-17T20:22:40.881207347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.887013 env[1292]: time="2025-03-17T20:22:40.886950563Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.889390 env[1292]: time="2025-03-17T20:22:40.889336636Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.891637 env[1292]: time="2025-03-17T20:22:40.891588555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.897939 env[1292]: time="2025-03-17T20:22:40.897870382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:22:40.932226 env[1292]: time="2025-03-17T20:22:40.932108269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:22:40.932226 env[1292]: time="2025-03-17T20:22:40.932206044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:22:40.932564 env[1292]: time="2025-03-17T20:22:40.932239628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:22:40.932802 env[1292]: time="2025-03-17T20:22:40.932760215Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcb035474816e469e77f21e6bad297cde7bb33914e6c5965389e1814140c925f pid=1851 runtime=io.containerd.runc.v2 Mar 17 20:22:41.003899 env[1292]: time="2025-03-17T20:22:41.001220830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:22:41.003899 env[1292]: time="2025-03-17T20:22:41.001262008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:22:41.003899 env[1292]: time="2025-03-17T20:22:41.001276034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:22:41.003899 env[1292]: time="2025-03-17T20:22:41.001426930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/888c4ef73d4ae5fa6202c11532c1bc4c980791a110d83cc2525e74695a993537 pid=1886 runtime=io.containerd.runc.v2 Mar 17 20:22:41.010661 env[1292]: time="2025-03-17T20:22:41.010591691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:22:41.010661 env[1292]: time="2025-03-17T20:22:41.010638790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:22:41.010892 env[1292]: time="2025-03-17T20:22:41.010849599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:22:41.011123 env[1292]: time="2025-03-17T20:22:41.011088892Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c553c725c4013992d62ce29deab937f89115e0f5a83588a4dfbd12f2deaa7de pid=1880 runtime=io.containerd.runc.v2 Mar 17 20:22:41.026625 kubelet[1814]: W0317 20:22:41.023022 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.026625 kubelet[1814]: E0317 20:22:41.023117 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.094765 env[1292]: time="2025-03-17T20:22:41.093992682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal,Uid:f895f2cfcc8cde9cd9d606f98a9a3e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcb035474816e469e77f21e6bad297cde7bb33914e6c5965389e1814140c925f\"" Mar 17 20:22:41.102125 env[1292]: time="2025-03-17T20:22:41.102085893Z" level=info msg="CreateContainer within sandbox \"bcb035474816e469e77f21e6bad297cde7bb33914e6c5965389e1814140c925f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 20:22:41.103458 env[1292]: time="2025-03-17T20:22:41.102272917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal,Uid:ca8bfb58d20b25f74dad0abf7517b0ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c553c725c4013992d62ce29deab937f89115e0f5a83588a4dfbd12f2deaa7de\"" Mar 17 20:22:41.106317 env[1292]: 
time="2025-03-17T20:22:41.106277652Z" level=info msg="CreateContainer within sandbox \"8c553c725c4013992d62ce29deab937f89115e0f5a83588a4dfbd12f2deaa7de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 20:22:41.138836 env[1292]: time="2025-03-17T20:22:41.138789347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal,Uid:125a069321936458976fa1b99d98e328,Namespace:kube-system,Attempt:0,} returns sandbox id \"888c4ef73d4ae5fa6202c11532c1bc4c980791a110d83cc2525e74695a993537\"" Mar 17 20:22:41.140275 env[1292]: time="2025-03-17T20:22:41.140228954Z" level=info msg="CreateContainer within sandbox \"8c553c725c4013992d62ce29deab937f89115e0f5a83588a4dfbd12f2deaa7de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6f2da4bde69ce2254207981c69215be2f1e028ac60ad3c0962a029a1cd6262c\"" Mar 17 20:22:41.141092 env[1292]: time="2025-03-17T20:22:41.140856533Z" level=info msg="StartContainer for \"f6f2da4bde69ce2254207981c69215be2f1e028ac60ad3c0962a029a1cd6262c\"" Mar 17 20:22:41.142479 env[1292]: time="2025-03-17T20:22:41.142449290Z" level=info msg="CreateContainer within sandbox \"888c4ef73d4ae5fa6202c11532c1bc4c980791a110d83cc2525e74695a993537\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 20:22:41.150864 env[1292]: time="2025-03-17T20:22:41.150821208Z" level=info msg="CreateContainer within sandbox \"bcb035474816e469e77f21e6bad297cde7bb33914e6c5965389e1814140c925f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"698bdf6e92ee21654ec7855877aa496a57d66615cd922b6945c97288bf7fb220\"" Mar 17 20:22:41.153016 env[1292]: time="2025-03-17T20:22:41.152980890Z" level=info msg="StartContainer for \"698bdf6e92ee21654ec7855877aa496a57d66615cd922b6945c97288bf7fb220\"" Mar 17 20:22:41.164240 kubelet[1814]: W0317 20:22:41.164127 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-8-ce231ec735.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.164240 kubelet[1814]: E0317 20:22:41.164216 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-7-8-ce231ec735.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.174021 env[1292]: time="2025-03-17T20:22:41.173964518Z" level=info msg="CreateContainer within sandbox \"888c4ef73d4ae5fa6202c11532c1bc4c980791a110d83cc2525e74695a993537\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a3f801c4e2abba6957a42c56fe278a28863ec6e20b17b3cb5185a65c7996faa\"" Mar 17 20:22:41.175783 env[1292]: time="2025-03-17T20:22:41.175759567Z" level=info msg="StartContainer for \"9a3f801c4e2abba6957a42c56fe278a28863ec6e20b17b3cb5185a65c7996faa\"" Mar 17 20:22:41.191497 kubelet[1814]: E0317 20:22:41.191340 1814 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-7-8-ce231ec735.novalocal?timeout=10s\": dial tcp 172.24.4.115:6443: connect: connection refused" interval="1.6s" Mar 17 20:22:41.238071 env[1292]: time="2025-03-17T20:22:41.238030774Z" level=info msg="StartContainer for \"f6f2da4bde69ce2254207981c69215be2f1e028ac60ad3c0962a029a1cd6262c\" returns successfully" Mar 17 20:22:41.267141 env[1292]: time="2025-03-17T20:22:41.267100429Z" level=info msg="StartContainer for \"698bdf6e92ee21654ec7855877aa496a57d66615cd922b6945c97288bf7fb220\" returns successfully" Mar 17 20:22:41.298691 kubelet[1814]: I0317 20:22:41.298251 1814 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-8-ce231ec735.novalocal" 
Mar 17 20:22:41.298691 kubelet[1814]: E0317 20:22:41.298665 1814 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.115:6443/api/v1/nodes\": dial tcp 172.24.4.115:6443: connect: connection refused" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:41.304635 kubelet[1814]: W0317 20:22:41.304527 1814 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.304635 kubelet[1814]: E0317 20:22:41.304610 1814 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.115:6443: connect: connection refused Mar 17 20:22:41.320367 env[1292]: time="2025-03-17T20:22:41.320324758Z" level=info msg="StartContainer for \"9a3f801c4e2abba6957a42c56fe278a28863ec6e20b17b3cb5185a65c7996faa\" returns successfully" Mar 17 20:22:42.900487 kubelet[1814]: I0317 20:22:42.900463 1814 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:43.245343 kubelet[1814]: I0317 20:22:43.245244 1814 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:43.282957 kubelet[1814]: E0317 20:22:43.282926 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:43.383538 kubelet[1814]: E0317 20:22:43.383496 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:43.484755 kubelet[1814]: E0317 20:22:43.484720 1814 kubelet_node_status.go:462] "Error getting the current node from lister" 
err="node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:43.586149 kubelet[1814]: E0317 20:22:43.585750 1814 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-7-8-ce231ec735.novalocal\" not found" Mar 17 20:22:43.719103 kubelet[1814]: I0317 20:22:43.719060 1814 apiserver.go:52] "Watching apiserver" Mar 17 20:22:43.786120 kubelet[1814]: I0317 20:22:43.786018 1814 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:22:46.284695 systemd[1]: Reloading. Mar 17 20:22:46.436722 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2025-03-17T20:22:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 20:22:46.437167 /usr/lib/systemd/system-generators/torcx-generator[2096]: time="2025-03-17T20:22:46Z" level=info msg="torcx already run" Mar 17 20:22:46.539804 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 20:22:46.540257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 20:22:46.567311 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 20:22:46.664068 systemd[1]: Stopping kubelet.service... 
Mar 17 20:22:46.664985 kubelet[1814]: E0317 20:22:46.664724 1814 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510-3-7-8-ce231ec735.novalocal.182db0bdb8897cd7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-7-8-ce231ec735.novalocal,UID:ci-3510-3-7-8-ce231ec735.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-7-8-ce231ec735.novalocal,},FirstTimestamp:2025-03-17 20:22:39.753575639 +0000 UTC m=+1.002521514,LastTimestamp:2025-03-17 20:22:39.753575639 +0000 UTC m=+1.002521514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-7-8-ce231ec735.novalocal,}" Mar 17 20:22:46.686934 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 20:22:46.687221 systemd[1]: Stopped kubelet.service. Mar 17 20:22:46.689214 systemd[1]: Starting kubelet.service... Mar 17 20:22:46.817523 systemd[1]: Started kubelet.service. Mar 17 20:22:47.062595 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 20:22:47.062595 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 20:22:47.062595 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 20:22:47.063017 kubelet[2157]: I0317 20:22:47.062693 2157 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 20:22:47.067875 kubelet[2157]: I0317 20:22:47.067848 2157 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 20:22:47.068051 kubelet[2157]: I0317 20:22:47.068036 2157 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 20:22:47.068436 kubelet[2157]: I0317 20:22:47.068412 2157 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 20:22:47.070551 kubelet[2157]: I0317 20:22:47.070257 2157 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 20:22:47.074664 kubelet[2157]: I0317 20:22:47.074605 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 20:22:47.089698 kubelet[2157]: I0317 20:22:47.089665 2157 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 20:22:47.090447 kubelet[2157]: I0317 20:22:47.090414 2157 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 20:22:47.090718 kubelet[2157]: I0317 20:22:47.090507 2157 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-7-8-ce231ec735.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 20:22:47.090890 kubelet[2157]: I0317 20:22:47.090877 2157 topology_manager.go:138] "Creating topology manager with none 
policy" Mar 17 20:22:47.090959 kubelet[2157]: I0317 20:22:47.090951 2157 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 20:22:47.091072 kubelet[2157]: I0317 20:22:47.091062 2157 state_mem.go:36] "Initialized new in-memory state store" Mar 17 20:22:47.091242 kubelet[2157]: I0317 20:22:47.091231 2157 kubelet.go:400] "Attempting to sync node with API server" Mar 17 20:22:47.091317 kubelet[2157]: I0317 20:22:47.091307 2157 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 20:22:47.091468 kubelet[2157]: I0317 20:22:47.091458 2157 kubelet.go:312] "Adding apiserver pod source" Mar 17 20:22:47.091577 kubelet[2157]: I0317 20:22:47.091566 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 20:22:47.094544 kubelet[2157]: I0317 20:22:47.094517 2157 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 20:22:47.094839 kubelet[2157]: I0317 20:22:47.094826 2157 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 20:22:47.095393 kubelet[2157]: I0317 20:22:47.095380 2157 server.go:1264] "Started kubelet" Mar 17 20:22:47.106211 kubelet[2157]: I0317 20:22:47.098649 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 20:22:47.106211 kubelet[2157]: I0317 20:22:47.104121 2157 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 20:22:47.106211 kubelet[2157]: I0317 20:22:47.105798 2157 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 20:22:47.106211 kubelet[2157]: I0317 20:22:47.105925 2157 reconciler.go:26] "Reconciler: start to sync state" Mar 17 20:22:47.107991 kubelet[2157]: I0317 20:22:47.107563 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 20:22:47.108471 kubelet[2157]: I0317 20:22:47.108451 2157 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 20:22:47.108529 kubelet[2157]: I0317 20:22:47.108487 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 20:22:47.108529 kubelet[2157]: I0317 20:22:47.108507 2157 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 20:22:47.108588 kubelet[2157]: E0317 20:22:47.108555 2157 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 20:22:47.112205 kubelet[2157]: I0317 20:22:47.112157 2157 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 20:22:47.113726 kubelet[2157]: I0317 20:22:47.113712 2157 server.go:455] "Adding debug handlers to kubelet server" Mar 17 20:22:47.115373 kubelet[2157]: I0317 20:22:47.115315 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 20:22:47.115676 kubelet[2157]: I0317 20:22:47.115661 2157 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 20:22:47.119238 kubelet[2157]: I0317 20:22:47.118814 2157 factory.go:221] Registration of the systemd container factory successfully Mar 17 20:22:47.119238 kubelet[2157]: I0317 20:22:47.118896 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 20:22:47.130431 kubelet[2157]: I0317 20:22:47.127632 2157 factory.go:221] Registration of the containerd container factory successfully Mar 17 20:22:47.197479 kubelet[2157]: I0317 20:22:47.197454 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 20:22:47.197643 kubelet[2157]: I0317 20:22:47.197629 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 20:22:47.197725 kubelet[2157]: I0317 20:22:47.197716 2157 state_mem.go:36] "Initialized new in-memory state store" Mar 
17 20:22:47.198003 kubelet[2157]: I0317 20:22:47.197988 2157 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 20:22:47.198105 kubelet[2157]: I0317 20:22:47.198078 2157 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 20:22:47.198181 kubelet[2157]: I0317 20:22:47.198172 2157 policy_none.go:49] "None policy: Start" Mar 17 20:22:47.199021 kubelet[2157]: I0317 20:22:47.199005 2157 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 20:22:47.199120 kubelet[2157]: I0317 20:22:47.199110 2157 state_mem.go:35] "Initializing new in-memory state store" Mar 17 20:22:47.199394 kubelet[2157]: I0317 20:22:47.199382 2157 state_mem.go:75] "Updated machine memory state" Mar 17 20:22:47.202464 kubelet[2157]: I0317 20:22:47.200861 2157 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 20:22:47.203138 kubelet[2157]: I0317 20:22:47.202697 2157 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 20:22:47.203138 kubelet[2157]: I0317 20:22:47.202844 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 20:22:47.209067 kubelet[2157]: I0317 20:22:47.209009 2157 topology_manager.go:215] "Topology Admit Handler" podUID="f895f2cfcc8cde9cd9d606f98a9a3e5b" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.209683 kubelet[2157]: I0317 20:22:47.209630 2157 topology_manager.go:215] "Topology Admit Handler" podUID="125a069321936458976fa1b99d98e328" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.209992 kubelet[2157]: I0317 20:22:47.209974 2157 topology_manager.go:215] "Topology Admit Handler" podUID="ca8bfb58d20b25f74dad0abf7517b0ab" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.220546 kubelet[2157]: I0317 20:22:47.209998 2157 
kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.233839 kubelet[2157]: W0317 20:22:47.233682 2157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:22:47.234068 kubelet[2157]: W0317 20:22:47.233988 2157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:22:47.234209 kubelet[2157]: W0317 20:22:47.234133 2157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:22:47.238602 kubelet[2157]: I0317 20:22:47.238568 2157 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.238864 kubelet[2157]: I0317 20:22:47.238849 2157 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.261744 sudo[2188]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 20:22:47.262025 sudo[2188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 20:22:47.407159 kubelet[2157]: I0317 20:22:47.407121 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f895f2cfcc8cde9cd9d606f98a9a3e5b-kubeconfig\") pod \"kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"f895f2cfcc8cde9cd9d606f98a9a3e5b\") " pod="kube-system/kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407287 kubelet[2157]: I0317 20:22:47.407166 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-ca-certs\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407287 kubelet[2157]: I0317 20:22:47.407193 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407287 kubelet[2157]: I0317 20:22:47.407212 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407287 kubelet[2157]: I0317 20:22:47.407232 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407444 kubelet[2157]: I0317 20:22:47.407254 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-k8s-certs\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " 
pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407444 kubelet[2157]: I0317 20:22:47.407272 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/125a069321936458976fa1b99d98e328-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"125a069321936458976fa1b99d98e328\") " pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407444 kubelet[2157]: I0317 20:22:47.407290 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-ca-certs\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.407444 kubelet[2157]: I0317 20:22:47.407309 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca8bfb58d20b25f74dad0abf7517b0ab-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal\" (UID: \"ca8bfb58d20b25f74dad0abf7517b0ab\") " pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:47.892690 sudo[2188]: pam_unix(sudo:session): session closed for user root Mar 17 20:22:48.092878 kubelet[2157]: I0317 20:22:48.092857 2157 apiserver.go:52] "Watching apiserver" Mar 17 20:22:48.106355 kubelet[2157]: I0317 20:22:48.106331 2157 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 20:22:48.181072 kubelet[2157]: W0317 20:22:48.180962 2157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain 
dots] Mar 17 20:22:48.181313 kubelet[2157]: E0317 20:22:48.181285 2157 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:48.182653 kubelet[2157]: W0317 20:22:48.182641 2157 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 17 20:22:48.182774 kubelet[2157]: E0317 20:22:48.182759 2157 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" Mar 17 20:22:48.216061 kubelet[2157]: I0317 20:22:48.215091 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-7-8-ce231ec735.novalocal" podStartSLOduration=1.215051737 podStartE2EDuration="1.215051737s" podCreationTimestamp="2025-03-17 20:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:22:48.195008867 +0000 UTC m=+1.371388209" watchObservedRunningTime="2025-03-17 20:22:48.215051737 +0000 UTC m=+1.391431068" Mar 17 20:22:48.226269 kubelet[2157]: I0317 20:22:48.225930 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-7-8-ce231ec735.novalocal" podStartSLOduration=1.225913187 podStartE2EDuration="1.225913187s" podCreationTimestamp="2025-03-17 20:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:22:48.215033122 +0000 UTC m=+1.391412473" watchObservedRunningTime="2025-03-17 20:22:48.225913187 +0000 UTC m=+1.402292518" Mar 17 20:22:48.241364 kubelet[2157]: I0317 20:22:48.241262 
2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-7-8-ce231ec735.novalocal" podStartSLOduration=1.241235518 podStartE2EDuration="1.241235518s" podCreationTimestamp="2025-03-17 20:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:22:48.226242508 +0000 UTC m=+1.402621850" watchObservedRunningTime="2025-03-17 20:22:48.241235518 +0000 UTC m=+1.417614889" Mar 17 20:22:50.796975 sudo[1413]: pam_unix(sudo:session): session closed for user root Mar 17 20:22:51.067916 sshd[1407]: pam_unix(sshd:session): session closed for user core Mar 17 20:22:51.073053 systemd[1]: sshd@4-172.24.4.115:22-172.24.4.1:52748.service: Deactivated successfully. Mar 17 20:22:51.075116 systemd-logind[1254]: Session 5 logged out. Waiting for processes to exit. Mar 17 20:22:51.075174 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 20:22:51.076980 systemd-logind[1254]: Removed session 5. Mar 17 20:23:00.165534 kubelet[2157]: I0317 20:23:00.165482 2157 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 20:23:00.166869 env[1292]: time="2025-03-17T20:23:00.166805112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 20:23:00.167296 kubelet[2157]: I0317 20:23:00.167282 2157 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 20:23:01.124967 kubelet[2157]: I0317 20:23:01.124872 2157 topology_manager.go:215] "Topology Admit Handler" podUID="1b589470-1813-44cd-9307-eaa16988a3f4" podNamespace="kube-system" podName="kube-proxy-jl94n" Mar 17 20:23:01.143913 kubelet[2157]: I0317 20:23:01.143869 2157 topology_manager.go:215] "Topology Admit Handler" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" podNamespace="kube-system" podName="cilium-bk7x9" Mar 17 20:23:01.157773 kubelet[2157]: I0317 20:23:01.157725 2157 topology_manager.go:215] "Topology Admit Handler" podUID="ad6a3a27-eff1-44f9-9000-0ff99f375262" podNamespace="kube-system" podName="cilium-operator-599987898-l5vkz" Mar 17 20:23:01.202175 kubelet[2157]: I0317 20:23:01.202138 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-bpf-maps\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202229 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-kernel\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202254 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-hostproc\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202306 2157 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cni-path\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202326 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-xtables-lock\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202343 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b589470-1813-44cd-9307-eaa16988a3f4-lib-modules\") pod \"kube-proxy-jl94n\" (UID: \"1b589470-1813-44cd-9307-eaa16988a3f4\") " pod="kube-system/kube-proxy-jl94n" Mar 17 20:23:01.202614 kubelet[2157]: I0317 20:23:01.202390 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-run\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202808 kubelet[2157]: I0317 20:23:01.202443 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-config-path\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202808 kubelet[2157]: I0317 20:23:01.202509 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-lib-modules\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202808 kubelet[2157]: I0317 20:23:01.202535 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-hubble-tls\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202808 kubelet[2157]: I0317 20:23:01.202650 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdbf\" (UniqueName: \"kubernetes.io/projected/ad6a3a27-eff1-44f9-9000-0ff99f375262-kube-api-access-xwdbf\") pod \"cilium-operator-599987898-l5vkz\" (UID: \"ad6a3a27-eff1-44f9-9000-0ff99f375262\") " pod="kube-system/cilium-operator-599987898-l5vkz" Mar 17 20:23:01.202808 kubelet[2157]: I0317 20:23:01.202674 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad6a3a27-eff1-44f9-9000-0ff99f375262-cilium-config-path\") pod \"cilium-operator-599987898-l5vkz\" (UID: \"ad6a3a27-eff1-44f9-9000-0ff99f375262\") " pod="kube-system/cilium-operator-599987898-l5vkz" Mar 17 20:23:01.202960 kubelet[2157]: I0317 20:23:01.202728 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1b589470-1813-44cd-9307-eaa16988a3f4-kube-proxy\") pod \"kube-proxy-jl94n\" (UID: \"1b589470-1813-44cd-9307-eaa16988a3f4\") " pod="kube-system/kube-proxy-jl94n" Mar 17 20:23:01.202960 kubelet[2157]: I0317 20:23:01.202749 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8zwh\" (UniqueName: 
\"kubernetes.io/projected/1b589470-1813-44cd-9307-eaa16988a3f4-kube-api-access-g8zwh\") pod \"kube-proxy-jl94n\" (UID: \"1b589470-1813-44cd-9307-eaa16988a3f4\") " pod="kube-system/kube-proxy-jl94n" Mar 17 20:23:01.202960 kubelet[2157]: I0317 20:23:01.202767 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-cgroup\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202960 kubelet[2157]: I0317 20:23:01.202815 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-etc-cni-netd\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.202960 kubelet[2157]: I0317 20:23:01.202836 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4ab452-f321-4924-aa28-9e67455b0b09-clustermesh-secrets\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.203122 kubelet[2157]: I0317 20:23:01.202889 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-net\") pod \"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.203122 kubelet[2157]: I0317 20:23:01.202909 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjdvt\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-kube-api-access-rjdvt\") pod 
\"cilium-bk7x9\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") " pod="kube-system/cilium-bk7x9" Mar 17 20:23:01.203122 kubelet[2157]: I0317 20:23:01.202926 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b589470-1813-44cd-9307-eaa16988a3f4-xtables-lock\") pod \"kube-proxy-jl94n\" (UID: \"1b589470-1813-44cd-9307-eaa16988a3f4\") " pod="kube-system/kube-proxy-jl94n" Mar 17 20:23:01.440105 env[1292]: time="2025-03-17T20:23:01.439426989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jl94n,Uid:1b589470-1813-44cd-9307-eaa16988a3f4,Namespace:kube-system,Attempt:0,}" Mar 17 20:23:01.451447 env[1292]: time="2025-03-17T20:23:01.451367956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk7x9,Uid:4f4ab452-f321-4924-aa28-9e67455b0b09,Namespace:kube-system,Attempt:0,}" Mar 17 20:23:01.462976 env[1292]: time="2025-03-17T20:23:01.462894955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l5vkz,Uid:ad6a3a27-eff1-44f9-9000-0ff99f375262,Namespace:kube-system,Attempt:0,}" Mar 17 20:23:01.509064 env[1292]: time="2025-03-17T20:23:01.508560615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:23:01.509064 env[1292]: time="2025-03-17T20:23:01.508688296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:23:01.509064 env[1292]: time="2025-03-17T20:23:01.508724624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:23:01.509672 env[1292]: time="2025-03-17T20:23:01.509244873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9af405bc0dceb4589a3fa93f1fa24638c7193dfa1bab2c8952325b598eb24f8c pid=2243 runtime=io.containerd.runc.v2 Mar 17 20:23:01.560410 env[1292]: time="2025-03-17T20:23:01.559917711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:23:01.560410 env[1292]: time="2025-03-17T20:23:01.559960822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:23:01.560410 env[1292]: time="2025-03-17T20:23:01.559975349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:23:01.560410 env[1292]: time="2025-03-17T20:23:01.560156279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209 pid=2270 runtime=io.containerd.runc.v2 Mar 17 20:23:01.589906 env[1292]: time="2025-03-17T20:23:01.589811591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:23:01.589906 env[1292]: time="2025-03-17T20:23:01.589882183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:23:01.590132 env[1292]: time="2025-03-17T20:23:01.590094502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:23:01.590500 env[1292]: time="2025-03-17T20:23:01.590450662Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5 pid=2301 runtime=io.containerd.runc.v2 Mar 17 20:23:01.607710 env[1292]: time="2025-03-17T20:23:01.607653255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jl94n,Uid:1b589470-1813-44cd-9307-eaa16988a3f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af405bc0dceb4589a3fa93f1fa24638c7193dfa1bab2c8952325b598eb24f8c\"" Mar 17 20:23:01.613077 env[1292]: time="2025-03-17T20:23:01.613036238Z" level=info msg="CreateContainer within sandbox \"9af405bc0dceb4589a3fa93f1fa24638c7193dfa1bab2c8952325b598eb24f8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 20:23:01.648914 env[1292]: time="2025-03-17T20:23:01.648864519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bk7x9,Uid:4f4ab452-f321-4924-aa28-9e67455b0b09,Namespace:kube-system,Attempt:0,} returns sandbox id \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\"" Mar 17 20:23:01.652972 env[1292]: time="2025-03-17T20:23:01.652931808Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 20:23:01.670834 env[1292]: time="2025-03-17T20:23:01.670794251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l5vkz,Uid:ad6a3a27-eff1-44f9-9000-0ff99f375262,Namespace:kube-system,Attempt:0,} returns sandbox id \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\"" Mar 17 20:23:01.917056 env[1292]: time="2025-03-17T20:23:01.916980492Z" level=info msg="CreateContainer within sandbox \"9af405bc0dceb4589a3fa93f1fa24638c7193dfa1bab2c8952325b598eb24f8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns 
container id \"b3861cfeb34b3e9ba36a52e349c9aba36cf1a0b108a958f43e415c7d620efd4a\"" Mar 17 20:23:01.918599 env[1292]: time="2025-03-17T20:23:01.918549593Z" level=info msg="StartContainer for \"b3861cfeb34b3e9ba36a52e349c9aba36cf1a0b108a958f43e415c7d620efd4a\"" Mar 17 20:23:02.743533 env[1292]: time="2025-03-17T20:23:02.743375505Z" level=info msg="StartContainer for \"b3861cfeb34b3e9ba36a52e349c9aba36cf1a0b108a958f43e415c7d620efd4a\" returns successfully" Mar 17 20:23:10.745028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2372614462.mount: Deactivated successfully. Mar 17 20:23:15.254534 env[1292]: time="2025-03-17T20:23:15.254334351Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:15.257656 env[1292]: time="2025-03-17T20:23:15.257605143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:15.260126 env[1292]: time="2025-03-17T20:23:15.260072335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:15.262168 env[1292]: time="2025-03-17T20:23:15.262107365Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 17 20:23:15.268511 env[1292]: time="2025-03-17T20:23:15.266673588Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 
20:23:15.270630 env[1292]: time="2025-03-17T20:23:15.270575776Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:23:15.302435 env[1292]: time="2025-03-17T20:23:15.301502192Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\"" Mar 17 20:23:15.301792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount795422749.mount: Deactivated successfully. Mar 17 20:23:15.306228 env[1292]: time="2025-03-17T20:23:15.303889976Z" level=info msg="StartContainer for \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\"" Mar 17 20:23:15.389590 env[1292]: time="2025-03-17T20:23:15.389246991Z" level=info msg="StartContainer for \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\" returns successfully" Mar 17 20:23:15.914551 kubelet[2157]: I0317 20:23:15.914355 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jl94n" podStartSLOduration=14.914240001 podStartE2EDuration="14.914240001s" podCreationTimestamp="2025-03-17 20:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:23:02.788274039 +0000 UTC m=+15.964653440" watchObservedRunningTime="2025-03-17 20:23:15.914240001 +0000 UTC m=+29.090619373" Mar 17 20:23:16.261301 env[1292]: time="2025-03-17T20:23:16.260175612Z" level=info msg="shim disconnected" id=2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a Mar 17 20:23:16.261301 env[1292]: time="2025-03-17T20:23:16.260309433Z" level=warning msg="cleaning up after shim disconnected" 
id=2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a namespace=k8s.io Mar 17 20:23:16.261301 env[1292]: time="2025-03-17T20:23:16.260352734Z" level=info msg="cleaning up dead shim" Mar 17 20:23:16.294590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a-rootfs.mount: Deactivated successfully. Mar 17 20:23:16.304862 env[1292]: time="2025-03-17T20:23:16.304757923Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:23:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T20:23:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Mar 17 20:23:16.824297 env[1292]: time="2025-03-17T20:23:16.823851959Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:23:16.889222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019364856.mount: Deactivated successfully. Mar 17 20:23:16.904285 env[1292]: time="2025-03-17T20:23:16.904039238Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\"" Mar 17 20:23:16.906621 env[1292]: time="2025-03-17T20:23:16.906561764Z" level=info msg="StartContainer for \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\"" Mar 17 20:23:16.968597 env[1292]: time="2025-03-17T20:23:16.968431752Z" level=info msg="StartContainer for \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\" returns successfully" Mar 17 20:23:16.974000 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Mar 17 20:23:16.974289 systemd[1]: Stopped systemd-sysctl.service. Mar 17 20:23:16.974724 systemd[1]: Stopping systemd-sysctl.service... Mar 17 20:23:16.978536 systemd[1]: Starting systemd-sysctl.service... Mar 17 20:23:16.986249 systemd[1]: Finished systemd-sysctl.service. Mar 17 20:23:17.009938 env[1292]: time="2025-03-17T20:23:17.009892271Z" level=info msg="shim disconnected" id=4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc Mar 17 20:23:17.010176 env[1292]: time="2025-03-17T20:23:17.010158300Z" level=warning msg="cleaning up after shim disconnected" id=4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc namespace=k8s.io Mar 17 20:23:17.010252 env[1292]: time="2025-03-17T20:23:17.010237990Z" level=info msg="cleaning up dead shim" Mar 17 20:23:17.018044 env[1292]: time="2025-03-17T20:23:17.018003428Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:23:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2634 runtime=io.containerd.runc.v2\n" Mar 17 20:23:17.833705 env[1292]: time="2025-03-17T20:23:17.833669977Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:23:17.864296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535193691.mount: Deactivated successfully. Mar 17 20:23:17.897623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620720528.mount: Deactivated successfully. 
Mar 17 20:23:17.989138 env[1292]: time="2025-03-17T20:23:17.989065607Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\"" Mar 17 20:23:17.992141 env[1292]: time="2025-03-17T20:23:17.990670891Z" level=info msg="StartContainer for \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\"" Mar 17 20:23:18.101016 env[1292]: time="2025-03-17T20:23:18.100918000Z" level=info msg="StartContainer for \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\" returns successfully" Mar 17 20:23:18.130418 env[1292]: time="2025-03-17T20:23:18.130355219Z" level=info msg="shim disconnected" id=d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f Mar 17 20:23:18.130591 env[1292]: time="2025-03-17T20:23:18.130423417Z" level=warning msg="cleaning up after shim disconnected" id=d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f namespace=k8s.io Mar 17 20:23:18.130591 env[1292]: time="2025-03-17T20:23:18.130436792Z" level=info msg="cleaning up dead shim" Mar 17 20:23:18.147126 env[1292]: time="2025-03-17T20:23:18.147081388Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:23:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2696 runtime=io.containerd.runc.v2\n" Mar 17 20:23:18.834244 env[1292]: time="2025-03-17T20:23:18.834188129Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 20:23:18.851532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649442536.mount: Deactivated successfully. Mar 17 20:23:18.860758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000463145.mount: Deactivated successfully. 
Mar 17 20:23:18.869852 env[1292]: time="2025-03-17T20:23:18.869805611Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\"" Mar 17 20:23:18.871858 env[1292]: time="2025-03-17T20:23:18.870705049Z" level=info msg="StartContainer for \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\"" Mar 17 20:23:18.960069 env[1292]: time="2025-03-17T20:23:18.960021206Z" level=info msg="StartContainer for \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\" returns successfully" Mar 17 20:23:19.175217 env[1292]: time="2025-03-17T20:23:19.175173385Z" level=info msg="shim disconnected" id=8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac Mar 17 20:23:19.175506 env[1292]: time="2025-03-17T20:23:19.175480431Z" level=warning msg="cleaning up after shim disconnected" id=8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac namespace=k8s.io Mar 17 20:23:19.175637 env[1292]: time="2025-03-17T20:23:19.175621335Z" level=info msg="cleaning up dead shim" Mar 17 20:23:19.203337 env[1292]: time="2025-03-17T20:23:19.203288971Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:23:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2752 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T20:23:19Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Mar 17 20:23:19.372467 env[1292]: time="2025-03-17T20:23:19.372377078Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:19.375929 env[1292]: 
time="2025-03-17T20:23:19.375871888Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:19.378699 env[1292]: time="2025-03-17T20:23:19.378651185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 20:23:19.379385 env[1292]: time="2025-03-17T20:23:19.379328867Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 17 20:23:19.387157 env[1292]: time="2025-03-17T20:23:19.387097760Z" level=info msg="CreateContainer within sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 20:23:19.402016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346579018.mount: Deactivated successfully. Mar 17 20:23:19.412091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249247939.mount: Deactivated successfully. 
Mar 17 20:23:19.421156 env[1292]: time="2025-03-17T20:23:19.421082414Z" level=info msg="CreateContainer within sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\"" Mar 17 20:23:19.426902 env[1292]: time="2025-03-17T20:23:19.426690170Z" level=info msg="StartContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\"" Mar 17 20:23:19.538441 env[1292]: time="2025-03-17T20:23:19.536133558Z" level=info msg="StartContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" returns successfully" Mar 17 20:23:19.838258 env[1292]: time="2025-03-17T20:23:19.831650435Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 20:23:19.865567 env[1292]: time="2025-03-17T20:23:19.865477834Z" level=info msg="CreateContainer within sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\"" Mar 17 20:23:19.867004 env[1292]: time="2025-03-17T20:23:19.866960206Z" level=info msg="StartContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\"" Mar 17 20:23:20.011931 env[1292]: time="2025-03-17T20:23:20.011885947Z" level=info msg="StartContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" returns successfully" Mar 17 20:23:20.437449 kubelet[2157]: I0317 20:23:20.437371 2157 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 20:23:20.581646 kubelet[2157]: I0317 20:23:20.581564 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l5vkz" 
podStartSLOduration=1.871959859 podStartE2EDuration="19.581513584s" podCreationTimestamp="2025-03-17 20:23:01 +0000 UTC" firstStartedPulling="2025-03-17 20:23:01.672147085 +0000 UTC m=+14.848526416" lastFinishedPulling="2025-03-17 20:23:19.381700759 +0000 UTC m=+32.558080141" observedRunningTime="2025-03-17 20:23:19.917932678 +0000 UTC m=+33.094312019" watchObservedRunningTime="2025-03-17 20:23:20.581513584 +0000 UTC m=+33.757892916" Mar 17 20:23:20.582243 kubelet[2157]: I0317 20:23:20.582214 2157 topology_manager.go:215] "Topology Admit Handler" podUID="b5278a8d-ad3b-4b22-a9bd-7ac955885432" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7l7xq" Mar 17 20:23:20.586130 kubelet[2157]: I0317 20:23:20.586103 2157 topology_manager.go:215] "Topology Admit Handler" podUID="8f33f76c-9191-4960-91ce-3c0493ef877d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hfz8" Mar 17 20:23:20.755174 kubelet[2157]: I0317 20:23:20.755083 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t6nz\" (UniqueName: \"kubernetes.io/projected/8f33f76c-9191-4960-91ce-3c0493ef877d-kube-api-access-8t6nz\") pod \"coredns-7db6d8ff4d-9hfz8\" (UID: \"8f33f76c-9191-4960-91ce-3c0493ef877d\") " pod="kube-system/coredns-7db6d8ff4d-9hfz8" Mar 17 20:23:20.755342 kubelet[2157]: I0317 20:23:20.755326 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5278a8d-ad3b-4b22-a9bd-7ac955885432-config-volume\") pod \"coredns-7db6d8ff4d-7l7xq\" (UID: \"b5278a8d-ad3b-4b22-a9bd-7ac955885432\") " pod="kube-system/coredns-7db6d8ff4d-7l7xq" Mar 17 20:23:20.755452 kubelet[2157]: I0317 20:23:20.755434 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f33f76c-9191-4960-91ce-3c0493ef877d-config-volume\") pod \"coredns-7db6d8ff4d-9hfz8\" 
(UID: \"8f33f76c-9191-4960-91ce-3c0493ef877d\") " pod="kube-system/coredns-7db6d8ff4d-9hfz8" Mar 17 20:23:20.755635 kubelet[2157]: I0317 20:23:20.755608 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rklrk\" (UniqueName: \"kubernetes.io/projected/b5278a8d-ad3b-4b22-a9bd-7ac955885432-kube-api-access-rklrk\") pod \"coredns-7db6d8ff4d-7l7xq\" (UID: \"b5278a8d-ad3b-4b22-a9bd-7ac955885432\") " pod="kube-system/coredns-7db6d8ff4d-7l7xq" Mar 17 20:23:20.888167 kubelet[2157]: I0317 20:23:20.888115 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bk7x9" podStartSLOduration=6.276008404 podStartE2EDuration="19.888097275s" podCreationTimestamp="2025-03-17 20:23:01 +0000 UTC" firstStartedPulling="2025-03-17 20:23:01.652272327 +0000 UTC m=+14.828651658" lastFinishedPulling="2025-03-17 20:23:15.264361147 +0000 UTC m=+28.440740529" observedRunningTime="2025-03-17 20:23:20.887826187 +0000 UTC m=+34.064205528" watchObservedRunningTime="2025-03-17 20:23:20.888097275 +0000 UTC m=+34.064476606" Mar 17 20:23:20.889053 env[1292]: time="2025-03-17T20:23:20.889012504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hfz8,Uid:8f33f76c-9191-4960-91ce-3c0493ef877d,Namespace:kube-system,Attempt:0,}" Mar 17 20:23:21.186934 env[1292]: time="2025-03-17T20:23:21.186862103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7l7xq,Uid:b5278a8d-ad3b-4b22-a9bd-7ac955885432,Namespace:kube-system,Attempt:0,}" Mar 17 20:23:23.447708 systemd-networkd[1038]: cilium_host: Link UP Mar 17 20:23:23.448025 systemd-networkd[1038]: cilium_net: Link UP Mar 17 20:23:23.448035 systemd-networkd[1038]: cilium_net: Gained carrier Mar 17 20:23:23.448511 systemd-networkd[1038]: cilium_host: Gained carrier Mar 17 20:23:23.453517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 20:23:23.456912 systemd-networkd[1038]: 
cilium_host: Gained IPv6LL Mar 17 20:23:23.485186 systemd-networkd[1038]: cilium_net: Gained IPv6LL Mar 17 20:23:23.584508 systemd-networkd[1038]: cilium_vxlan: Link UP Mar 17 20:23:23.584516 systemd-networkd[1038]: cilium_vxlan: Gained carrier Mar 17 20:23:23.860467 kernel: NET: Registered PF_ALG protocol family Mar 17 20:23:24.575660 systemd-networkd[1038]: lxc_health: Link UP Mar 17 20:23:24.605845 systemd-networkd[1038]: lxc_health: Gained carrier Mar 17 20:23:24.606433 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 20:23:24.829887 systemd-networkd[1038]: lxce86a25f78f42: Link UP Mar 17 20:23:24.836431 kernel: eth0: renamed from tmpb83da Mar 17 20:23:24.862724 systemd-networkd[1038]: cilium_vxlan: Gained IPv6LL Mar 17 20:23:24.864507 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce86a25f78f42: link becomes ready Mar 17 20:23:24.867748 systemd-networkd[1038]: lxce86a25f78f42: Gained carrier Mar 17 20:23:24.986887 systemd-networkd[1038]: lxc42f9c186c156: Link UP Mar 17 20:23:25.004750 kernel: eth0: renamed from tmp3fbd6 Mar 17 20:23:25.008475 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc42f9c186c156: link becomes ready Mar 17 20:23:25.009558 systemd-networkd[1038]: lxc42f9c186c156: Gained carrier Mar 17 20:23:26.573780 systemd-networkd[1038]: lxc_health: Gained IPv6LL Mar 17 20:23:26.816938 systemd-networkd[1038]: lxce86a25f78f42: Gained IPv6LL Mar 17 20:23:27.008614 systemd-networkd[1038]: lxc42f9c186c156: Gained IPv6LL Mar 17 20:23:29.391945 env[1292]: time="2025-03-17T20:23:29.391882250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:23:29.392315 env[1292]: time="2025-03-17T20:23:29.391949656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:23:29.392315 env[1292]: time="2025-03-17T20:23:29.391978490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:23:29.392315 env[1292]: time="2025-03-17T20:23:29.392141907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fbd6d51ad4d4c01a8b98631359c34837ee023e217e4a2644c8ff7a53080fb4d pid=3328 runtime=io.containerd.runc.v2 Mar 17 20:23:29.438861 systemd[1]: run-containerd-runc-k8s.io-3fbd6d51ad4d4c01a8b98631359c34837ee023e217e4a2644c8ff7a53080fb4d-runc.Y1Hz4A.mount: Deactivated successfully. Mar 17 20:23:29.479507 env[1292]: time="2025-03-17T20:23:29.478837268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:23:29.479507 env[1292]: time="2025-03-17T20:23:29.478876141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:23:29.479507 env[1292]: time="2025-03-17T20:23:29.478888804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:23:29.479507 env[1292]: time="2025-03-17T20:23:29.479012506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b83daab6687dfdf1c95a61535722753783668b93b5528e5abccd75c5c58c77bc pid=3362 runtime=io.containerd.runc.v2 Mar 17 20:23:29.540615 env[1292]: time="2025-03-17T20:23:29.540564188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hfz8,Uid:8f33f76c-9191-4960-91ce-3c0493ef877d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fbd6d51ad4d4c01a8b98631359c34837ee023e217e4a2644c8ff7a53080fb4d\"" Mar 17 20:23:29.545664 env[1292]: time="2025-03-17T20:23:29.545614194Z" level=info msg="CreateContainer within sandbox \"3fbd6d51ad4d4c01a8b98631359c34837ee023e217e4a2644c8ff7a53080fb4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 20:23:29.580914 env[1292]: time="2025-03-17T20:23:29.580865748Z" level=info msg="CreateContainer within sandbox \"3fbd6d51ad4d4c01a8b98631359c34837ee023e217e4a2644c8ff7a53080fb4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b17f92d52d068216bbedbf399ebdb743a02cc427a63ceed2b9bac2d8a9486ea\"" Mar 17 20:23:29.583256 env[1292]: time="2025-03-17T20:23:29.583216789Z" level=info msg="StartContainer for \"5b17f92d52d068216bbedbf399ebdb743a02cc427a63ceed2b9bac2d8a9486ea\"" Mar 17 20:23:29.590188 env[1292]: time="2025-03-17T20:23:29.590141332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7l7xq,Uid:b5278a8d-ad3b-4b22-a9bd-7ac955885432,Namespace:kube-system,Attempt:0,} returns sandbox id \"b83daab6687dfdf1c95a61535722753783668b93b5528e5abccd75c5c58c77bc\"" Mar 17 20:23:29.595321 env[1292]: time="2025-03-17T20:23:29.595270817Z" level=info msg="CreateContainer within sandbox \"b83daab6687dfdf1c95a61535722753783668b93b5528e5abccd75c5c58c77bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 
20:23:29.618739 env[1292]: time="2025-03-17T20:23:29.618685110Z" level=info msg="CreateContainer within sandbox \"b83daab6687dfdf1c95a61535722753783668b93b5528e5abccd75c5c58c77bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f3061146c66d46d3fef9d56b6284ae7bed1d176c34da57a415b044aa9f4939f\"" Mar 17 20:23:29.620339 env[1292]: time="2025-03-17T20:23:29.620257230Z" level=info msg="StartContainer for \"4f3061146c66d46d3fef9d56b6284ae7bed1d176c34da57a415b044aa9f4939f\"" Mar 17 20:23:29.650362 env[1292]: time="2025-03-17T20:23:29.649140645Z" level=info msg="StartContainer for \"5b17f92d52d068216bbedbf399ebdb743a02cc427a63ceed2b9bac2d8a9486ea\" returns successfully" Mar 17 20:23:29.719281 env[1292]: time="2025-03-17T20:23:29.719232682Z" level=info msg="StartContainer for \"4f3061146c66d46d3fef9d56b6284ae7bed1d176c34da57a415b044aa9f4939f\" returns successfully" Mar 17 20:23:29.910870 kubelet[2157]: I0317 20:23:29.910527 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7l7xq" podStartSLOduration=28.910475069 podStartE2EDuration="28.910475069s" podCreationTimestamp="2025-03-17 20:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:23:29.908785819 +0000 UTC m=+43.085165170" watchObservedRunningTime="2025-03-17 20:23:29.910475069 +0000 UTC m=+43.086854400" Mar 17 20:23:29.948501 kubelet[2157]: I0317 20:23:29.948433 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hfz8" podStartSLOduration=28.948412192 podStartE2EDuration="28.948412192s" podCreationTimestamp="2025-03-17 20:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:23:29.947447903 +0000 UTC m=+43.123827234" watchObservedRunningTime="2025-03-17 20:23:29.948412192 +0000 
UTC m=+43.124791523" Mar 17 20:25:22.327224 systemd[1]: Started sshd@5-172.24.4.115:22-172.24.4.1:56390.service. Mar 17 20:25:23.882327 sshd[3494]: Accepted publickey for core from 172.24.4.1 port 56390 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:23.885700 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:23.894864 systemd-logind[1254]: New session 6 of user core. Mar 17 20:25:23.897521 systemd[1]: Started session-6.scope. Mar 17 20:25:24.656770 sshd[3494]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:24.662059 systemd[1]: sshd@5-172.24.4.115:22-172.24.4.1:56390.service: Deactivated successfully. Mar 17 20:25:24.664821 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 20:25:24.664928 systemd-logind[1254]: Session 6 logged out. Waiting for processes to exit. Mar 17 20:25:24.668126 systemd-logind[1254]: Removed session 6. Mar 17 20:25:29.665845 systemd[1]: Started sshd@6-172.24.4.115:22-172.24.4.1:39912.service. Mar 17 20:25:30.867115 sshd[3508]: Accepted publickey for core from 172.24.4.1 port 39912 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:30.868121 sshd[3508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:30.879465 systemd-logind[1254]: New session 7 of user core. Mar 17 20:25:30.880071 systemd[1]: Started session-7.scope. Mar 17 20:25:31.598084 sshd[3508]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:31.601536 systemd[1]: sshd@6-172.24.4.115:22-172.24.4.1:39912.service: Deactivated successfully. Mar 17 20:25:31.604154 systemd-logind[1254]: Session 7 logged out. Waiting for processes to exit. Mar 17 20:25:31.604457 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 20:25:31.607999 systemd-logind[1254]: Removed session 7. Mar 17 20:25:36.602834 systemd[1]: Started sshd@7-172.24.4.115:22-172.24.4.1:38814.service. 
Mar 17 20:25:37.727219 sshd[3524]: Accepted publickey for core from 172.24.4.1 port 38814 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:37.730392 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:37.742461 systemd[1]: Started session-8.scope. Mar 17 20:25:37.742799 systemd-logind[1254]: New session 8 of user core. Mar 17 20:25:38.541349 sshd[3524]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:38.545200 systemd[1]: sshd@7-172.24.4.115:22-172.24.4.1:38814.service: Deactivated successfully. Mar 17 20:25:38.546562 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 20:25:38.547144 systemd-logind[1254]: Session 8 logged out. Waiting for processes to exit. Mar 17 20:25:38.548100 systemd-logind[1254]: Removed session 8. Mar 17 20:25:43.548635 systemd[1]: Started sshd@8-172.24.4.115:22-172.24.4.1:34562.service. Mar 17 20:25:44.757385 sshd[3538]: Accepted publickey for core from 172.24.4.1 port 34562 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:44.760287 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:44.772095 systemd[1]: Started session-9.scope. Mar 17 20:25:44.772579 systemd-logind[1254]: New session 9 of user core. Mar 17 20:25:45.571891 sshd[3538]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:45.578787 systemd[1]: Started sshd@9-172.24.4.115:22-172.24.4.1:34576.service. Mar 17 20:25:45.583083 systemd[1]: sshd@8-172.24.4.115:22-172.24.4.1:34562.service: Deactivated successfully. Mar 17 20:25:45.585291 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 20:25:45.597610 systemd-logind[1254]: Session 9 logged out. Waiting for processes to exit. Mar 17 20:25:45.602229 systemd-logind[1254]: Removed session 9. 
Mar 17 20:25:46.959123 sshd[3550]: Accepted publickey for core from 172.24.4.1 port 34576 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:46.961511 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:46.969602 systemd-logind[1254]: New session 10 of user core. Mar 17 20:25:46.969603 systemd[1]: Started session-10.scope. Mar 17 20:25:47.870764 sshd[3550]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:47.876362 systemd[1]: Started sshd@10-172.24.4.115:22-172.24.4.1:34580.service. Mar 17 20:25:47.878267 systemd[1]: sshd@9-172.24.4.115:22-172.24.4.1:34576.service: Deactivated successfully. Mar 17 20:25:47.881723 systemd-logind[1254]: Session 10 logged out. Waiting for processes to exit. Mar 17 20:25:47.883109 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 20:25:47.887543 systemd-logind[1254]: Removed session 10. Mar 17 20:25:49.331207 sshd[3562]: Accepted publickey for core from 172.24.4.1 port 34580 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:49.333802 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:49.344159 systemd-logind[1254]: New session 11 of user core. Mar 17 20:25:49.345163 systemd[1]: Started session-11.scope. Mar 17 20:25:50.093544 sshd[3562]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:50.098815 systemd-logind[1254]: Session 11 logged out. Waiting for processes to exit. Mar 17 20:25:50.099242 systemd[1]: sshd@10-172.24.4.115:22-172.24.4.1:34580.service: Deactivated successfully. Mar 17 20:25:50.100932 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 20:25:50.102053 systemd-logind[1254]: Removed session 11. Mar 17 20:25:55.098994 systemd[1]: Started sshd@11-172.24.4.115:22-172.24.4.1:48790.service. 
Mar 17 20:25:56.433557 sshd[3577]: Accepted publickey for core from 172.24.4.1 port 48790 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:25:56.436377 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:25:56.447728 systemd[1]: Started session-12.scope. Mar 17 20:25:56.448157 systemd-logind[1254]: New session 12 of user core. Mar 17 20:25:57.206700 sshd[3577]: pam_unix(sshd:session): session closed for user core Mar 17 20:25:57.211964 systemd-logind[1254]: Session 12 logged out. Waiting for processes to exit. Mar 17 20:25:57.212386 systemd[1]: sshd@11-172.24.4.115:22-172.24.4.1:48790.service: Deactivated successfully. Mar 17 20:25:57.214718 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 20:25:57.216946 systemd-logind[1254]: Removed session 12. Mar 17 20:26:02.211890 systemd[1]: Started sshd@12-172.24.4.115:22-172.24.4.1:48804.service. Mar 17 20:26:03.571197 sshd[3592]: Accepted publickey for core from 172.24.4.1 port 48804 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:03.573898 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:03.583397 systemd-logind[1254]: New session 13 of user core. Mar 17 20:26:03.585138 systemd[1]: Started session-13.scope. Mar 17 20:26:04.346200 systemd[1]: Started sshd@13-172.24.4.115:22-172.24.4.1:40406.service. Mar 17 20:26:04.349543 sshd[3592]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:04.354914 systemd[1]: sshd@12-172.24.4.115:22-172.24.4.1:48804.service: Deactivated successfully. Mar 17 20:26:04.357029 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 20:26:04.358019 systemd-logind[1254]: Session 13 logged out. Waiting for processes to exit. Mar 17 20:26:04.359645 systemd-logind[1254]: Removed session 13. 
Mar 17 20:26:05.713268 sshd[3603]: Accepted publickey for core from 172.24.4.1 port 40406 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:05.716208 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:05.727792 systemd-logind[1254]: New session 14 of user core. Mar 17 20:26:05.728321 systemd[1]: Started session-14.scope. Mar 17 20:26:06.488507 sshd[3603]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:06.493462 systemd[1]: Started sshd@14-172.24.4.115:22-172.24.4.1:40420.service. Mar 17 20:26:06.496704 systemd[1]: sshd@13-172.24.4.115:22-172.24.4.1:40406.service: Deactivated successfully. Mar 17 20:26:06.499877 systemd-logind[1254]: Session 14 logged out. Waiting for processes to exit. Mar 17 20:26:06.501013 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 20:26:06.510203 systemd-logind[1254]: Removed session 14. Mar 17 20:26:07.659459 sshd[3613]: Accepted publickey for core from 172.24.4.1 port 40420 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:07.662138 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:07.672518 systemd-logind[1254]: New session 15 of user core. Mar 17 20:26:07.674156 systemd[1]: Started session-15.scope. Mar 17 20:26:10.280146 sshd[3613]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:10.286799 systemd[1]: Started sshd@15-172.24.4.115:22-172.24.4.1:40422.service. Mar 17 20:26:10.292124 systemd[1]: sshd@14-172.24.4.115:22-172.24.4.1:40420.service: Deactivated successfully. Mar 17 20:26:10.297027 systemd-logind[1254]: Session 15 logged out. Waiting for processes to exit. Mar 17 20:26:10.297155 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 20:26:10.306825 systemd-logind[1254]: Removed session 15. 
Mar 17 20:26:11.601743 sshd[3631]: Accepted publickey for core from 172.24.4.1 port 40422 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:11.604762 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:11.616911 systemd[1]: Started session-16.scope. Mar 17 20:26:11.617622 systemd-logind[1254]: New session 16 of user core. Mar 17 20:26:12.639774 sshd[3631]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:12.645042 systemd[1]: Started sshd@16-172.24.4.115:22-172.24.4.1:40438.service. Mar 17 20:26:12.651188 systemd[1]: sshd@15-172.24.4.115:22-172.24.4.1:40422.service: Deactivated successfully. Mar 17 20:26:12.657253 systemd-logind[1254]: Session 16 logged out. Waiting for processes to exit. Mar 17 20:26:12.657383 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 20:26:12.660696 systemd-logind[1254]: Removed session 16. Mar 17 20:26:13.827102 sshd[3641]: Accepted publickey for core from 172.24.4.1 port 40438 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:13.830677 sshd[3641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:13.840899 systemd-logind[1254]: New session 17 of user core. Mar 17 20:26:13.842166 systemd[1]: Started session-17.scope. Mar 17 20:26:14.601511 sshd[3641]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:14.605127 systemd-logind[1254]: Session 17 logged out. Waiting for processes to exit. Mar 17 20:26:14.606490 systemd[1]: sshd@16-172.24.4.115:22-172.24.4.1:40438.service: Deactivated successfully. Mar 17 20:26:14.607321 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 20:26:14.608786 systemd-logind[1254]: Removed session 17. Mar 17 20:26:19.609521 systemd[1]: Started sshd@17-172.24.4.115:22-172.24.4.1:42468.service. 
Mar 17 20:26:20.951006 sshd[3659]: Accepted publickey for core from 172.24.4.1 port 42468 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:20.953530 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:20.963887 systemd-logind[1254]: New session 18 of user core. Mar 17 20:26:20.965253 systemd[1]: Started session-18.scope. Mar 17 20:26:21.765186 sshd[3659]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:21.770688 systemd[1]: sshd@17-172.24.4.115:22-172.24.4.1:42468.service: Deactivated successfully. Mar 17 20:26:21.773261 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 20:26:21.773350 systemd-logind[1254]: Session 18 logged out. Waiting for processes to exit. Mar 17 20:26:21.776678 systemd-logind[1254]: Removed session 18. Mar 17 20:26:26.772793 systemd[1]: Started sshd@18-172.24.4.115:22-172.24.4.1:34132.service. Mar 17 20:26:27.964294 sshd[3672]: Accepted publickey for core from 172.24.4.1 port 34132 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:27.967754 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:27.983870 systemd-logind[1254]: New session 19 of user core. Mar 17 20:26:27.985705 systemd[1]: Started session-19.scope. Mar 17 20:26:28.526838 sshd[3672]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:28.533390 systemd[1]: sshd@18-172.24.4.115:22-172.24.4.1:34132.service: Deactivated successfully. Mar 17 20:26:28.537671 systemd-logind[1254]: Session 19 logged out. Waiting for processes to exit. Mar 17 20:26:28.539254 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 20:26:28.544069 systemd-logind[1254]: Removed session 19. Mar 17 20:26:33.533081 systemd[1]: Started sshd@19-172.24.4.115:22-172.24.4.1:49748.service. 
Mar 17 20:26:34.875156 sshd[3687]: Accepted publickey for core from 172.24.4.1 port 49748 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:34.878074 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:34.888820 systemd-logind[1254]: New session 20 of user core. Mar 17 20:26:34.890057 systemd[1]: Started session-20.scope. Mar 17 20:26:35.651280 sshd[3687]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:35.656208 systemd[1]: Started sshd@20-172.24.4.115:22-172.24.4.1:49762.service. Mar 17 20:26:35.663804 systemd[1]: sshd@19-172.24.4.115:22-172.24.4.1:49748.service: Deactivated successfully. Mar 17 20:26:35.669715 systemd-logind[1254]: Session 20 logged out. Waiting for processes to exit. Mar 17 20:26:35.669836 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 20:26:35.674064 systemd-logind[1254]: Removed session 20. Mar 17 20:26:36.878383 sshd[3698]: Accepted publickey for core from 172.24.4.1 port 49762 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:36.882130 sshd[3698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:36.892150 systemd-logind[1254]: New session 21 of user core. Mar 17 20:26:36.892991 systemd[1]: Started session-21.scope. Mar 17 20:26:39.547481 env[1292]: time="2025-03-17T20:26:39.547444229Z" level=info msg="StopContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" with timeout 30 (s)" Mar 17 20:26:39.552831 env[1292]: time="2025-03-17T20:26:39.552788131Z" level=info msg="Stop container \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" with signal terminated" Mar 17 20:26:39.557032 systemd[1]: run-containerd-runc-k8s.io-e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9-runc.Xfpp4O.mount: Deactivated successfully. 
Mar 17 20:26:39.581322 env[1292]: time="2025-03-17T20:26:39.581268689Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 20:26:39.587083 env[1292]: time="2025-03-17T20:26:39.587033754Z" level=info msg="StopContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" with timeout 2 (s)" Mar 17 20:26:39.587507 env[1292]: time="2025-03-17T20:26:39.587384938Z" level=info msg="Stop container \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" with signal terminated" Mar 17 20:26:39.595744 systemd-networkd[1038]: lxc_health: Link DOWN Mar 17 20:26:39.595753 systemd-networkd[1038]: lxc_health: Lost carrier Mar 17 20:26:39.598762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce-rootfs.mount: Deactivated successfully. 
Mar 17 20:26:39.627770 env[1292]: time="2025-03-17T20:26:39.626670748Z" level=info msg="shim disconnected" id=1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce Mar 17 20:26:39.627770 env[1292]: time="2025-03-17T20:26:39.626736209Z" level=warning msg="cleaning up after shim disconnected" id=1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce namespace=k8s.io Mar 17 20:26:39.627770 env[1292]: time="2025-03-17T20:26:39.626750166Z" level=info msg="cleaning up dead shim" Mar 17 20:26:39.646701 env[1292]: time="2025-03-17T20:26:39.646658299Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3757 runtime=io.containerd.runc.v2\n" Mar 17 20:26:39.651975 env[1292]: time="2025-03-17T20:26:39.651778003Z" level=info msg="StopContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" returns successfully" Mar 17 20:26:39.653474 env[1292]: time="2025-03-17T20:26:39.653430560Z" level=info msg="StopPodSandbox for \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\"" Mar 17 20:26:39.653609 env[1292]: time="2025-03-17T20:26:39.653515068Z" level=info msg="Container to stop \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:26:39.657730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209-shm.mount: Deactivated successfully. 
Mar 17 20:26:39.670286 env[1292]: time="2025-03-17T20:26:39.670229604Z" level=info msg="shim disconnected" id=e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9
Mar 17 20:26:39.670598 env[1292]: time="2025-03-17T20:26:39.670565450Z" level=warning msg="cleaning up after shim disconnected" id=e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9 namespace=k8s.io
Mar 17 20:26:39.670693 env[1292]: time="2025-03-17T20:26:39.670676496Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:39.684242 env[1292]: time="2025-03-17T20:26:39.684187117Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3792 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:39.702277 env[1292]: time="2025-03-17T20:26:39.702238363Z" level=info msg="StopContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" returns successfully"
Mar 17 20:26:39.702859 env[1292]: time="2025-03-17T20:26:39.702836416Z" level=info msg="StopPodSandbox for \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\""
Mar 17 20:26:39.703028 env[1292]: time="2025-03-17T20:26:39.702997456Z" level=info msg="Container to stop \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:26:39.703137 env[1292]: time="2025-03-17T20:26:39.703117479Z" level=info msg="Container to stop \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:26:39.703239 env[1292]: time="2025-03-17T20:26:39.703219189Z" level=info msg="Container to stop \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:26:39.703333 env[1292]: time="2025-03-17T20:26:39.703314736Z" level=info msg="Container to stop \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:26:39.703516 env[1292]: time="2025-03-17T20:26:39.703484132Z" level=info msg="Container to stop \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 20:26:39.704611 env[1292]: time="2025-03-17T20:26:39.704580795Z" level=info msg="shim disconnected" id=88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209
Mar 17 20:26:39.705284 env[1292]: time="2025-03-17T20:26:39.705264318Z" level=warning msg="cleaning up after shim disconnected" id=88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209 namespace=k8s.io
Mar 17 20:26:39.705434 env[1292]: time="2025-03-17T20:26:39.705387246Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:39.721717 env[1292]: time="2025-03-17T20:26:39.721681750Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3820 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:39.722208 env[1292]: time="2025-03-17T20:26:39.722182904Z" level=info msg="TearDown network for sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" successfully"
Mar 17 20:26:39.722306 env[1292]: time="2025-03-17T20:26:39.722287067Z" level=info msg="StopPodSandbox for \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" returns successfully"
Mar 17 20:26:39.769781 env[1292]: time="2025-03-17T20:26:39.769735508Z" level=info msg="shim disconnected" id=85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5
Mar 17 20:26:39.770130 env[1292]: time="2025-03-17T20:26:39.770109905Z" level=warning msg="cleaning up after shim disconnected" id=85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5 namespace=k8s.io
Mar 17 20:26:39.770203 env[1292]: time="2025-03-17T20:26:39.770188462Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:39.777470 env[1292]: time="2025-03-17T20:26:39.777437981Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3853 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:39.777931 env[1292]: time="2025-03-17T20:26:39.777908027Z" level=info msg="TearDown network for sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" successfully"
Mar 17 20:26:39.778018 env[1292]: time="2025-03-17T20:26:39.777999167Z" level=info msg="StopPodSandbox for \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" returns successfully"
Mar 17 20:26:39.869872 kubelet[2157]: I0317 20:26:39.866766 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdbf\" (UniqueName: \"kubernetes.io/projected/ad6a3a27-eff1-44f9-9000-0ff99f375262-kube-api-access-xwdbf\") pod \"ad6a3a27-eff1-44f9-9000-0ff99f375262\" (UID: \"ad6a3a27-eff1-44f9-9000-0ff99f375262\") "
Mar 17 20:26:39.869872 kubelet[2157]: I0317 20:26:39.866841 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad6a3a27-eff1-44f9-9000-0ff99f375262-cilium-config-path\") pod \"ad6a3a27-eff1-44f9-9000-0ff99f375262\" (UID: \"ad6a3a27-eff1-44f9-9000-0ff99f375262\") "
Mar 17 20:26:39.869872 kubelet[2157]: I0317 20:26:39.869263 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad6a3a27-eff1-44f9-9000-0ff99f375262-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad6a3a27-eff1-44f9-9000-0ff99f375262" (UID: "ad6a3a27-eff1-44f9-9000-0ff99f375262"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 20:26:39.872613 kubelet[2157]: I0317 20:26:39.872488 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6a3a27-eff1-44f9-9000-0ff99f375262-kube-api-access-xwdbf" (OuterVolumeSpecName: "kube-api-access-xwdbf") pod "ad6a3a27-eff1-44f9-9000-0ff99f375262" (UID: "ad6a3a27-eff1-44f9-9000-0ff99f375262"). InnerVolumeSpecName "kube-api-access-xwdbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 20:26:39.967890 kubelet[2157]: I0317 20:26:39.967819 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-xtables-lock\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.968352 kubelet[2157]: I0317 20:26:39.968272 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-kernel\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.968714 kubelet[2157]: I0317 20:26:39.968678 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-bpf-maps\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.968979 kubelet[2157]: I0317 20:26:39.968946 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-run\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.969207 kubelet[2157]: I0317 20:26:39.968509 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.969207 kubelet[2157]: I0317 20:26:39.968551 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.968724 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.968999 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.969162 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-config-path\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.969265 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-cgroup\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.969293 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjdvt\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-kube-api-access-rjdvt\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.969450 kubelet[2157]: I0317 20:26:39.969313 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cni-path\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969330 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-lib-modules\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969354 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-hubble-tls\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969372 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-etc-cni-netd\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969394 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4ab452-f321-4924-aa28-9e67455b0b09-clustermesh-secrets\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969429 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-hostproc\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970031 kubelet[2157]: I0317 20:26:39.969450 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-net\") pod \"4f4ab452-f321-4924-aa28-9e67455b0b09\" (UID: \"4f4ab452-f321-4924-aa28-9e67455b0b09\") "
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969502 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad6a3a27-eff1-44f9-9000-0ff99f375262-cilium-config-path\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969517 2157 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-kernel\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969528 2157 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-xtables-lock\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969539 2157 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-bpf-maps\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969550 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-run\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969561 2157 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xwdbf\" (UniqueName: \"kubernetes.io/projected/ad6a3a27-eff1-44f9-9000-0ff99f375262-kube-api-access-xwdbf\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:39.970664 kubelet[2157]: I0317 20:26:39.969583 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.971136 kubelet[2157]: I0317 20:26:39.969603 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.972080 kubelet[2157]: I0317 20:26:39.972034 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-kube-api-access-rjdvt" (OuterVolumeSpecName: "kube-api-access-rjdvt") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "kube-api-access-rjdvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 20:26:39.972080 kubelet[2157]: I0317 20:26:39.972081 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cni-path" (OuterVolumeSpecName: "cni-path") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.972320 kubelet[2157]: I0317 20:26:39.972102 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.974086 kubelet[2157]: I0317 20:26:39.974044 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 20:26:39.974086 kubelet[2157]: I0317 20:26:39.974082 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.976257 kubelet[2157]: I0317 20:26:39.976216 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4ab452-f321-4924-aa28-9e67455b0b09-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 20:26:39.976395 kubelet[2157]: I0317 20:26:39.976277 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-hostproc" (OuterVolumeSpecName: "hostproc") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:39.978497 kubelet[2157]: I0317 20:26:39.978391 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4f4ab452-f321-4924-aa28-9e67455b0b09" (UID: "4f4ab452-f321-4924-aa28-9e67455b0b09"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 20:26:40.069893 kubelet[2157]: I0317 20:26:40.069787 2157 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-host-proc-sys-net\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069923 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-config-path\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069951 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cilium-cgroup\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069965 2157 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rjdvt\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-kube-api-access-rjdvt\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069977 2157 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-lib-modules\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069988 2157 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4f4ab452-f321-4924-aa28-9e67455b0b09-hubble-tls\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.069999 2157 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-etc-cni-netd\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.070189 kubelet[2157]: I0317 20:26:40.070010 2157 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4f4ab452-f321-4924-aa28-9e67455b0b09-clustermesh-secrets\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.071035 kubelet[2157]: I0317 20:26:40.070021 2157 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-hostproc\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.071035 kubelet[2157]: I0317 20:26:40.070030 2157 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4f4ab452-f321-4924-aa28-9e67455b0b09-cni-path\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:40.546197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9-rootfs.mount: Deactivated successfully.
Mar 17 20:26:40.546587 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5-rootfs.mount: Deactivated successfully.
Mar 17 20:26:40.546912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5-shm.mount: Deactivated successfully.
Mar 17 20:26:40.547164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209-rootfs.mount: Deactivated successfully.
Mar 17 20:26:40.547441 systemd[1]: var-lib-kubelet-pods-ad6a3a27\x2deff1\x2d44f9\x2d9000\x2d0ff99f375262-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwdbf.mount: Deactivated successfully.
Mar 17 20:26:40.547691 systemd[1]: var-lib-kubelet-pods-4f4ab452\x2df321\x2d4924\x2daa28\x2d9e67455b0b09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drjdvt.mount: Deactivated successfully.
Mar 17 20:26:40.547939 systemd[1]: var-lib-kubelet-pods-4f4ab452\x2df321\x2d4924\x2daa28\x2d9e67455b0b09-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:26:40.548174 systemd[1]: var-lib-kubelet-pods-4f4ab452\x2df321\x2d4924\x2daa28\x2d9e67455b0b09-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 20:26:40.564584 kubelet[2157]: I0317 20:26:40.564517 2157 scope.go:117] "RemoveContainer" containerID="e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9"
Mar 17 20:26:40.571538 env[1292]: time="2025-03-17T20:26:40.571459040Z" level=info msg="RemoveContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\""
Mar 17 20:26:40.584043 env[1292]: time="2025-03-17T20:26:40.583948499Z" level=info msg="RemoveContainer for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" returns successfully"
Mar 17 20:26:40.586243 kubelet[2157]: I0317 20:26:40.586156 2157 scope.go:117] "RemoveContainer" containerID="8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac"
Mar 17 20:26:40.590459 env[1292]: time="2025-03-17T20:26:40.590367101Z" level=info msg="RemoveContainer for \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\""
Mar 17 20:26:40.611445 env[1292]: time="2025-03-17T20:26:40.599639859Z" level=info msg="RemoveContainer for \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\" returns successfully"
Mar 17 20:26:40.611445 env[1292]: time="2025-03-17T20:26:40.602128874Z" level=info msg="RemoveContainer for \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\""
Mar 17 20:26:40.611445 env[1292]: time="2025-03-17T20:26:40.608310927Z" level=info msg="RemoveContainer for \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\" returns successfully"
Mar 17 20:26:40.611445 env[1292]: time="2025-03-17T20:26:40.610085932Z" level=info msg="RemoveContainer for \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\""
Mar 17 20:26:40.611658 kubelet[2157]: I0317 20:26:40.599981 2157 scope.go:117] "RemoveContainer" containerID="d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f"
Mar 17 20:26:40.611658 kubelet[2157]: I0317 20:26:40.608626 2157 scope.go:117] "RemoveContainer" containerID="4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc"
Mar 17 20:26:40.614082 env[1292]: time="2025-03-17T20:26:40.614055946Z" level=info msg="RemoveContainer for \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\" returns successfully"
Mar 17 20:26:40.614321 kubelet[2157]: I0317 20:26:40.614305 2157 scope.go:117] "RemoveContainer" containerID="2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a"
Mar 17 20:26:40.615804 env[1292]: time="2025-03-17T20:26:40.615781910Z" level=info msg="RemoveContainer for \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\""
Mar 17 20:26:40.627927 env[1292]: time="2025-03-17T20:26:40.627798778Z" level=info msg="RemoveContainer for \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\" returns successfully"
Mar 17 20:26:40.628639 kubelet[2157]: I0317 20:26:40.628619 2157 scope.go:117] "RemoveContainer" containerID="e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9"
Mar 17 20:26:40.629185 env[1292]: time="2025-03-17T20:26:40.629096835Z" level=error msg="ContainerStatus for \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\": not found"
Mar 17 20:26:40.629475 kubelet[2157]: E0317 20:26:40.629448 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\": not found" containerID="e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9"
Mar 17 20:26:40.629736 kubelet[2157]: I0317 20:26:40.629591 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9"} err="failed to get container status \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3435d365d895843c32299d2adb2baa4bfc66c77dc22587c053ede82095889e9\": not found"
Mar 17 20:26:40.629944 kubelet[2157]: I0317 20:26:40.629901 2157 scope.go:117] "RemoveContainer" containerID="8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac"
Mar 17 20:26:40.630279 env[1292]: time="2025-03-17T20:26:40.630204608Z" level=error msg="ContainerStatus for \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\": not found"
Mar 17 20:26:40.631643 kubelet[2157]: E0317 20:26:40.631068 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\": not found" containerID="8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac"
Mar 17 20:26:40.631643 kubelet[2157]: I0317 20:26:40.631130 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac"} err="failed to get container status \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f41bcb318c4f4ce8405dde9d915736bfba4d7bffdd8346febd3c64efa0dc4ac\": not found"
Mar 17 20:26:40.631643 kubelet[2157]: I0317 20:26:40.631172 2157 scope.go:117] "RemoveContainer" containerID="d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f"
Mar 17 20:26:40.633519 env[1292]: time="2025-03-17T20:26:40.632220402Z" level=error msg="ContainerStatus for \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\": not found"
Mar 17 20:26:40.633965 kubelet[2157]: E0317 20:26:40.633909 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\": not found" containerID="d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f"
Mar 17 20:26:40.634032 kubelet[2157]: I0317 20:26:40.633967 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f"} err="failed to get container status \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2607b9b083d1c7f927e647b85227897698f80ac44ac1cad1c75021bdaf8697f\": not found"
Mar 17 20:26:40.634032 kubelet[2157]: I0317 20:26:40.634005 2157 scope.go:117] "RemoveContainer" containerID="4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc"
Mar 17 20:26:40.634350 env[1292]: time="2025-03-17T20:26:40.634255311Z" level=error msg="ContainerStatus for \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\": not found"
Mar 17 20:26:40.634571 kubelet[2157]: E0317 20:26:40.634530 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\": not found" containerID="4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc"
Mar 17 20:26:40.634639 kubelet[2157]: I0317 20:26:40.634584 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc"} err="failed to get container status \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4852406c40b78af1b9a20690e15f0dcdb3dbc8d912feb0807a6bb4157b4141fc\": not found"
Mar 17 20:26:40.634639 kubelet[2157]: I0317 20:26:40.634619 2157 scope.go:117] "RemoveContainer" containerID="2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a"
Mar 17 20:26:40.634916 env[1292]: time="2025-03-17T20:26:40.634821125Z" level=error msg="ContainerStatus for \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\": not found"
Mar 17 20:26:40.635227 kubelet[2157]: E0317 20:26:40.635185 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\": not found" containerID="2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a"
Mar 17 20:26:40.635280 kubelet[2157]: I0317 20:26:40.635242 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a"} err="failed to get container status \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\": rpc error: code = NotFound desc = an error occurred when try to find container \"2148a914f48e4814f1ef89a525c8d6d1f6ac1177cccbd51c9e3756b2e8e0925a\": not found"
Mar 17 20:26:40.635330 kubelet[2157]: I0317 20:26:40.635279 2157 scope.go:117] "RemoveContainer" containerID="1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce"
Mar 17 20:26:40.637213 env[1292]: time="2025-03-17T20:26:40.637189475Z" level=info msg="RemoveContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\""
Mar 17 20:26:40.642989 env[1292]: time="2025-03-17T20:26:40.642952648Z" level=info msg="RemoveContainer for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" returns successfully"
Mar 17 20:26:40.643454 kubelet[2157]: I0317 20:26:40.643341 2157 scope.go:117] "RemoveContainer" containerID="1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce"
Mar 17 20:26:40.643849 env[1292]: time="2025-03-17T20:26:40.643795739Z" level=error msg="ContainerStatus for \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\": not found"
Mar 17 20:26:40.644065 kubelet[2157]: E0317 20:26:40.644034 2157 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\": not found" containerID="1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce"
Mar 17 20:26:40.644173 kubelet[2157]: I0317 20:26:40.644149 2157 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce"} err="failed to get container status \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d433df3ffded03215e71ebd615d3442317051369b23f0020e24e9ef70cdf4ce\": not found"
Mar 17 20:26:41.113707 kubelet[2157]: I0317 20:26:41.112618 2157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" path="/var/lib/kubelet/pods/4f4ab452-f321-4924-aa28-9e67455b0b09/volumes"
Mar 17 20:26:41.113707 kubelet[2157]: I0317 20:26:41.113326 2157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6a3a27-eff1-44f9-9000-0ff99f375262" path="/var/lib/kubelet/pods/ad6a3a27-eff1-44f9-9000-0ff99f375262/volumes"
Mar 17 20:26:41.665589 sshd[3698]: pam_unix(sshd:session): session closed for user core
Mar 17 20:26:41.669766 systemd[1]: Started sshd@21-172.24.4.115:22-172.24.4.1:49772.service.
Mar 17 20:26:41.677989 systemd[1]: sshd@20-172.24.4.115:22-172.24.4.1:49762.service: Deactivated successfully.
Mar 17 20:26:41.681895 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 20:26:41.681962 systemd-logind[1254]: Session 21 logged out. Waiting for processes to exit.
Mar 17 20:26:41.694890 systemd-logind[1254]: Removed session 21.
Mar 17 20:26:42.276011 kubelet[2157]: E0317 20:26:42.275868 2157 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 20:26:43.064794 sshd[3871]: Accepted publickey for core from 172.24.4.1 port 49772 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY
Mar 17 20:26:43.067523 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:26:43.077537 systemd-logind[1254]: New session 22 of user core.
Mar 17 20:26:43.078882 systemd[1]: Started session-22.scope.
Mar 17 20:26:44.269181 kubelet[2157]: I0317 20:26:44.269053 2157 topology_manager.go:215] "Topology Admit Handler" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" podNamespace="kube-system" podName="cilium-fxcnr" Mar 17 20:26:44.269181 kubelet[2157]: E0317 20:26:44.269142 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="mount-cgroup" Mar 17 20:26:44.269181 kubelet[2157]: E0317 20:26:44.269155 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="cilium-agent" Mar 17 20:26:44.269181 kubelet[2157]: E0317 20:26:44.269162 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ad6a3a27-eff1-44f9-9000-0ff99f375262" containerName="cilium-operator" Mar 17 20:26:44.283648 kubelet[2157]: E0317 20:26:44.269446 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="apply-sysctl-overwrites" Mar 17 20:26:44.283648 kubelet[2157]: E0317 20:26:44.269460 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="mount-bpf-fs" Mar 17 20:26:44.283648 kubelet[2157]: E0317 20:26:44.269466 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="clean-cilium-state" Mar 17 20:26:44.283648 kubelet[2157]: I0317 20:26:44.269500 2157 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4ab452-f321-4924-aa28-9e67455b0b09" containerName="cilium-agent" Mar 17 20:26:44.283648 kubelet[2157]: I0317 20:26:44.269513 2157 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6a3a27-eff1-44f9-9000-0ff99f375262" containerName="cilium-operator" Mar 17 20:26:44.396611 sshd[3871]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:44.404463 kubelet[2157]: I0317 20:26:44.404351 2157 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-hubble-tls\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.404706 kubelet[2157]: I0317 20:26:44.404482 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-ipsec-secrets\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.404706 kubelet[2157]: I0317 20:26:44.404540 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-clustermesh-secrets\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.404706 kubelet[2157]: I0317 20:26:44.404588 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86g8\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-kube-api-access-w86g8\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.404706 kubelet[2157]: I0317 20:26:44.404633 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-cgroup\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.404706 kubelet[2157]: I0317 20:26:44.404677 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cni-path\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.404724 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-kernel\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.404765 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-bpf-maps\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.404817 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-etc-cni-netd\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.404890 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-lib-modules\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.404948 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-run\") pod \"cilium-fxcnr\" (UID: 
\"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405161 kubelet[2157]: I0317 20:26:44.405025 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-hostproc\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405705 kubelet[2157]: I0317 20:26:44.405083 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-xtables-lock\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405705 kubelet[2157]: I0317 20:26:44.405126 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-config-path\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.405705 kubelet[2157]: I0317 20:26:44.405173 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-net\") pod \"cilium-fxcnr\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " pod="kube-system/cilium-fxcnr" Mar 17 20:26:44.407691 systemd[1]: Started sshd@22-172.24.4.115:22-172.24.4.1:44782.service. Mar 17 20:26:44.408992 systemd[1]: sshd@21-172.24.4.115:22-172.24.4.1:49772.service: Deactivated successfully. Mar 17 20:26:44.414582 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 20:26:44.415633 systemd-logind[1254]: Session 22 logged out. Waiting for processes to exit. 
Mar 17 20:26:44.419231 systemd-logind[1254]: Removed session 22. Mar 17 20:26:44.581567 env[1292]: time="2025-03-17T20:26:44.578456422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxcnr,Uid:bacc2fc2-1f3a-40db-8841-e009d36ed437,Namespace:kube-system,Attempt:0,}" Mar 17 20:26:44.608771 env[1292]: time="2025-03-17T20:26:44.608697699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 20:26:44.608964 env[1292]: time="2025-03-17T20:26:44.608933668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 20:26:44.609059 env[1292]: time="2025-03-17T20:26:44.609036330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 20:26:44.609337 env[1292]: time="2025-03-17T20:26:44.609304870Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae pid=3898 runtime=io.containerd.runc.v2 Mar 17 20:26:44.652318 env[1292]: time="2025-03-17T20:26:44.652252019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fxcnr,Uid:bacc2fc2-1f3a-40db-8841-e009d36ed437,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\"" Mar 17 20:26:44.656795 env[1292]: time="2025-03-17T20:26:44.656758212Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 20:26:44.671837 env[1292]: time="2025-03-17T20:26:44.671769346Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container 
id \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\"" Mar 17 20:26:44.672639 env[1292]: time="2025-03-17T20:26:44.672607758Z" level=info msg="StartContainer for \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\"" Mar 17 20:26:44.732642 env[1292]: time="2025-03-17T20:26:44.732598898Z" level=info msg="StartContainer for \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\" returns successfully" Mar 17 20:26:44.768335 env[1292]: time="2025-03-17T20:26:44.768286848Z" level=info msg="shim disconnected" id=816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8 Mar 17 20:26:44.768600 env[1292]: time="2025-03-17T20:26:44.768581497Z" level=warning msg="cleaning up after shim disconnected" id=816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8 namespace=k8s.io Mar 17 20:26:44.768698 env[1292]: time="2025-03-17T20:26:44.768682796Z" level=info msg="cleaning up dead shim" Mar 17 20:26:44.776036 env[1292]: time="2025-03-17T20:26:44.775997898Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3986 runtime=io.containerd.runc.v2\n" Mar 17 20:26:45.594794 env[1292]: time="2025-03-17T20:26:45.594758812Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 20:26:45.612021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136655138.mount: Deactivated successfully. 
Mar 17 20:26:45.631701 env[1292]: time="2025-03-17T20:26:45.631590843Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\"" Mar 17 20:26:45.632488 env[1292]: time="2025-03-17T20:26:45.632445224Z" level=info msg="StartContainer for \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\"" Mar 17 20:26:45.688951 env[1292]: time="2025-03-17T20:26:45.688671123Z" level=info msg="StartContainer for \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\" returns successfully" Mar 17 20:26:45.712166 env[1292]: time="2025-03-17T20:26:45.712107187Z" level=info msg="shim disconnected" id=a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a Mar 17 20:26:45.712166 env[1292]: time="2025-03-17T20:26:45.712157801Z" level=warning msg="cleaning up after shim disconnected" id=a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a namespace=k8s.io Mar 17 20:26:45.712166 env[1292]: time="2025-03-17T20:26:45.712169103Z" level=info msg="cleaning up dead shim" Mar 17 20:26:45.719553 env[1292]: time="2025-03-17T20:26:45.719503802Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4049 runtime=io.containerd.runc.v2\n" Mar 17 20:26:45.922291 sshd[3885]: Accepted publickey for core from 172.24.4.1 port 44782 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY Mar 17 20:26:45.923600 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 20:26:45.934608 systemd[1]: Started session-23.scope. Mar 17 20:26:45.935366 systemd-logind[1254]: New session 23 of user core. 
Mar 17 20:26:46.520667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a-rootfs.mount: Deactivated successfully. Mar 17 20:26:46.625641 env[1292]: time="2025-03-17T20:26:46.625548249Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 20:26:46.671794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970879793.mount: Deactivated successfully. Mar 17 20:26:46.680709 env[1292]: time="2025-03-17T20:26:46.680661957Z" level=info msg="CreateContainer within sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\"" Mar 17 20:26:46.681453 env[1292]: time="2025-03-17T20:26:46.681422382Z" level=info msg="StartContainer for \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\"" Mar 17 20:26:46.769674 env[1292]: time="2025-03-17T20:26:46.769632856Z" level=info msg="StartContainer for \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\" returns successfully" Mar 17 20:26:46.799730 env[1292]: time="2025-03-17T20:26:46.799287031Z" level=info msg="shim disconnected" id=4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999 Mar 17 20:26:46.799730 env[1292]: time="2025-03-17T20:26:46.799333397Z" level=warning msg="cleaning up after shim disconnected" id=4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999 namespace=k8s.io Mar 17 20:26:46.799730 env[1292]: time="2025-03-17T20:26:46.799345460Z" level=info msg="cleaning up dead shim" Mar 17 20:26:46.808672 env[1292]: time="2025-03-17T20:26:46.808594484Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4117 
runtime=io.containerd.runc.v2\n" Mar 17 20:26:46.893827 sshd[3885]: pam_unix(sshd:session): session closed for user core Mar 17 20:26:46.895414 systemd[1]: Started sshd@23-172.24.4.115:22-172.24.4.1:44798.service. Mar 17 20:26:46.898767 systemd[1]: sshd@22-172.24.4.115:22-172.24.4.1:44782.service: Deactivated successfully. Mar 17 20:26:46.899997 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 20:26:46.900688 systemd-logind[1254]: Session 23 logged out. Waiting for processes to exit. Mar 17 20:26:46.903526 systemd-logind[1254]: Removed session 23. Mar 17 20:26:47.163130 env[1292]: time="2025-03-17T20:26:47.163069270Z" level=info msg="StopPodSandbox for \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\"" Mar 17 20:26:47.163385 env[1292]: time="2025-03-17T20:26:47.163156894Z" level=info msg="TearDown network for sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" successfully" Mar 17 20:26:47.163385 env[1292]: time="2025-03-17T20:26:47.163197440Z" level=info msg="StopPodSandbox for \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" returns successfully" Mar 17 20:26:47.164466 env[1292]: time="2025-03-17T20:26:47.164379571Z" level=info msg="RemovePodSandbox for \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\"" Mar 17 20:26:47.164466 env[1292]: time="2025-03-17T20:26:47.164426638Z" level=info msg="Forcibly stopping sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\"" Mar 17 20:26:47.164716 env[1292]: time="2025-03-17T20:26:47.164484416Z" level=info msg="TearDown network for sandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" successfully" Mar 17 20:26:47.169159 env[1292]: time="2025-03-17T20:26:47.168453117Z" level=info msg="RemovePodSandbox \"85be2c8dfaefafff1fde9438feadd2ed36303e4e25f8bf08d0af9280bf7ea0b5\" returns successfully" Mar 17 20:26:47.169159 env[1292]: time="2025-03-17T20:26:47.168838214Z" level=info 
msg="StopPodSandbox for \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\"" Mar 17 20:26:47.169159 env[1292]: time="2025-03-17T20:26:47.168909767Z" level=info msg="TearDown network for sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" successfully" Mar 17 20:26:47.169159 env[1292]: time="2025-03-17T20:26:47.168942479Z" level=info msg="StopPodSandbox for \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" returns successfully" Mar 17 20:26:47.169159 env[1292]: time="2025-03-17T20:26:47.169144344Z" level=info msg="RemovePodSandbox for \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\"" Mar 17 20:26:47.169869 env[1292]: time="2025-03-17T20:26:47.169166797Z" level=info msg="Forcibly stopping sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\"" Mar 17 20:26:47.169869 env[1292]: time="2025-03-17T20:26:47.169225876Z" level=info msg="TearDown network for sandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" successfully" Mar 17 20:26:47.172898 env[1292]: time="2025-03-17T20:26:47.172844325Z" level=info msg="RemovePodSandbox \"88e4dec6953df7c50126bbfdc211f28090316a76ae97913b4b7818b37bb52209\" returns successfully" Mar 17 20:26:47.277911 kubelet[2157]: E0317 20:26:47.277806 2157 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 20:26:47.521011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999-rootfs.mount: Deactivated successfully. 
Mar 17 20:26:47.610934 env[1292]: time="2025-03-17T20:26:47.610839713Z" level=info msg="StopPodSandbox for \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\"" Mar 17 20:26:47.613635 env[1292]: time="2025-03-17T20:26:47.613514845Z" level=info msg="Container to stop \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:26:47.613818 env[1292]: time="2025-03-17T20:26:47.613623888Z" level=info msg="Container to stop \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:26:47.613818 env[1292]: time="2025-03-17T20:26:47.613696613Z" level=info msg="Container to stop \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 20:26:47.620926 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae-shm.mount: Deactivated successfully. Mar 17 20:26:47.695984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae-rootfs.mount: Deactivated successfully. 
Mar 17 20:26:47.697149 env[1292]: time="2025-03-17T20:26:47.696972698Z" level=info msg="shim disconnected" id=dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae Mar 17 20:26:47.697149 env[1292]: time="2025-03-17T20:26:47.697012893Z" level=warning msg="cleaning up after shim disconnected" id=dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae namespace=k8s.io Mar 17 20:26:47.697149 env[1292]: time="2025-03-17T20:26:47.697026999Z" level=info msg="cleaning up dead shim" Mar 17 20:26:47.705965 env[1292]: time="2025-03-17T20:26:47.705925953Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4157 runtime=io.containerd.runc.v2\n" Mar 17 20:26:47.706549 env[1292]: time="2025-03-17T20:26:47.706510040Z" level=info msg="TearDown network for sandbox \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" successfully" Mar 17 20:26:47.706653 env[1292]: time="2025-03-17T20:26:47.706632768Z" level=info msg="StopPodSandbox for \"dd8f6a54a6514464daecf115fdcb36704b59188e8b07dbbb3031b6584fc990ae\" returns successfully" Mar 17 20:26:47.841280 kubelet[2157]: I0317 20:26:47.841115 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cni-path\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.841833 kubelet[2157]: I0317 20:26:47.841796 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86g8\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-kube-api-access-w86g8\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.842772 kubelet[2157]: I0317 20:26:47.842737 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-bpf-maps\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.842992 kubelet[2157]: I0317 20:26:47.842960 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-run\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.843225 kubelet[2157]: I0317 20:26:47.843159 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-hostproc\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.843462 kubelet[2157]: I0317 20:26:47.843390 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-net\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.843663 kubelet[2157]: I0317 20:26:47.843631 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-config-path\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.843944 kubelet[2157]: I0317 20:26:47.843841 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-clustermesh-secrets\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.844226 kubelet[2157]: I0317 20:26:47.841654 
2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cni-path" (OuterVolumeSpecName: "cni-path") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.844488 kubelet[2157]: I0317 20:26:47.844450 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.846468 kubelet[2157]: I0317 20:26:47.844610 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.846833 kubelet[2157]: I0317 20:26:47.844637 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-hostproc" (OuterVolumeSpecName: "hostproc") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.846998 kubelet[2157]: I0317 20:26:47.844663 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.847143 kubelet[2157]: I0317 20:26:47.846012 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.847289 kubelet[2157]: I0317 20:26:47.845918 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-kernel\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.847552 kubelet[2157]: I0317 20:26:47.847516 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-lib-modules\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.847783 kubelet[2157]: I0317 20:26:47.847749 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-ipsec-secrets\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.847964 kubelet[2157]: I0317 20:26:47.847347 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 20:26:47.847964 kubelet[2157]: I0317 20:26:47.847552 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.848144 kubelet[2157]: I0317 20:26:47.848043 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 20:26:47.848270 kubelet[2157]: I0317 20:26:47.848238 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-etc-cni-netd\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.848540 kubelet[2157]: I0317 20:26:47.848506 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-hubble-tls\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " Mar 17 20:26:47.848745 kubelet[2157]: I0317 20:26:47.848713 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-cgroup\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") " 
Mar 17 20:26:47.848957 kubelet[2157]: I0317 20:26:47.848912 2157 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-xtables-lock\") pod \"bacc2fc2-1f3a-40db-8841-e009d36ed437\" (UID: \"bacc2fc2-1f3a-40db-8841-e009d36ed437\") "
Mar 17 20:26:47.849390 kubelet[2157]: I0317 20:26:47.849359 2157 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-etc-cni-netd\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.850548 kubelet[2157]: I0317 20:26:47.850514 2157 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cni-path\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.856644 systemd[1]: var-lib-kubelet-pods-bacc2fc2\x2d1f3a\x2d40db\x2d8841\x2de009d36ed437-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw86g8.mount: Deactivated successfully.
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859380 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-run\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859432 2157 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-bpf-maps\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859444 2157 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-net\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859455 2157 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-hostproc\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859467 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-config-path\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859481 2157 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-host-proc-sys-kernel\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.859641 kubelet[2157]: I0317 20:26:47.859491 2157 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-lib-modules\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.860208 kubelet[2157]: I0317 20:26:47.849242 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:47.860208 kubelet[2157]: I0317 20:26:47.849277 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 20:26:47.860208 kubelet[2157]: I0317 20:26:47.858618 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 20:26:47.860208 kubelet[2157]: I0317 20:26:47.859327 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-kube-api-access-w86g8" (OuterVolumeSpecName: "kube-api-access-w86g8") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "kube-api-access-w86g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 20:26:47.863947 kubelet[2157]: I0317 20:26:47.862493 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 20:26:47.867583 systemd[1]: var-lib-kubelet-pods-bacc2fc2\x2d1f3a\x2d40db\x2d8841\x2de009d36ed437-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:26:47.867910 systemd[1]: var-lib-kubelet-pods-bacc2fc2\x2d1f3a\x2d40db\x2d8841\x2de009d36ed437-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 20:26:47.870353 kubelet[2157]: I0317 20:26:47.870113 2157 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bacc2fc2-1f3a-40db-8841-e009d36ed437" (UID: "bacc2fc2-1f3a-40db-8841-e009d36ed437"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960518 2157 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-clustermesh-secrets\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960550 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-ipsec-secrets\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960561 2157 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-cilium-cgroup\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960572 2157 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bacc2fc2-1f3a-40db-8841-e009d36ed437-xtables-lock\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960582 2157 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-hubble-tls\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:47.960640 kubelet[2157]: I0317 20:26:47.960593 2157 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w86g8\" (UniqueName: \"kubernetes.io/projected/bacc2fc2-1f3a-40db-8841-e009d36ed437-kube-api-access-w86g8\") on node \"ci-3510-3-7-8-ce231ec735.novalocal\" DevicePath \"\""
Mar 17 20:26:48.288642 sshd[4131]: Accepted publickey for core from 172.24.4.1 port 44798 ssh2: RSA SHA256:askbAj8fH1AR/YVu3rDeIrUX52bWj3xTcp0VaHaV6dY
Mar 17 20:26:48.289937 sshd[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 20:26:48.304659 systemd-logind[1254]: New session 24 of user core.
Mar 17 20:26:48.306476 systemd[1]: Started session-24.scope.
Mar 17 20:26:48.519941 systemd[1]: var-lib-kubelet-pods-bacc2fc2\x2d1f3a\x2d40db\x2d8841\x2de009d36ed437-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 20:26:48.615559 kubelet[2157]: I0317 20:26:48.614977 2157 scope.go:117] "RemoveContainer" containerID="4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999"
Mar 17 20:26:48.620286 env[1292]: time="2025-03-17T20:26:48.620213874Z" level=info msg="RemoveContainer for \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\""
Mar 17 20:26:48.628142 env[1292]: time="2025-03-17T20:26:48.627718840Z" level=info msg="RemoveContainer for \"4cde9c4464beda3659c1f1b86d568a1e6667984e88eeb9f62e063eaf21936999\" returns successfully"
Mar 17 20:26:48.630346 kubelet[2157]: I0317 20:26:48.628761 2157 scope.go:117] "RemoveContainer" containerID="a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a"
Mar 17 20:26:48.642495 env[1292]: time="2025-03-17T20:26:48.633216679Z" level=info msg="RemoveContainer for \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\""
Mar 17 20:26:48.642495 env[1292]: time="2025-03-17T20:26:48.638541645Z" level=info msg="RemoveContainer for \"a531262588af00b1f8004c053942b3b54d7b69963e112fd0e25c42659b6eeb4a\" returns successfully"
Mar 17 20:26:48.642495 env[1292]: time="2025-03-17T20:26:48.640912781Z" level=info msg="RemoveContainer for \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\""
Mar 17 20:26:48.642881 kubelet[2157]: I0317 20:26:48.638954 2157 scope.go:117] "RemoveContainer" containerID="816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8"
Mar 17 20:26:48.653918 env[1292]: time="2025-03-17T20:26:48.653845616Z" level=info msg="RemoveContainer for \"816c877b840c828957facc0bc7eaeb0360a96041790aff061bec8f58f1fa27c8\" returns successfully"
Mar 17 20:26:48.685271 kubelet[2157]: I0317 20:26:48.685230 2157 topology_manager.go:215] "Topology Admit Handler" podUID="a4ef37ef-94fc-40f4-97d3-db4b86335ffc" podNamespace="kube-system" podName="cilium-g67tx"
Mar 17 20:26:48.685535 kubelet[2157]: E0317 20:26:48.685522 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" containerName="apply-sysctl-overwrites"
Mar 17 20:26:48.685626 kubelet[2157]: E0317 20:26:48.685616 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" containerName="mount-bpf-fs"
Mar 17 20:26:48.685732 kubelet[2157]: E0317 20:26:48.685721 2157 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" containerName="mount-cgroup"
Mar 17 20:26:48.685836 kubelet[2157]: I0317 20:26:48.685824 2157 memory_manager.go:354] "RemoveStaleState removing state" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" containerName="mount-bpf-fs"
Mar 17 20:26:48.794938 kubelet[2157]: I0317 20:26:48.794902 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-hostproc\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795182 kubelet[2157]: I0317 20:26:48.795165 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-cilium-cgroup\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795276 kubelet[2157]: I0317 20:26:48.795262 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-lib-modules\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795363 kubelet[2157]: I0317 20:26:48.795349 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-xtables-lock\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795467 kubelet[2157]: I0317 20:26:48.795452 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-cilium-run\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795559 kubelet[2157]: I0317 20:26:48.795544 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-clustermesh-secrets\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795642 kubelet[2157]: I0317 20:26:48.795629 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-cilium-ipsec-secrets\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795751 kubelet[2157]: I0317 20:26:48.795724 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-hubble-tls\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795858 kubelet[2157]: I0317 20:26:48.795842 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-bpf-maps\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.795943 kubelet[2157]: I0317 20:26:48.795930 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-cilium-config-path\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.796025 kubelet[2157]: I0317 20:26:48.796013 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-host-proc-sys-net\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.796121 kubelet[2157]: I0317 20:26:48.796089 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-host-proc-sys-kernel\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.796203 kubelet[2157]: I0317 20:26:48.796190 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxvxq\" (UniqueName: \"kubernetes.io/projected/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-kube-api-access-sxvxq\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.796286 kubelet[2157]: I0317 20:26:48.796272 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-cni-path\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.796365 kubelet[2157]: I0317 20:26:48.796352 2157 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a4ef37ef-94fc-40f4-97d3-db4b86335ffc-etc-cni-netd\") pod \"cilium-g67tx\" (UID: \"a4ef37ef-94fc-40f4-97d3-db4b86335ffc\") " pod="kube-system/cilium-g67tx"
Mar 17 20:26:48.999074 env[1292]: time="2025-03-17T20:26:48.998762597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g67tx,Uid:a4ef37ef-94fc-40f4-97d3-db4b86335ffc,Namespace:kube-system,Attempt:0,}"
Mar 17 20:26:49.025848 env[1292]: time="2025-03-17T20:26:49.025739146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 20:26:49.026047 env[1292]: time="2025-03-17T20:26:49.025891780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 20:26:49.026123 env[1292]: time="2025-03-17T20:26:49.025994451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 20:26:49.026536 env[1292]: time="2025-03-17T20:26:49.026389497Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66 pid=4196 runtime=io.containerd.runc.v2
Mar 17 20:26:49.098149 env[1292]: time="2025-03-17T20:26:49.098113039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g67tx,Uid:a4ef37ef-94fc-40f4-97d3-db4b86335ffc,Namespace:kube-system,Attempt:0,} returns sandbox id \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\""
Mar 17 20:26:49.102342 env[1292]: time="2025-03-17T20:26:49.102280089Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 20:26:49.113317 kubelet[2157]: I0317 20:26:49.113292 2157 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bacc2fc2-1f3a-40db-8841-e009d36ed437" path="/var/lib/kubelet/pods/bacc2fc2-1f3a-40db-8841-e009d36ed437/volumes"
Mar 17 20:26:49.164243 env[1292]: time="2025-03-17T20:26:49.164179264Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6d07e513f4741199174945914e02a386b8bbd877f545677afb0c552ef1f70c38\""
Mar 17 20:26:49.165637 env[1292]: time="2025-03-17T20:26:49.165605812Z" level=info msg="StartContainer for \"6d07e513f4741199174945914e02a386b8bbd877f545677afb0c552ef1f70c38\""
Mar 17 20:26:49.217044 env[1292]: time="2025-03-17T20:26:49.217007667Z" level=info msg="StartContainer for \"6d07e513f4741199174945914e02a386b8bbd877f545677afb0c552ef1f70c38\" returns successfully"
Mar 17 20:26:49.256253 env[1292]: time="2025-03-17T20:26:49.255188492Z" level=info msg="shim disconnected" id=6d07e513f4741199174945914e02a386b8bbd877f545677afb0c552ef1f70c38
Mar 17 20:26:49.256787 env[1292]: time="2025-03-17T20:26:49.256739441Z" level=warning msg="cleaning up after shim disconnected" id=6d07e513f4741199174945914e02a386b8bbd877f545677afb0c552ef1f70c38 namespace=k8s.io
Mar 17 20:26:49.256967 env[1292]: time="2025-03-17T20:26:49.256933232Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:49.266825 env[1292]: time="2025-03-17T20:26:49.266775962Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4282 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:49.632679 env[1292]: time="2025-03-17T20:26:49.632169329Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 20:26:49.676232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058345593.mount: Deactivated successfully.
Mar 17 20:26:49.677751 env[1292]: time="2025-03-17T20:26:49.677585528Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed\""
Mar 17 20:26:49.684541 env[1292]: time="2025-03-17T20:26:49.684337823Z" level=info msg="StartContainer for \"6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed\""
Mar 17 20:26:49.767166 env[1292]: time="2025-03-17T20:26:49.767131210Z" level=info msg="StartContainer for \"6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed\" returns successfully"
Mar 17 20:26:49.792873 env[1292]: time="2025-03-17T20:26:49.792813949Z" level=info msg="shim disconnected" id=6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed
Mar 17 20:26:49.792873 env[1292]: time="2025-03-17T20:26:49.792866017Z" level=warning msg="cleaning up after shim disconnected" id=6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed namespace=k8s.io
Mar 17 20:26:49.793089 env[1292]: time="2025-03-17T20:26:49.792878850Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:49.801047 env[1292]: time="2025-03-17T20:26:49.800996347Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4347 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:50.521353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6df462294aa38c8379b795c84cdd0611377dd1765f45d0ad55df2f8ddf7454ed-rootfs.mount: Deactivated successfully.
Mar 17 20:26:50.644136 env[1292]: time="2025-03-17T20:26:50.641577809Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 20:26:50.667711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736067149.mount: Deactivated successfully.
Mar 17 20:26:50.676344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015811047.mount: Deactivated successfully.
Mar 17 20:26:50.685239 env[1292]: time="2025-03-17T20:26:50.685138132Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e50a210fe858ef0a7ef30d6790ef7e142b3da2c0ac077dff3c7fee891b868dc8\""
Mar 17 20:26:50.685898 env[1292]: time="2025-03-17T20:26:50.685853114Z" level=info msg="StartContainer for \"e50a210fe858ef0a7ef30d6790ef7e142b3da2c0ac077dff3c7fee891b868dc8\""
Mar 17 20:26:50.751872 env[1292]: time="2025-03-17T20:26:50.750445355Z" level=info msg="StartContainer for \"e50a210fe858ef0a7ef30d6790ef7e142b3da2c0ac077dff3c7fee891b868dc8\" returns successfully"
Mar 17 20:26:50.777163 env[1292]: time="2025-03-17T20:26:50.776837145Z" level=info msg="shim disconnected" id=e50a210fe858ef0a7ef30d6790ef7e142b3da2c0ac077dff3c7fee891b868dc8
Mar 17 20:26:50.777163 env[1292]: time="2025-03-17T20:26:50.776881548Z" level=warning msg="cleaning up after shim disconnected" id=e50a210fe858ef0a7ef30d6790ef7e142b3da2c0ac077dff3c7fee891b868dc8 namespace=k8s.io
Mar 17 20:26:50.777163 env[1292]: time="2025-03-17T20:26:50.776892338Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:50.785286 env[1292]: time="2025-03-17T20:26:50.785235215Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4405 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:51.699735 env[1292]: time="2025-03-17T20:26:51.697024402Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 20:26:51.728220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914480231.mount: Deactivated successfully.
Mar 17 20:26:51.755339 env[1292]: time="2025-03-17T20:26:51.755280324Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b\""
Mar 17 20:26:51.756545 env[1292]: time="2025-03-17T20:26:51.756516475Z" level=info msg="StartContainer for \"a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b\""
Mar 17 20:26:51.850765 env[1292]: time="2025-03-17T20:26:51.850724772Z" level=info msg="StartContainer for \"a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b\" returns successfully"
Mar 17 20:26:51.871418 env[1292]: time="2025-03-17T20:26:51.871364680Z" level=info msg="shim disconnected" id=a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b
Mar 17 20:26:51.871581 env[1292]: time="2025-03-17T20:26:51.871478352Z" level=warning msg="cleaning up after shim disconnected" id=a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b namespace=k8s.io
Mar 17 20:26:51.871581 env[1292]: time="2025-03-17T20:26:51.871492879Z" level=info msg="cleaning up dead shim"
Mar 17 20:26:51.878926 env[1292]: time="2025-03-17T20:26:51.878885627Z" level=warning msg="cleanup warnings time=\"2025-03-17T20:26:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4460 runtime=io.containerd.runc.v2\n"
Mar 17 20:26:52.047063 kubelet[2157]: I0317 20:26:52.044336 2157 setters.go:580] "Node became not ready" node="ci-3510-3-7-8-ce231ec735.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T20:26:52Z","lastTransitionTime":"2025-03-17T20:26:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 20:26:52.280180 kubelet[2157]: E0317 20:26:52.279995 2157 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 20:26:52.522503 systemd[1]: run-containerd-runc-k8s.io-a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b-runc.1gbC1L.mount: Deactivated successfully.
Mar 17 20:26:52.523241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a45a8d10342b304a77bcb1ccfe2d5a74ef509e27ad352c6c7a87e92123399d0b-rootfs.mount: Deactivated successfully.
Mar 17 20:26:52.669207 env[1292]: time="2025-03-17T20:26:52.669048742Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 20:26:52.714976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582813444.mount: Deactivated successfully.
Mar 17 20:26:52.735451 env[1292]: time="2025-03-17T20:26:52.735179340Z" level=info msg="CreateContainer within sandbox \"1460d2fc33a8116480a75cddf710fe0a1d84473577e82945117c0dec9d934c66\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b\""
Mar 17 20:26:52.736657 env[1292]: time="2025-03-17T20:26:52.736629039Z" level=info msg="StartContainer for \"f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b\""
Mar 17 20:26:52.797952 env[1292]: time="2025-03-17T20:26:52.797854491Z" level=info msg="StartContainer for \"f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b\" returns successfully"
Mar 17 20:26:53.187426 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 20:26:53.239439 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 17 20:26:55.013284 systemd[1]: run-containerd-runc-k8s.io-f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b-runc.sKzRq1.mount: Deactivated successfully.
Mar 17 20:26:56.219459 systemd-networkd[1038]: lxc_health: Link UP
Mar 17 20:26:56.227460 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 20:26:56.227722 systemd-networkd[1038]: lxc_health: Gained carrier
Mar 17 20:26:57.024024 kubelet[2157]: I0317 20:26:57.023955 2157 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g67tx" podStartSLOduration=9.023905242 podStartE2EDuration="9.023905242s" podCreationTimestamp="2025-03-17 20:26:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 20:26:53.706326823 +0000 UTC m=+246.882706205" watchObservedRunningTime="2025-03-17 20:26:57.023905242 +0000 UTC m=+250.200284583"
Mar 17 20:26:57.211975 systemd[1]: run-containerd-runc-k8s.io-f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b-runc.wtKzgd.mount: Deactivated successfully.
Mar 17 20:26:57.696827 systemd-networkd[1038]: lxc_health: Gained IPv6LL
Mar 17 20:26:59.480922 systemd[1]: run-containerd-runc-k8s.io-f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b-runc.Ph0rEa.mount: Deactivated successfully.
Mar 17 20:27:01.665718 systemd[1]: run-containerd-runc-k8s.io-f0cba24c9231d67561071eaf37b7e289993b7ddfff1636ac5bfc1942edf0869b-runc.Ph8lvJ.mount: Deactivated successfully.
Mar 17 20:27:02.039167 sshd[4131]: pam_unix(sshd:session): session closed for user core
Mar 17 20:27:02.042364 systemd-logind[1254]: Session 24 logged out. Waiting for processes to exit.
Mar 17 20:27:02.043583 systemd[1]: sshd@23-172.24.4.115:22-172.24.4.1:44798.service: Deactivated successfully.
Mar 17 20:27:02.044344 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 20:27:02.045766 systemd-logind[1254]: Removed session 24.