Feb 8 23:26:39.968176 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:26:39.968224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:26:39.968252 kernel: BIOS-provided physical RAM map: Feb 8 23:26:39.968270 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 8 23:26:39.968286 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 8 23:26:39.968303 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 8 23:26:39.968323 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Feb 8 23:26:39.968341 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Feb 8 23:26:39.968360 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 8 23:26:39.968377 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 8 23:26:39.968393 kernel: NX (Execute Disable) protection: active Feb 8 23:26:39.968409 kernel: SMBIOS 2.8 present. 
Feb 8 23:26:39.968426 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Feb 8 23:26:39.968442 kernel: Hypervisor detected: KVM Feb 8 23:26:39.968463 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 8 23:26:39.968484 kernel: kvm-clock: cpu 0, msr 64faa001, primary cpu clock Feb 8 23:26:39.968502 kernel: kvm-clock: using sched offset of 5195979737 cycles Feb 8 23:26:39.968521 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 8 23:26:39.968540 kernel: tsc: Detected 1996.249 MHz processor Feb 8 23:26:39.968559 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:26:39.968578 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:26:39.968596 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Feb 8 23:26:39.968615 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:26:39.968636 kernel: ACPI: Early table checksum verification disabled Feb 8 23:26:39.968655 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Feb 8 23:26:39.968673 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:26:39.968692 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:26:39.968710 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:26:39.968729 kernel: ACPI: FACS 0x000000007FFE0000 000040 Feb 8 23:26:39.968747 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:26:39.968765 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 8 23:26:39.968783 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Feb 8 23:26:39.968804 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Feb 8 23:26:39.968822 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Feb 8 23:26:39.968840 kernel: 
ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f] Feb 8 23:26:39.984913 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Feb 8 23:26:39.984934 kernel: No NUMA configuration found Feb 8 23:26:39.984953 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Feb 8 23:26:39.984971 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Feb 8 23:26:39.984991 kernel: Zone ranges: Feb 8 23:26:39.985024 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:26:39.985044 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Feb 8 23:26:39.985063 kernel: Normal empty Feb 8 23:26:39.985082 kernel: Movable zone start for each node Feb 8 23:26:39.985101 kernel: Early memory node ranges Feb 8 23:26:39.985120 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 8 23:26:39.985143 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Feb 8 23:26:39.985181 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Feb 8 23:26:39.985202 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:26:39.985221 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 8 23:26:39.985241 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Feb 8 23:26:39.985259 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 8 23:26:39.985278 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 8 23:26:39.985298 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:26:39.985317 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 8 23:26:39.985339 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 8 23:26:39.985359 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 8 23:26:39.985378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 8 23:26:39.985397 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 8 23:26:39.985416 kernel: ACPI: Using ACPI (MADT) for 
SMP configuration information Feb 8 23:26:39.985434 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:26:39.985454 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Feb 8 23:26:39.985473 kernel: Booting paravirtualized kernel on KVM Feb 8 23:26:39.985493 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:26:39.985512 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:26:39.985536 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:26:39.985555 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:26:39.985573 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:26:39.985592 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Feb 8 23:26:39.985611 kernel: kvm-guest: PV spinlocks disabled, no host support Feb 8 23:26:39.985630 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805 Feb 8 23:26:39.985649 kernel: Policy zone: DMA32 Feb 8 23:26:39.985672 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:26:39.985696 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Feb 8 23:26:39.985715 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 8 23:26:39.985735 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:26:39.985754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:26:39.985774 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved) Feb 8 23:26:39.985793 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:26:39.985812 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:26:39.985831 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:26:39.985891 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:26:39.985913 kernel: rcu: RCU event tracing is enabled. Feb 8 23:26:39.985933 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:26:39.985953 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:26:39.985972 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:26:39.985991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 8 23:26:39.986010 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:26:39.986029 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 8 23:26:39.986048 kernel: Console: colour VGA+ 80x25 Feb 8 23:26:39.986070 kernel: printk: console [tty0] enabled Feb 8 23:26:39.986089 kernel: printk: console [ttyS0] enabled Feb 8 23:26:39.986108 kernel: ACPI: Core revision 20210730 Feb 8 23:26:39.986128 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:26:39.986147 kernel: x2apic enabled Feb 8 23:26:39.986165 kernel: Switched APIC routing to physical x2apic. 
Feb 8 23:26:39.986184 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 8 23:26:39.986204 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 8 23:26:39.986223 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Feb 8 23:26:39.986242 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 8 23:26:39.986265 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 8 23:26:39.986284 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:26:39.986303 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:26:39.986323 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:26:39.986342 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:26:39.986361 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:26:39.986380 kernel: x86/fpu: x87 FPU will use FXSAVE Feb 8 23:26:39.986398 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:26:39.986417 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:26:39.986439 kernel: LSM: Security Framework initializing Feb 8 23:26:39.986458 kernel: SELinux: Initializing. Feb 8 23:26:39.986477 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:26:39.986496 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 8 23:26:39.986516 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Feb 8 23:26:39.986535 kernel: Performance Events: AMD PMU driver. Feb 8 23:26:39.986553 kernel: ... version: 0 Feb 8 23:26:39.986572 kernel: ... bit width: 48 Feb 8 23:26:39.986592 kernel: ... generic registers: 4 Feb 8 23:26:39.986625 kernel: ... value mask: 0000ffffffffffff Feb 8 23:26:39.986645 kernel: ... max period: 00007fffffffffff Feb 8 23:26:39.986668 kernel: ... fixed-purpose events: 0 Feb 8 23:26:39.986688 kernel: ... 
event mask: 000000000000000f Feb 8 23:26:39.986708 kernel: signal: max sigframe size: 1440 Feb 8 23:26:39.986728 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:26:39.986747 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:26:39.986767 kernel: x86: Booting SMP configuration: Feb 8 23:26:39.986790 kernel: .... node #0, CPUs: #1 Feb 8 23:26:39.986810 kernel: kvm-clock: cpu 1, msr 64faa041, secondary cpu clock Feb 8 23:26:39.986830 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Feb 8 23:26:39.986911 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:26:39.986933 kernel: smpboot: Max logical packages: 2 Feb 8 23:26:39.986954 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Feb 8 23:26:39.986973 kernel: devtmpfs: initialized Feb 8 23:26:39.986993 kernel: x86/mm: Memory block size: 128MB Feb 8 23:26:39.987013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:26:39.987039 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:26:39.987059 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:26:39.987079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:26:39.987099 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:26:39.987119 kernel: audit: type=2000 audit(1707434799.436:1): state=initialized audit_enabled=0 res=1 Feb 8 23:26:39.987139 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:26:39.987159 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:26:39.987178 kernel: cpuidle: using governor menu Feb 8 23:26:39.987198 kernel: ACPI: bus type PCI registered Feb 8 23:26:39.987222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:26:39.987242 kernel: dca service started, version 1.12.1 Feb 8 23:26:39.987261 kernel: PCI: Using configuration type 1 for base access Feb 8 23:26:39.987281 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Feb 8 23:26:39.987302 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:26:39.987322 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:26:39.987342 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:26:39.987362 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:26:39.987381 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:26:39.987405 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:26:39.987425 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:26:39.987445 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:26:39.987464 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:26:39.987484 kernel: ACPI: Interpreter enabled Feb 8 23:26:39.987504 kernel: ACPI: PM: (supports S0 S3 S5) Feb 8 23:26:39.987524 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:26:39.987544 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:26:39.987565 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Feb 8 23:26:39.987588 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 8 23:26:39.987959 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 8 23:26:39.988177 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Feb 8 23:26:39.988213 kernel: acpiphp: Slot [3] registered Feb 8 23:26:39.988233 kernel: acpiphp: Slot [4] registered Feb 8 23:26:39.988253 kernel: acpiphp: Slot [5] registered Feb 8 23:26:39.988273 kernel: acpiphp: Slot [6] registered Feb 8 23:26:39.988300 kernel: acpiphp: Slot [7] registered Feb 8 23:26:39.988320 kernel: acpiphp: Slot [8] registered Feb 8 23:26:39.988339 kernel: acpiphp: Slot [9] registered Feb 8 23:26:39.988359 kernel: acpiphp: Slot [10] registered Feb 8 23:26:39.988379 kernel: acpiphp: Slot [11] registered Feb 8 23:26:39.988398 kernel: acpiphp: Slot [12] registered Feb 8 23:26:39.988417 kernel: acpiphp: Slot [13] registered Feb 8 23:26:39.988437 kernel: acpiphp: Slot [14] registered Feb 8 23:26:39.988457 kernel: acpiphp: Slot [15] registered Feb 8 23:26:39.988476 kernel: acpiphp: Slot [16] registered Feb 8 23:26:39.988499 kernel: acpiphp: Slot [17] registered Feb 8 23:26:39.988519 kernel: acpiphp: Slot [18] registered Feb 8 23:26:39.988538 kernel: acpiphp: Slot [19] registered Feb 8 23:26:39.988558 kernel: acpiphp: Slot [20] registered Feb 8 23:26:39.988577 kernel: acpiphp: Slot [21] registered Feb 8 23:26:39.988597 kernel: acpiphp: Slot [22] registered Feb 8 23:26:39.988617 kernel: acpiphp: Slot [23] registered Feb 8 23:26:39.988636 kernel: acpiphp: Slot [24] registered Feb 8 23:26:39.988656 kernel: acpiphp: Slot [25] registered Feb 8 23:26:39.988679 kernel: acpiphp: Slot [26] registered Feb 8 23:26:39.988699 kernel: acpiphp: Slot [27] registered Feb 8 23:26:39.988718 kernel: acpiphp: Slot [28] registered Feb 8 23:26:39.988738 kernel: acpiphp: Slot [29] registered Feb 8 23:26:39.988757 kernel: acpiphp: Slot [30] registered Feb 8 23:26:39.988777 kernel: acpiphp: Slot [31] registered Feb 8 23:26:39.988796 kernel: PCI host bridge to bus 0000:00 Feb 8 23:26:39.999026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 8 23:26:39.999179 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 8 23:26:39.999326 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 8 23:26:39.999462 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 8 23:26:39.999596 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Feb 8 23:26:39.999729 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 8 23:26:39.999959 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 8 23:26:40.000147 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 8 23:26:40.000340 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Feb 8 23:26:40.000500 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Feb 8 23:26:40.000654 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 8 23:26:40.000809 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 8 23:26:40.001002 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 8 23:26:40.001160 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 8 23:26:40.001352 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 8 23:26:40.001516 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Feb 8 23:26:40.001671 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Feb 8 23:26:40.001899 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Feb 8 23:26:40.002065 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Feb 8 23:26:40.002420 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Feb 8 23:26:40.002529 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Feb 8 23:26:40.002622 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Feb 8 23:26:40.002709 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 8 23:26:40.004425 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 8 23:26:40.004530 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Feb 8 23:26:40.004624 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Feb 8 23:26:40.004714 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Feb 8 23:26:40.004804 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Feb 8 23:26:40.004929 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 8 23:26:40.005017 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 8 23:26:40.005102 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Feb 8 23:26:40.005199 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Feb 8 23:26:40.005298 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Feb 8 23:26:40.005391 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Feb 8 23:26:40.005484 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Feb 8 23:26:40.005594 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Feb 8 23:26:40.005690 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Feb 8 23:26:40.005783 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Feb 8 23:26:40.005796 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 8 23:26:40.005805 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 8 23:26:40.005814 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 8 23:26:40.005823 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 8 23:26:40.005832 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 8 23:26:40.005859 kernel: iommu: Default domain type: Translated Feb 8 23:26:40.005868 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 8 23:26:40.005966 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Feb 8 23:26:40.006060 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 8 23:26:40.006153 kernel: pci 0000:00:02.0: vgaarb: bridge 
control possible Feb 8 23:26:40.006166 kernel: vgaarb: loaded Feb 8 23:26:40.006175 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:26:40.006184 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 8 23:26:40.006193 kernel: PTP clock support registered Feb 8 23:26:40.006205 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:26:40.006214 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 8 23:26:40.006223 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 8 23:26:40.006232 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Feb 8 23:26:40.006241 kernel: clocksource: Switched to clocksource kvm-clock Feb 8 23:26:40.006249 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:26:40.006258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:26:40.006266 kernel: pnp: PnP ACPI init Feb 8 23:26:40.006378 kernel: pnp 00:03: [dma 2] Feb 8 23:26:40.006396 kernel: pnp: PnP ACPI: found 5 devices Feb 8 23:26:40.006405 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:26:40.006413 kernel: NET: Registered PF_INET protocol family Feb 8 23:26:40.006422 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 8 23:26:40.006431 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 8 23:26:40.006441 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:26:40.006451 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:26:40.006459 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 8 23:26:40.006469 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 8 23:26:40.006477 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:26:40.006485 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 8 23:26:40.006493 kernel: NET: 
Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:26:40.006502 kernel: NET: Registered PF_XDP protocol family Feb 8 23:26:40.006581 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 8 23:26:40.006663 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 8 23:26:40.006742 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 8 23:26:40.006820 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 8 23:26:40.006918 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Feb 8 23:26:40.007008 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Feb 8 23:26:40.007097 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 8 23:26:40.007184 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 8 23:26:40.007196 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:26:40.007205 kernel: Initialise system trusted keyrings Feb 8 23:26:40.007213 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 8 23:26:40.007225 kernel: Key type asymmetric registered Feb 8 23:26:40.007233 kernel: Asymmetric key parser 'x509' registered Feb 8 23:26:40.007241 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:26:40.007249 kernel: io scheduler mq-deadline registered Feb 8 23:26:40.007258 kernel: io scheduler kyber registered Feb 8 23:26:40.007266 kernel: io scheduler bfq registered Feb 8 23:26:40.007274 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:26:40.007283 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Feb 8 23:26:40.007291 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 8 23:26:40.007300 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 8 23:26:40.007310 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 8 23:26:40.007318 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:26:40.007326 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:26:40.007335 kernel: 
random: crng init done Feb 8 23:26:40.007343 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 8 23:26:40.007351 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 8 23:26:40.007360 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 8 23:26:40.007455 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 8 23:26:40.007472 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 8 23:26:40.007559 kernel: rtc_cmos 00:04: registered as rtc0 Feb 8 23:26:40.007641 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:26:39 UTC (1707434799) Feb 8 23:26:40.007720 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 8 23:26:40.007732 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:26:40.007741 kernel: Segment Routing with IPv6 Feb 8 23:26:40.007749 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:26:40.007757 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:26:40.007766 kernel: Key type dns_resolver registered Feb 8 23:26:40.007777 kernel: IPI shorthand broadcast: enabled Feb 8 23:26:40.007785 kernel: sched_clock: Marking stable (714006055, 115291952)->(892511565, -63213558) Feb 8 23:26:40.007793 kernel: registered taskstats version 1 Feb 8 23:26:40.007802 kernel: Loading compiled-in X.509 certificates Feb 8 23:26:40.007810 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:26:40.007819 kernel: Key type .fscrypt registered Feb 8 23:26:40.007827 kernel: Key type fscrypt-provisioning registered Feb 8 23:26:40.007835 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 8 23:26:40.007876 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:26:40.007885 kernel: ima: No architecture policies found Feb 8 23:26:40.007893 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:26:40.007901 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:26:40.007909 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:26:40.007918 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:26:40.007926 kernel: Run /init as init process Feb 8 23:26:40.007934 kernel: with arguments: Feb 8 23:26:40.007942 kernel: /init Feb 8 23:26:40.007952 kernel: with environment: Feb 8 23:26:40.007960 kernel: HOME=/ Feb 8 23:26:40.007968 kernel: TERM=linux Feb 8 23:26:40.007976 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:26:40.007987 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:26:40.007998 systemd[1]: Detected virtualization kvm. Feb 8 23:26:40.008008 systemd[1]: Detected architecture x86-64. Feb 8 23:26:40.008016 systemd[1]: Running in initrd. Feb 8 23:26:40.008027 systemd[1]: No hostname configured, using default hostname. Feb 8 23:26:40.008036 systemd[1]: Hostname set to . Feb 8 23:26:40.008045 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:26:40.008054 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:26:40.008063 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:26:40.008071 systemd[1]: Reached target cryptsetup.target. Feb 8 23:26:40.008080 systemd[1]: Reached target paths.target. Feb 8 23:26:40.008089 systemd[1]: Reached target slices.target. Feb 8 23:26:40.008099 systemd[1]: Reached target swap.target. 
Feb 8 23:26:40.008108 systemd[1]: Reached target timers.target. Feb 8 23:26:40.008117 systemd[1]: Listening on iscsid.socket. Feb 8 23:26:40.008125 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:26:40.008134 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:26:40.008143 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:26:40.008152 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:26:40.008161 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:26:40.008171 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:26:40.008180 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:26:40.008189 systemd[1]: Reached target sockets.target. Feb 8 23:26:40.008197 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:26:40.008213 systemd[1]: Finished network-cleanup.service. Feb 8 23:26:40.008224 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:26:40.008234 systemd[1]: Starting systemd-journald.service... Feb 8 23:26:40.008243 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:26:40.008252 systemd[1]: Starting systemd-resolved.service... Feb 8 23:26:40.008261 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:26:40.008270 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:26:40.008279 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:26:40.008288 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:26:40.008297 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:26:40.008306 kernel: audit: type=1130 audit(1707434799.972:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.008320 systemd-journald[185]: Journal started Feb 8 23:26:40.008364 systemd-journald[185]: Runtime Journal (/run/log/journal/20afcd259fab4e9084b4603b5b049e18) is 4.9M, max 39.5M, 34.5M free. 
Feb 8 23:26:39.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:39.964483 systemd-modules-load[186]: Inserted module 'overlay' Feb 8 23:26:40.055493 systemd[1]: Started systemd-journald.service. Feb 8 23:26:40.055537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:26:40.055552 kernel: Bridge firewalling registered Feb 8 23:26:40.055563 kernel: SCSI subsystem initialized Feb 8 23:26:40.055574 kernel: audit: type=1130 audit(1707434800.050:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.055585 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:26:40.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.017633 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 8 23:26:40.060912 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:26:40.060928 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:26:40.026352 systemd-resolved[187]: Positive Trust Anchors: Feb 8 23:26:40.026361 systemd-resolved[187]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:26:40.026399 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:26:40.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.028999 systemd-resolved[187]: Defaulting to hostname 'linux'. Feb 8 23:26:40.073123 kernel: audit: type=1130 audit(1707434800.064:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.073155 kernel: audit: type=1130 audit(1707434800.067:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.051560 systemd[1]: Started systemd-resolved.service. Feb 8 23:26:40.065277 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:26:40.068225 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 8 23:26:40.068768 systemd[1]: Reached target nss-lookup.target. 
Feb 8 23:26:40.080797 kernel: audit: type=1130 audit(1707434800.074:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.073543 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:26:40.074856 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:26:40.076334 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:26:40.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.087430 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:26:40.092877 kernel: audit: type=1130 audit(1707434800.087:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.094126 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:26:40.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.095443 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:26:40.099892 kernel: audit: type=1130 audit(1707434800.093:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:40.106706 dracut-cmdline[209]: dracut-dracut-053 Feb 8 23:26:40.109676 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:26:40.175915 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:26:40.189898 kernel: iscsi: registered transport (tcp) Feb 8 23:26:40.213608 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:26:40.213719 kernel: QLogic iSCSI HBA Driver Feb 8 23:26:40.267098 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:26:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.268579 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:26:40.274495 kernel: audit: type=1130 audit(1707434800.266:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.345005 kernel: raid6: sse2x4 gen() 13335 MB/s Feb 8 23:26:40.361945 kernel: raid6: sse2x4 xor() 7232 MB/s Feb 8 23:26:40.378952 kernel: raid6: sse2x2 gen() 14469 MB/s Feb 8 23:26:40.395940 kernel: raid6: sse2x2 xor() 8794 MB/s Feb 8 23:26:40.412942 kernel: raid6: sse2x1 gen() 11427 MB/s Feb 8 23:26:40.430720 kernel: raid6: sse2x1 xor() 7002 MB/s Feb 8 23:26:40.430788 kernel: raid6: using algorithm sse2x2 gen() 14469 MB/s Feb 8 23:26:40.430819 kernel: raid6: .... 
xor() 8794 MB/s, rmw enabled Feb 8 23:26:40.431535 kernel: raid6: using ssse3x2 recovery algorithm Feb 8 23:26:40.445898 kernel: xor: measuring software checksum speed Feb 8 23:26:40.446891 kernel: prefetch64-sse : 18464 MB/sec Feb 8 23:26:40.449186 kernel: generic_sse : 16818 MB/sec Feb 8 23:26:40.449242 kernel: xor: using function: prefetch64-sse (18464 MB/sec) Feb 8 23:26:40.561908 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:26:40.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.578467 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:26:40.584957 kernel: audit: type=1130 audit(1707434800.578:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.585482 systemd[1]: Starting systemd-udevd.service... Feb 8 23:26:40.583000 audit: BPF prog-id=7 op=LOAD Feb 8 23:26:40.583000 audit: BPF prog-id=8 op=LOAD Feb 8 23:26:40.618160 systemd-udevd[386]: Using default interface naming scheme 'v252'. Feb 8 23:26:40.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.629698 systemd[1]: Started systemd-udevd.service. Feb 8 23:26:40.637428 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:26:40.660115 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Feb 8 23:26:40.712153 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:26:40.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:40.714264 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:26:40.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:40.751876 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:26:40.810193 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 8 23:26:40.818489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 8 23:26:40.818516 kernel: GPT:17805311 != 41943039 Feb 8 23:26:40.818527 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 8 23:26:40.819511 kernel: GPT:17805311 != 41943039 Feb 8 23:26:40.820199 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 8 23:26:40.821975 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:40.858903 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (439) Feb 8 23:26:40.869337 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:26:40.909811 kernel: libata version 3.00 loaded. Feb 8 23:26:40.909863 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 8 23:26:40.910090 kernel: scsi host0: ata_piix Feb 8 23:26:40.910250 kernel: scsi host1: ata_piix Feb 8 23:26:40.910363 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 8 23:26:40.910377 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 8 23:26:40.914240 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:26:40.917378 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:26:40.917931 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:26:40.923574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:26:40.924957 systemd[1]: Starting disk-uuid.service... 
Feb 8 23:26:40.936499 disk-uuid[458]: Primary Header is updated. Feb 8 23:26:40.936499 disk-uuid[458]: Secondary Entries is updated. Feb 8 23:26:40.936499 disk-uuid[458]: Secondary Header is updated. Feb 8 23:26:40.942869 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:40.947864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:41.960920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 8 23:26:41.962646 disk-uuid[459]: The operation has completed successfully. Feb 8 23:26:42.036838 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:26:42.037098 systemd[1]: Finished disk-uuid.service. Feb 8 23:26:42.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.048332 systemd[1]: Starting verity-setup.service... Feb 8 23:26:42.076882 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 8 23:26:42.151610 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:26:42.155616 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:26:42.162762 systemd[1]: Finished verity-setup.service. Feb 8 23:26:42.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.298961 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:26:42.299387 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:26:42.300043 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:26:42.300798 systemd[1]: Starting ignition-setup.service... 
Feb 8 23:26:42.301952 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:26:42.321741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:26:42.321808 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:26:42.321837 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:26:42.345558 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:26:42.361780 systemd[1]: Finished ignition-setup.service. Feb 8 23:26:42.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.363311 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:26:42.462402 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:26:42.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.464000 audit: BPF prog-id=9 op=LOAD Feb 8 23:26:42.466934 systemd[1]: Starting systemd-networkd.service... Feb 8 23:26:42.513602 systemd-networkd[629]: lo: Link UP Feb 8 23:26:42.513615 systemd-networkd[629]: lo: Gained carrier Feb 8 23:26:42.514225 systemd-networkd[629]: Enumeration completed Feb 8 23:26:42.514493 systemd-networkd[629]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:26:42.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.516953 systemd[1]: Started systemd-networkd.service. Feb 8 23:26:42.517525 systemd-networkd[629]: eth0: Link UP Feb 8 23:26:42.517529 systemd-networkd[629]: eth0: Gained carrier Feb 8 23:26:42.518900 systemd[1]: Reached target network.target. 
Feb 8 23:26:42.521810 systemd[1]: Starting iscsiuio.service... Feb 8 23:26:42.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.531564 systemd[1]: Started iscsiuio.service. Feb 8 23:26:42.533668 systemd[1]: Starting iscsid.service... Feb 8 23:26:42.534962 systemd-networkd[629]: eth0: DHCPv4 address 172.24.4.40/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:26:42.540740 iscsid[638]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:26:42.540740 iscsid[638]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:26:42.540740 iscsid[638]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:26:42.540740 iscsid[638]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:26:42.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.551524 iscsid[638]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:26:42.551524 iscsid[638]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:26:42.543777 systemd[1]: Started iscsid.service. Feb 8 23:26:42.545664 systemd[1]: Starting dracut-initqueue.service... 
Feb 8 23:26:42.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.557350 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:26:42.557962 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:26:42.559179 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:26:42.559619 systemd[1]: Reached target remote-fs.target. Feb 8 23:26:42.560900 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:26:42.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.569446 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:26:42.681820 ignition[557]: Ignition 2.14.0 Feb 8 23:26:42.683064 ignition[557]: Stage: fetch-offline Feb 8 23:26:42.683251 ignition[557]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:42.683296 ignition[557]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:42.685604 ignition[557]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:42.685833 ignition[557]: parsed url from cmdline: "" Feb 8 23:26:42.685875 ignition[557]: no config URL provided Feb 8 23:26:42.685892 ignition[557]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:26:42.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:42.688892 systemd[1]: Finished ignition-fetch-offline.service. 
Feb 8 23:26:42.685913 ignition[557]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:26:42.692063 systemd[1]: Starting ignition-fetch.service... Feb 8 23:26:42.685937 ignition[557]: failed to fetch config: resource requires networking Feb 8 23:26:42.686752 ignition[557]: Ignition finished successfully Feb 8 23:26:42.710833 ignition[652]: Ignition 2.14.0 Feb 8 23:26:42.710954 ignition[652]: Stage: fetch Feb 8 23:26:42.711202 ignition[652]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:42.711243 ignition[652]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:42.713737 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:42.713994 ignition[652]: parsed url from cmdline: "" Feb 8 23:26:42.714004 ignition[652]: no config URL provided Feb 8 23:26:42.714018 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:26:42.714036 ignition[652]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:26:42.724346 ignition[652]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 8 23:26:42.724420 ignition[652]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 8 23:26:42.726270 ignition[652]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 8 23:26:42.900148 ignition[652]: GET result: OK Feb 8 23:26:42.900413 ignition[652]: parsing config with SHA512: 5905c995334a422604d20ac9db6b61e47a337c724c0e81203d45972ade07fa5d642688bdee3ca288bb7df9924e5ee4bcae286e03d0bd8a0006462383048315f3 Feb 8 23:26:43.016381 unknown[652]: fetched base config from "system" Feb 8 23:26:43.016413 unknown[652]: fetched base config from "system" Feb 8 23:26:43.016427 unknown[652]: fetched user config from "openstack" Feb 8 23:26:43.019610 ignition[652]: fetch: fetch complete Feb 8 23:26:43.019625 ignition[652]: fetch: fetch passed Feb 8 23:26:43.019726 ignition[652]: Ignition finished successfully Feb 8 23:26:43.026314 systemd[1]: Finished ignition-fetch.service. Feb 8 23:26:43.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.029273 systemd[1]: Starting ignition-kargs.service... Feb 8 23:26:43.048437 ignition[658]: Ignition 2.14.0 Feb 8 23:26:43.048468 ignition[658]: Stage: kargs Feb 8 23:26:43.048681 ignition[658]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:43.048726 ignition[658]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:43.050816 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:43.053587 ignition[658]: kargs: kargs passed Feb 8 23:26:43.055117 systemd[1]: Finished ignition-kargs.service. Feb 8 23:26:43.053663 ignition[658]: Ignition finished successfully Feb 8 23:26:43.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:43.067341 systemd[1]: Starting ignition-disks.service... Feb 8 23:26:43.075239 ignition[664]: Ignition 2.14.0 Feb 8 23:26:43.075252 ignition[664]: Stage: disks Feb 8 23:26:43.075361 ignition[664]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:43.075380 ignition[664]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:43.076510 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:43.078324 ignition[664]: disks: disks passed Feb 8 23:26:43.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.081106 systemd[1]: Finished ignition-disks.service. Feb 8 23:26:43.078391 ignition[664]: Ignition finished successfully Feb 8 23:26:43.082011 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:26:43.083348 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:26:43.084874 systemd[1]: Reached target local-fs.target. Feb 8 23:26:43.086397 systemd[1]: Reached target sysinit.target. Feb 8 23:26:43.087998 systemd[1]: Reached target basic.target. Feb 8 23:26:43.090514 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:26:43.116005 systemd-fsck[671]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 8 23:26:43.130493 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:26:43.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.133367 systemd[1]: Mounting sysroot.mount... Feb 8 23:26:43.154918 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 8 23:26:43.155610 systemd[1]: Mounted sysroot.mount. Feb 8 23:26:43.157103 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:26:43.161461 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:26:43.163436 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 8 23:26:43.164930 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 8 23:26:43.171191 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:26:43.171256 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:26:43.184446 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:26:43.188658 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:26:43.200706 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:26:43.207790 initrd-setup-root[682]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:26:43.225912 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (683) Feb 8 23:26:43.226115 initrd-setup-root[691]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:26:43.235605 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:26:43.235644 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:26:43.235656 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:26:43.237455 initrd-setup-root[699]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:26:43.244197 initrd-setup-root[723]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:26:43.251213 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:26:43.322627 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:26:43.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:43.326021 systemd[1]: Starting ignition-mount.service... Feb 8 23:26:43.328105 systemd[1]: Starting sysroot-boot.service... Feb 8 23:26:43.346737 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:26:43.346955 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:26:43.373653 ignition[746]: INFO : Ignition 2.14.0 Feb 8 23:26:43.373653 ignition[746]: INFO : Stage: mount Feb 8 23:26:43.374890 ignition[746]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:43.374890 ignition[746]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:43.377485 ignition[746]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:43.378811 ignition[746]: INFO : mount: mount passed Feb 8 23:26:43.381022 ignition[746]: INFO : Ignition finished successfully Feb 8 23:26:43.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.381203 systemd[1]: Finished ignition-mount.service. Feb 8 23:26:43.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.387619 systemd[1]: Finished sysroot-boot.service. 
Feb 8 23:26:43.407942 coreos-metadata[677]: Feb 08 23:26:43.407 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 8 23:26:43.426155 coreos-metadata[677]: Feb 08 23:26:43.426 INFO Fetch successful Feb 8 23:26:43.426863 coreos-metadata[677]: Feb 08 23:26:43.426 INFO wrote hostname ci-3510-3-2-4-bfb6381473.novalocal to /sysroot/etc/hostname Feb 8 23:26:43.431216 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 8 23:26:43.431395 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 8 23:26:43.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:43.434745 systemd[1]: Starting ignition-files.service... Feb 8 23:26:43.448305 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:26:43.462699 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (756) Feb 8 23:26:43.467606 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:26:43.467661 kernel: BTRFS info (device vda6): using free space tree Feb 8 23:26:43.467698 kernel: BTRFS info (device vda6): has skinny extents Feb 8 23:26:43.483111 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:26:43.503556 ignition[775]: INFO : Ignition 2.14.0 Feb 8 23:26:43.503556 ignition[775]: INFO : Stage: files Feb 8 23:26:43.506124 ignition[775]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:26:43.506124 ignition[775]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:26:43.506124 ignition[775]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:26:43.514267 ignition[775]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:26:43.517712 ignition[775]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:26:43.519466 ignition[775]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:26:43.528937 ignition[775]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:26:43.531142 ignition[775]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:26:43.534120 unknown[775]: wrote ssh authorized keys file for user: core Feb 8 23:26:43.535655 ignition[775]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:26:43.537285 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:26:43.537285 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:26:44.112650 systemd-networkd[629]: eth0: Gained IPv6LL Feb 8 23:26:44.320234 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:26:44.655036 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:26:44.657936 ignition[775]: INFO 
: files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:26:44.657936 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:26:45.190436 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:26:45.655871 ignition[775]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 8 23:26:45.655871 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:26:45.655871 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:26:45.665232 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 8 23:26:46.152726 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:26:47.044062 ignition[775]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 8 23:26:47.045717 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:26:47.050293 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:26:47.050293 ignition[775]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:26:47.050293 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:26:47.050293 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:26:47.186903 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:26:48.105451 ignition[775]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Feb 8 23:26:48.105451 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:26:48.111513 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:26:48.111513 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:26:48.216270 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:26:50.194653 ignition[775]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 8 23:26:50.196442 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:26:50.197283 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:26:50.198149 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET 
https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:26:50.306193 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 8 23:26:51.281119 ignition[775]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 8 23:26:51.282959 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:26:51.283785 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 8 23:26:51.284659 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 8 23:26:51.872613 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 8 23:26:52.358190 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 8 23:26:52.358190 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:26:52.362774 ignition[775]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:26:52.433218 kernel: kauditd_printk_skb: 26 callbacks suppressed
Feb 8 23:26:52.433240 kernel: audit: type=1130 audit(1707434812.368:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.433254 kernel: audit: type=1130 audit(1707434812.389:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.433266 kernel: audit: type=1130 audit(1707434812.400:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.433281 kernel: audit: type=1131 audit(1707434812.400:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(14): op(15): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Feb 8 23:26:52.433418 ignition[775]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:26:52.457866 kernel: audit: type=1130 audit(1707434812.440:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.457894 kernel: audit: type=1131 audit(1707434812.440:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.367485 systemd[1]: Finished ignition-files.service.
Feb 8 23:26:52.458823 ignition[775]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:26:52.458823 ignition[775]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:26:52.458823 ignition[775]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:26:52.458823 ignition[775]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:26:52.458823 ignition[775]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:26:52.458823 ignition[775]: INFO : files: files passed
Feb 8 23:26:52.458823 ignition[775]: INFO : Ignition finished successfully
Feb 8 23:26:52.471544 kernel: audit: type=1130 audit(1707434812.460:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.370235 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:26:52.472167 initrd-setup-root-after-ignition[800]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:26:52.383392 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:26:52.384160 systemd[1]: Starting ignition-quench.service...
Feb 8 23:26:52.389050 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:26:52.390105 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:26:52.390184 systemd[1]: Finished ignition-quench.service.
Feb 8 23:26:52.401149 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:26:52.420118 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:26:52.440172 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:26:52.440267 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:26:52.483204 kernel: audit: type=1131 audit(1707434812.478:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.441228 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:26:52.448708 systemd[1]: Reached target initrd.target.
Feb 8 23:26:52.449884 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:26:52.450640 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:26:52.461102 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:26:52.465071 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:26:52.476095 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:26:52.476883 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:26:52.477883 systemd[1]: Stopped target timers.target.
Feb 8 23:26:52.478675 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:26:52.478820 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:26:52.479715 systemd[1]: Stopped target initrd.target.
Feb 8 23:26:52.496995 kernel: audit: type=1131 audit(1707434812.492:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.483716 systemd[1]: Stopped target basic.target.
Feb 8 23:26:52.484522 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:26:52.501992 kernel: audit: type=1131 audit(1707434812.497:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.485363 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:26:52.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.486462 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:26:52.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.487374 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:26:52.488259 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:26:52.489172 systemd[1]: Stopped target sysinit.target.
Feb 8 23:26:52.490059 systemd[1]: Stopped target local-fs.target.
Feb 8 23:26:52.506167 iscsid[638]: iscsid shutting down.
Feb 8 23:26:52.490884 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:26:52.491686 systemd[1]: Stopped target swap.target.
Feb 8 23:26:52.492562 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:26:52.492707 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:26:52.493638 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:26:52.497516 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:26:52.497663 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:26:52.498549 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:26:52.498695 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:26:52.502623 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:26:52.502759 systemd[1]: Stopped ignition-files.service.
Feb 8 23:26:52.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.504300 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:26:52.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.507604 systemd[1]: Stopping iscsid.service...
Feb 8 23:26:52.513373 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:26:52.513603 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:26:52.515301 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:26:52.515865 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:26:52.516052 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:26:52.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.516969 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:26:52.517113 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:26:52.523128 systemd[1]: iscsid.service: Deactivated successfully.
Feb 8 23:26:52.523249 systemd[1]: Stopped iscsid.service.
Feb 8 23:26:52.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.529304 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:26:52.535457 ignition[813]: INFO : Ignition 2.14.0
Feb 8 23:26:52.535457 ignition[813]: INFO : Stage: umount
Feb 8 23:26:52.535457 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:26:52.535457 ignition[813]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:26:52.535457 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:26:52.535457 ignition[813]: INFO : umount: umount passed
Feb 8 23:26:52.535457 ignition[813]: INFO : Ignition finished successfully
Feb 8 23:26:52.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.529405 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:26:52.530762 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:26:52.533287 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:26:52.533368 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:26:52.534810 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:26:52.534906 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:26:52.535979 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:26:52.536018 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:26:52.536818 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:26:52.536871 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:26:52.537819 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:26:52.537894 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:26:52.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.538916 systemd[1]: Stopped target network.target.
Feb 8 23:26:52.540295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:26:52.540340 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:26:52.541561 systemd[1]: Stopped target paths.target.
Feb 8 23:26:52.542515 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:26:52.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.545939 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:26:52.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.546793 systemd[1]: Stopped target slices.target.
Feb 8 23:26:52.547798 systemd[1]: Stopped target sockets.target.
Feb 8 23:26:52.548768 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:26:52.548812 systemd[1]: Closed iscsid.socket.
Feb 8 23:26:52.549878 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:26:52.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.549913 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:26:52.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.550910 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:26:52.550951 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:26:52.551991 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:26:52.553069 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:26:52.555118 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:26:52.555941 systemd-networkd[629]: eth0: DHCPv6 lease lost
Feb 8 23:26:52.569000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:26:52.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.557323 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:26:52.557421 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:26:52.558161 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:26:52.573000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:26:52.558270 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:26:52.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.559734 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:26:52.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.559776 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:26:52.560293 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:26:52.560368 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:26:52.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.561641 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:26:52.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.564143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:26:52.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.564252 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:26:52.565197 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:26:52.565567 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:26:52.566764 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:26:52.566815 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:26:52.567577 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:26:52.569883 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:26:52.570381 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:26:52.570474 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:26:52.574943 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:26:52.575048 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:26:52.576374 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:26:52.576507 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:26:52.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.578141 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:26:52.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:26:52.578192 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:26:52.582429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:26:52.582469 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:26:52.583363 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:26:52.583409 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:26:52.584378 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:26:52.584413 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:26:52.585323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:26:52.585370 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:26:52.587094 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:26:52.593127 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:26:52.593196 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:26:52.594485 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:26:52.594566 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:26:52.595307 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:26:52.596941 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:26:52.616936 systemd[1]: Switching root.
Feb 8 23:26:52.638125 systemd-journald[185]: Journal stopped
Feb 8 23:26:56.728705 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Feb 8 23:26:56.728754 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:26:56.728771 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:26:56.728786 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 8 23:26:56.728798 kernel: SELinux: policy capability network_peer_controls=1
Feb 8 23:26:56.728811 kernel: SELinux: policy capability open_perms=1
Feb 8 23:26:56.728822 kernel: SELinux: policy capability extended_socket_class=1
Feb 8 23:26:56.728835 kernel: SELinux: policy capability always_check_network=0
Feb 8 23:26:56.728869 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 8 23:26:56.728881 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 8 23:26:56.728900 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 8 23:26:56.728911 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 8 23:26:56.728924 systemd[1]: Successfully loaded SELinux policy in 92.292ms.
Feb 8 23:26:56.728944 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.946ms.
Feb 8 23:26:56.728958 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:26:56.728971 systemd[1]: Detected virtualization kvm.
Feb 8 23:26:56.728985 systemd[1]: Detected architecture x86-64.
Feb 8 23:26:56.728999 systemd[1]: Detected first boot.
Feb 8 23:26:56.729011 systemd[1]: Hostname set to .
Feb 8 23:26:56.729030 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:26:56.729042 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 8 23:26:56.729054 systemd[1]: Populated /etc with preset unit settings.
Feb 8 23:26:56.729067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:26:56.729080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:26:56.729094 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:26:56.729109 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 8 23:26:56.729121 systemd[1]: Stopped initrd-switch-root.service.
Feb 8 23:26:56.729133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 8 23:26:56.729146 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 8 23:26:56.729178 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 8 23:26:56.729191 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 8 23:26:56.729207 systemd[1]: Created slice system-getty.slice.
Feb 8 23:26:56.729219 systemd[1]: Created slice system-modprobe.slice.
Feb 8 23:26:56.729233 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 8 23:26:56.729245 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 8 23:26:56.729257 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 8 23:26:56.729269 systemd[1]: Created slice user.slice.
Feb 8 23:26:56.729281 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:26:56.729293 systemd[1]: Started systemd-ask-password-wall.path.
Feb 8 23:26:56.729305 systemd[1]: Set up automount boot.automount.
Feb 8 23:26:56.729319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 8 23:26:56.729331 systemd[1]: Stopped target initrd-switch-root.target.
Feb 8 23:26:56.729345 systemd[1]: Stopped target initrd-fs.target.
Feb 8 23:26:56.729357 systemd[1]: Stopped target initrd-root-fs.target.
Feb 8 23:26:56.729369 systemd[1]: Reached target integritysetup.target.
Feb 8 23:26:56.729381 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:26:56.729393 systemd[1]: Reached target remote-fs.target.
Feb 8 23:26:56.729405 systemd[1]: Reached target slices.target.
Feb 8 23:26:56.729419 systemd[1]: Reached target swap.target.
Feb 8 23:26:56.729431 systemd[1]: Reached target torcx.target.
Feb 8 23:26:56.729443 systemd[1]: Reached target veritysetup.target.
Feb 8 23:26:56.729456 systemd[1]: Listening on systemd-coredump.socket.
Feb 8 23:26:56.729468 systemd[1]: Listening on systemd-initctl.socket.
Feb 8 23:26:56.729480 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:26:56.729492 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:26:56.729504 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:26:56.729516 systemd[1]: Listening on systemd-userdbd.socket.
Feb 8 23:26:56.729528 systemd[1]: Mounting dev-hugepages.mount...
Feb 8 23:26:56.729541 systemd[1]: Mounting dev-mqueue.mount...
Feb 8 23:26:56.729554 systemd[1]: Mounting media.mount...
Feb 8 23:26:56.729566 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 8 23:26:56.729579 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 8 23:26:56.729590 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 8 23:26:56.729602 systemd[1]: Mounting tmp.mount...
Feb 8 23:26:56.729614 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 8 23:26:56.729626 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 8 23:26:56.729640 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:26:56.729652 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:26:56.729664 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:26:56.729676 systemd[1]: Starting modprobe@drm.service... Feb 8 23:26:56.729688 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:26:56.729700 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:26:56.729712 systemd[1]: Starting modprobe@loop.service... Feb 8 23:26:56.729724 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:26:56.729736 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:26:56.729750 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:26:56.729762 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:26:56.729774 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:26:56.729786 systemd[1]: Stopped systemd-journald.service. Feb 8 23:26:56.729798 systemd[1]: Starting systemd-journald.service... Feb 8 23:26:56.729809 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:26:56.729821 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:26:56.729833 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:26:56.729861 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:26:56.729874 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:26:56.729889 systemd[1]: Stopped verity-setup.service. Feb 8 23:26:56.729902 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:26:56.729914 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:26:56.729927 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:26:56.729939 systemd[1]: Mounted media.mount. Feb 8 23:26:56.729951 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:26:56.729964 systemd[1]: Mounted sys-kernel-tracing.mount. 
Feb 8 23:26:56.729975 systemd[1]: Mounted tmp.mount. Feb 8 23:26:56.729988 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:26:56.730001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:26:56.730013 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:26:56.730026 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:26:56.730039 systemd[1]: Finished modprobe@drm.service. Feb 8 23:26:56.730052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:26:56.730065 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:26:56.730077 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:26:56.730088 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:26:56.730101 systemd[1]: Reached target network-pre.target. Feb 8 23:26:56.730113 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:26:56.730125 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:26:56.730140 systemd-journald[907]: Journal started Feb 8 23:26:56.730182 systemd-journald[907]: Runtime Journal (/run/log/journal/20afcd259fab4e9084b4603b5b049e18) is 4.9M, max 39.5M, 34.5M free. 
Feb 8 23:26:52.935000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:26:53.062000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:26:53.062000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:26:53.064000 audit: BPF prog-id=10 op=LOAD Feb 8 23:26:53.064000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:26:53.064000 audit: BPF prog-id=11 op=LOAD Feb 8 23:26:53.064000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:26:53.253000 audit[845]: AVC avc: denied { associate } for pid=845 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:26:53.253000 audit[845]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=828 pid=845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:53.253000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:26:53.256000 audit[845]: AVC avc: denied { associate } for pid=845 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:26:53.256000 audit[845]: SYSCALL arch=c000003e 
syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=828 pid=845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:53.256000 audit: CWD cwd="/" Feb 8 23:26:53.256000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:53.256000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:53.256000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:26:56.518000 audit: BPF prog-id=12 op=LOAD Feb 8 23:26:56.518000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:26:56.518000 audit: BPF prog-id=13 op=LOAD Feb 8 23:26:56.518000 audit: BPF prog-id=14 op=LOAD Feb 8 23:26:56.518000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:26:56.518000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:26:56.519000 audit: BPF prog-id=15 op=LOAD Feb 8 23:26:56.519000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:26:56.520000 audit: BPF prog-id=16 op=LOAD Feb 8 23:26:56.520000 audit: BPF prog-id=17 op=LOAD Feb 8 23:26:56.520000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:26:56.520000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:26:56.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:56.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.533000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:26:56.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.638000 audit: BPF prog-id=18 op=LOAD Feb 8 23:26:56.639000 audit: BPF prog-id=19 op=LOAD Feb 8 23:26:56.639000 audit: BPF prog-id=20 op=LOAD Feb 8 23:26:56.639000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:26:56.639000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:26:56.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:56.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:56.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.726000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:26:56.726000 audit[907]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffe28c11df0 a2=4000 a3=7ffe28c11e8c items=0 ppid=1 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:56.726000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:26:53.249928 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:26:56.517672 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:26:53.250871 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:26:56.517685 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:26:56.736902 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 8 23:26:53.250895 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:26:56.522391 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:26:53.250928 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:26:53.250939 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:26:53.250973 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:26:53.250987 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:26:53.251195 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:26:53.251237 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:26:53.251252 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:26:53.252358 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:26:53.252400 /usr/lib/systemd/system-generators/torcx-generator[845]: 
time="2024-02-08T23:26:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:26:56.744940 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:26:56.744981 systemd[1]: Started systemd-journald.service. Feb 8 23:26:53.252422 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:26:53.252440 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:26:53.252460 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:26:53.252476 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:26:56.034770 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:56.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:56.035284 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:56.747167 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:26:56.035510 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:56.035913 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:26:56.036034 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:26:56.036172 /usr/lib/systemd/system-generators/torcx-generator[845]: time="2024-02-08T23:26:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:26:56.755055 systemd[1]: Finished modprobe@configfs.service. 
Feb 8 23:26:56.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.756134 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:26:56.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.759727 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:26:56.761219 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:26:56.762707 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:26:56.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.764564 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:26:56.765929 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:26:56.767536 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:26:56.774860 kernel: fuse: init (API version 7.34) Feb 8 23:26:56.778867 kernel: loop: module loaded Feb 8 23:26:56.779010 systemd-journald[907]: Time spent on flushing to /var/log/journal/20afcd259fab4e9084b4603b5b049e18 is 42.913ms for 1125 entries. Feb 8 23:26:56.779010 systemd-journald[907]: System Journal (/var/log/journal/20afcd259fab4e9084b4603b5b049e18) is 8.0M, max 584.8M, 576.8M free. Feb 8 23:26:56.844900 systemd-journald[907]: Received client request to flush runtime journal. 
Feb 8 23:26:56.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.782818 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:26:56.782988 systemd[1]: Finished modprobe@loop.service. Feb 8 23:26:56.783668 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:26:56.784836 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:26:56.785038 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:26:56.788345 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:26:56.794705 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:26:56.805026 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:26:56.846732 systemd[1]: Finished systemd-journal-flush.service. 
Feb 8 23:26:56.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.858474 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:26:56.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.860149 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:26:56.869637 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:26:56.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:56.871391 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:26:56.873485 udevadm[954]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:26:56.904931 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:26:56.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.403149 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:26:57.416622 kernel: kauditd_printk_skb: 100 callbacks suppressed Feb 8 23:26:57.417003 kernel: audit: type=1130 audit(1707434817.403:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:57.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.406000 audit: BPF prog-id=21 op=LOAD Feb 8 23:26:57.418621 systemd[1]: Starting systemd-udevd.service... Feb 8 23:26:57.415000 audit: BPF prog-id=22 op=LOAD Feb 8 23:26:57.420948 kernel: audit: type=1334 audit(1707434817.406:139): prog-id=21 op=LOAD Feb 8 23:26:57.420996 kernel: audit: type=1334 audit(1707434817.415:140): prog-id=22 op=LOAD Feb 8 23:26:57.416000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:26:57.416000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:26:57.424364 kernel: audit: type=1334 audit(1707434817.416:141): prog-id=7 op=UNLOAD Feb 8 23:26:57.424445 kernel: audit: type=1334 audit(1707434817.416:142): prog-id=8 op=UNLOAD Feb 8 23:26:57.465294 systemd-udevd[957]: Using default interface naming scheme 'v252'. Feb 8 23:26:57.503146 systemd[1]: Started systemd-udevd.service. Feb 8 23:26:57.520917 kernel: audit: type=1130 audit(1707434817.506:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.530333 kernel: audit: type=1334 audit(1707434817.525:144): prog-id=23 op=LOAD Feb 8 23:26:57.525000 audit: BPF prog-id=23 op=LOAD Feb 8 23:26:57.531218 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:26:57.553087 kernel: audit: type=1334 audit(1707434817.542:145): prog-id=24 op=LOAD Feb 8 23:26:57.553335 kernel: audit: type=1334 audit(1707434817.547:146): prog-id=25 op=LOAD Feb 8 23:26:57.542000 audit: BPF prog-id=24 op=LOAD Feb 8 23:26:57.558303 kernel: audit: type=1334 audit(1707434817.550:147): prog-id=26 op=LOAD Feb 8 23:26:57.547000 audit: BPF prog-id=25 op=LOAD Feb 8 23:26:57.550000 audit: BPF prog-id=26 op=LOAD Feb 8 23:26:57.559101 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:26:57.563801 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:26:57.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.598225 systemd[1]: Started systemd-userdbd.service. Feb 8 23:26:57.638889 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 8 23:26:57.663866 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:26:57.695416 systemd-networkd[973]: lo: Link UP Feb 8 23:26:57.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.695425 systemd-networkd[973]: lo: Gained carrier Feb 8 23:26:57.695823 systemd-networkd[973]: Enumeration completed Feb 8 23:26:57.695942 systemd[1]: Started systemd-networkd.service. Feb 8 23:26:57.695975 systemd-networkd[973]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:26:57.697947 systemd-networkd[973]: eth0: Link UP Feb 8 23:26:57.697956 systemd-networkd[973]: eth0: Gained carrier Feb 8 23:26:57.677000 audit[971]: AVC avc: denied { confidentiality } for pid=971 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:26:57.677000 audit[971]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f6ffc68ce0 a1=32194 a2=7ffbbe991bc5 a3=5 items=108 ppid=957 pid=971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:57.677000 audit: CWD cwd="/" Feb 8 23:26:57.677000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=1 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=2 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=3 name=(null) inode=13917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=4 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=5 name=(null) inode=13918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=6 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=7 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=8 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=9 name=(null) inode=13920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=10 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=11 name=(null) inode=13921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=12 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=13 name=(null) inode=13922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=14 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
8 23:26:57.677000 audit: PATH item=15 name=(null) inode=13923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=16 name=(null) inode=13919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=17 name=(null) inode=13924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=18 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=19 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=20 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=21 name=(null) inode=13926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=22 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=23 name=(null) inode=13927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=24 name=(null) 
inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=25 name=(null) inode=13928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=26 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=27 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=28 name=(null) inode=13925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=29 name=(null) inode=13930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=30 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=31 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=32 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=33 name=(null) inode=13932 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=34 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=35 name=(null) inode=13933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=36 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=37 name=(null) inode=13934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=38 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=39 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=40 name=(null) inode=13931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=41 name=(null) inode=13936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=42 name=(null) inode=13916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=43 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=44 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=45 name=(null) inode=13938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=46 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=47 name=(null) inode=13939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=48 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=49 name=(null) inode=13940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=50 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=51 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=52 name=(null) inode=13937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=53 name=(null) inode=13942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=55 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=56 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=57 name=(null) inode=13944 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=58 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=59 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=60 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: 
PATH item=61 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=62 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=63 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=64 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=65 name=(null) inode=13948 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=66 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=67 name=(null) inode=13949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=68 name=(null) inode=13946 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=69 name=(null) inode=13950 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=70 name=(null) inode=13946 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=71 name=(null) inode=13951 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=72 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=73 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=74 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=75 name=(null) inode=13953 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=76 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=77 name=(null) inode=13954 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=78 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=79 name=(null) inode=13955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=80 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=81 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=82 name=(null) inode=13952 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=83 name=(null) inode=13957 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=84 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=85 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=86 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=87 name=(null) inode=13959 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=88 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=89 name=(null) inode=13960 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=90 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=91 name=(null) inode=13961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=92 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=93 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=94 name=(null) inode=13958 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=95 name=(null) inode=13963 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=96 name=(null) inode=13943 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=97 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 8 23:26:57.677000 audit: PATH item=98 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=99 name=(null) inode=13965 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=100 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=101 name=(null) inode=13966 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=102 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=103 name=(null) inode=13967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=104 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=105 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=106 name=(null) inode=13964 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PATH item=107 
name=(null) inode=13969 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:26:57.677000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:26:57.710903 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:26:57.712122 systemd-networkd[973]: eth0: DHCPv4 address 172.24.4.40/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:26:57.717872 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 8 23:26:57.721867 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:26:57.723558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:26:57.766235 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:26:57.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:57.768037 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:26:57.980787 lvm[986]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:26:58.022801 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:26:58.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.024296 systemd[1]: Reached target cryptsetup.target. Feb 8 23:26:58.027778 systemd[1]: Starting lvm2-activation.service... Feb 8 23:26:58.037267 lvm[987]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:26:58.080057 systemd[1]: Finished lvm2-activation.service. 
Feb 8 23:26:58.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.081474 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:26:58.082592 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:26:58.082654 systemd[1]: Reached target local-fs.target. Feb 8 23:26:58.083767 systemd[1]: Reached target machines.target. Feb 8 23:26:58.087637 systemd[1]: Starting ldconfig.service... Feb 8 23:26:58.090307 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:26:58.090410 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:26:58.092721 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:26:58.096939 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:26:58.100720 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:26:58.102289 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:26:58.102397 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:26:58.107135 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:26:58.130594 systemd[1]: boot.automount: Got automount request for /boot, triggered by 989 (bootctl) Feb 8 23:26:58.133232 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:26:58.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:26:58.182766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:26:58.485474 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:26:58.507028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:26:58.508009 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:26:58.508305 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:26:58.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.534022 systemd-tmpfiles[992]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:26:58.687215 systemd-fsck[999]: fsck.fat 4.2 (2021-01-31) Feb 8 23:26:58.687215 systemd-fsck[999]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:26:58.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.697035 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:26:58.701401 systemd[1]: Mounting boot.mount... Feb 8 23:26:58.722003 systemd[1]: Mounted boot.mount. Feb 8 23:26:58.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.751143 systemd[1]: Finished systemd-boot-update.service. 
Feb 8 23:26:58.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.816000 audit: BPF prog-id=27 op=LOAD Feb 8 23:26:58.809030 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:26:58.810935 systemd[1]: Starting audit-rules.service... Feb 8 23:26:58.812474 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:26:58.818000 audit: BPF prog-id=28 op=LOAD Feb 8 23:26:58.814108 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:26:58.817788 systemd[1]: Starting systemd-resolved.service... Feb 8 23:26:58.821052 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:26:58.825072 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:26:58.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.843704 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:26:58.844298 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:26:58.857000 audit[1007]: SYSTEM_BOOT pid=1007 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.867961 systemd[1]: Finished systemd-update-utmp.service. 
Feb 8 23:26:58.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:26:58.873708 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:26:58.929200 augenrules[1022]: No rules Feb 8 23:26:58.928000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:26:58.928000 audit[1022]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc664d740 a2=420 a3=0 items=0 ppid=1002 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:26:58.928000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:26:58.930013 systemd[1]: Finished audit-rules.service. Feb 8 23:26:58.930754 systemd-resolved[1005]: Positive Trust Anchors: Feb 8 23:26:58.930770 systemd-resolved[1005]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:26:58.930807 systemd-resolved[1005]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:26:58.936975 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:26:58.937531 systemd[1]: Reached target time-set.target. 
Feb 8 23:26:59.552859 systemd-timesyncd[1006]: Contacted time server 92.243.6.5:123 (0.flatcar.pool.ntp.org). Feb 8 23:26:59.553200 systemd-timesyncd[1006]: Initial clock synchronization to Thu 2024-02-08 23:26:59.552698 UTC. Feb 8 23:26:59.553634 systemd-resolved[1005]: Using system hostname 'ci-3510-3-2-4-bfb6381473.novalocal'. Feb 8 23:26:59.555465 systemd[1]: Started systemd-resolved.service. Feb 8 23:26:59.555984 systemd[1]: Reached target network.target. Feb 8 23:26:59.556428 systemd[1]: Reached target nss-lookup.target. Feb 8 23:26:59.753504 systemd-networkd[973]: eth0: Gained IPv6LL Feb 8 23:26:59.770131 ldconfig[988]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:26:59.782403 systemd[1]: Finished ldconfig.service. Feb 8 23:26:59.784305 systemd[1]: Starting systemd-update-done.service... Feb 8 23:26:59.797725 systemd[1]: Finished systemd-update-done.service. Feb 8 23:26:59.798285 systemd[1]: Reached target sysinit.target. Feb 8 23:26:59.798817 systemd[1]: Started motdgen.path. Feb 8 23:26:59.799243 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:26:59.799912 systemd[1]: Started logrotate.timer. Feb 8 23:26:59.800493 systemd[1]: Started mdadm.timer. Feb 8 23:26:59.800887 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:26:59.801348 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:26:59.801392 systemd[1]: Reached target paths.target. Feb 8 23:26:59.801791 systemd[1]: Reached target timers.target. Feb 8 23:26:59.802477 systemd[1]: Listening on dbus.socket. Feb 8 23:26:59.807517 systemd[1]: Starting docker.socket... Feb 8 23:26:59.811822 systemd[1]: Listening on sshd.socket. 
Feb 8 23:26:59.812445 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:26:59.812902 systemd[1]: Listening on docker.socket. Feb 8 23:26:59.813462 systemd[1]: Reached target sockets.target. Feb 8 23:26:59.813950 systemd[1]: Reached target basic.target. Feb 8 23:26:59.814481 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:26:59.814594 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:26:59.815720 systemd[1]: Starting containerd.service... Feb 8 23:26:59.817538 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 8 23:26:59.822676 systemd[1]: Starting dbus.service... Feb 8 23:26:59.827513 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:26:59.835870 systemd[1]: Starting extend-filesystems.service... Feb 8 23:26:59.837631 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:26:59.842853 systemd[1]: Starting motdgen.service... Feb 8 23:26:59.850461 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:26:59.853918 systemd[1]: Starting prepare-critools.service... Feb 8 23:26:59.859691 systemd[1]: Starting prepare-helm.service... Feb 8 23:26:59.865607 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:26:59.869402 dbus-daemon[1035]: [system] SELinux support is enabled Feb 8 23:26:59.871935 systemd[1]: Starting sshd-keygen.service... Feb 8 23:26:59.875246 systemd[1]: Starting systemd-logind.service... Feb 8 23:26:59.876049 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Feb 8 23:26:59.876136 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:26:59.876585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:26:59.878312 systemd[1]: Starting update-engine.service... Feb 8 23:26:59.881555 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:26:59.882875 systemd[1]: Started dbus.service. Feb 8 23:26:59.886652 jq[1054]: true Feb 8 23:26:59.891689 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:26:59.891733 systemd[1]: Reached target system-config.target. Feb 8 23:26:59.893056 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:26:59.893083 systemd[1]: Reached target user-config.target. Feb 8 23:26:59.900334 jq[1058]: true Feb 8 23:26:59.902844 systemd[1]: Created slice system-sshd.slice. Feb 8 23:26:59.904667 extend-filesystems[1037]: Found vda Feb 8 23:26:59.905577 tar[1057]: linux-amd64/helm Feb 8 23:26:59.910957 tar[1062]: ./ Feb 8 23:26:59.911170 tar[1062]: ./loopback Feb 8 23:26:59.918179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:26:59.918364 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 8 23:26:59.926135 extend-filesystems[1037]: Found vda1 Feb 8 23:26:59.927437 tar[1056]: crictl Feb 8 23:26:59.927696 extend-filesystems[1037]: Found vda2 Feb 8 23:26:59.928220 extend-filesystems[1037]: Found vda3 Feb 8 23:26:59.930198 extend-filesystems[1037]: Found usr Feb 8 23:26:59.930758 extend-filesystems[1037]: Found vda4 Feb 8 23:26:59.932141 extend-filesystems[1037]: Found vda6 Feb 8 23:26:59.933124 extend-filesystems[1037]: Found vda7 Feb 8 23:26:59.933124 extend-filesystems[1037]: Found vda9 Feb 8 23:26:59.933124 extend-filesystems[1037]: Checking size of /dev/vda9 Feb 8 23:26:59.936243 jq[1036]: false Feb 8 23:26:59.936111 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:26:59.936304 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:26:59.946088 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:26:59.946312 systemd[1]: Finished motdgen.service. Feb 8 23:26:59.965415 extend-filesystems[1037]: Resized partition /dev/vda9 Feb 8 23:26:59.982381 extend-filesystems[1082]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:27:00.004297 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 8 23:27:00.025978 update_engine[1053]: I0208 23:27:00.024904 1053 main.cc:92] Flatcar Update Engine starting Feb 8 23:27:00.032681 systemd[1]: Started update-engine.service. Feb 8 23:27:00.035981 systemd[1]: Started locksmithd.service. 
Feb 8 23:27:00.037076 update_engine[1053]: I0208 23:27:00.037034 1053 update_check_scheduler.cc:74] Next update check in 4m34s Feb 8 23:27:00.078568 env[1068]: time="2024-02-08T23:27:00.048536118Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:27:00.105292 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 8 23:27:00.125932 coreos-metadata[1032]: Feb 08 23:27:00.125 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 8 23:27:00.169466 env[1068]: time="2024-02-08T23:27:00.127507338Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:27:00.173706 env[1068]: time="2024-02-08T23:27:00.172178429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:27:00.173758 bash[1093]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:27:00.173847 extend-filesystems[1082]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:27:00.173847 extend-filesystems[1082]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 8 23:27:00.173847 extend-filesystems[1082]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 8 23:27:00.185638 extend-filesystems[1037]: Resized filesystem in /dev/vda9 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.177022363Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.177076976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.177433615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.178102640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.178128879Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.178143526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.178445573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.178836456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.179004230Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:27:00.188921 env[1068]: time="2024-02-08T23:27:00.179024098Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 8 23:27:00.175063 systemd-logind[1052]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.179095442Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.179112023Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.187518619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.187553885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.187591806Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.187632924Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188226026Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188308160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188327947Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188345250Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188361981Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188395884Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188412706Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.191032 env[1068]: time="2024-02-08T23:27:00.188427203Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:27:00.175082 systemd-logind[1052]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.188553560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.188658657Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189014394Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189045373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189060150Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189125503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189142024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189220401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189310159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189328564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189341989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189354853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189367757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189386953Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:27:00.192213 env[1068]: time="2024-02-08T23:27:00.189520293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.175246 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189538177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189552614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189565909Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189582961Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189595925Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189616784Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:27:00.193176 env[1068]: time="2024-02-08T23:27:00.189655858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:27:00.175457 systemd[1]: Finished extend-filesystems.service. Feb 8 23:27:00.176698 systemd-logind[1052]: New seat seat0. 
Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.189875710Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.189964126Z" level=info msg="Connect containerd service" Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.189996957Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.190682423Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.190976575Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:27:00.193468 env[1068]: time="2024-02-08T23:27:00.191068026Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:27:00.180928 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201086596Z" level=info msg="Start subscribing containerd event" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201129737Z" level=info msg="containerd successfully booted in 0.158249s" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201168750Z" level=info msg="Start recovering state" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201461970Z" level=info msg="Start event monitor" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201497577Z" level=info msg="Start snapshots syncer" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201514619Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:27:00.203210 env[1068]: time="2024-02-08T23:27:00.201532663Z" level=info msg="Start streaming server" Feb 8 23:27:00.185576 systemd[1]: Started systemd-logind.service. 
Feb 8 23:27:00.191167 systemd[1]: Started containerd.service. Feb 8 23:27:00.208832 tar[1062]: ./bandwidth Feb 8 23:27:00.309085 tar[1062]: ./ptp Feb 8 23:27:00.340785 coreos-metadata[1032]: Feb 08 23:27:00.340 INFO Fetch successful Feb 8 23:27:00.340785 coreos-metadata[1032]: Feb 08 23:27:00.340 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 8 23:27:00.356099 coreos-metadata[1032]: Feb 08 23:27:00.355 INFO Fetch successful Feb 8 23:27:00.362766 unknown[1032]: wrote ssh authorized keys file for user: core Feb 8 23:27:00.392962 update-ssh-keys[1100]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:27:00.391934 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 8 23:27:00.443072 tar[1062]: ./vlan Feb 8 23:27:00.551020 tar[1062]: ./host-device Feb 8 23:27:00.678688 tar[1062]: ./tuning Feb 8 23:27:00.684571 tar[1057]: linux-amd64/LICENSE Feb 8 23:27:00.685020 tar[1057]: linux-amd64/README.md Feb 8 23:27:00.697325 systemd[1]: Finished prepare-helm.service. Feb 8 23:27:00.722974 tar[1062]: ./vrf Feb 8 23:27:00.759131 tar[1062]: ./sbr Feb 8 23:27:00.793956 tar[1062]: ./tap Feb 8 23:27:00.834924 tar[1062]: ./dhcp Feb 8 23:27:00.969458 tar[1062]: ./static Feb 8 23:27:01.028577 tar[1062]: ./firewall Feb 8 23:27:01.082386 tar[1062]: ./macvlan Feb 8 23:27:01.107628 systemd[1]: Finished prepare-critools.service. Feb 8 23:27:01.126724 tar[1062]: ./dummy Feb 8 23:27:01.168040 tar[1062]: ./bridge Feb 8 23:27:01.212353 tar[1062]: ./ipvlan Feb 8 23:27:01.253101 tar[1062]: ./portmap Feb 8 23:27:01.290861 tar[1062]: ./host-local Feb 8 23:27:01.336797 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:27:01.339785 locksmithd[1095]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:27:02.371127 sshd_keygen[1069]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:27:02.394983 systemd[1]: Finished sshd-keygen.service. 
Feb 8 23:27:02.397161 systemd[1]: Starting issuegen.service... Feb 8 23:27:02.398773 systemd[1]: Started sshd@0-172.24.4.40:22-172.24.4.1:40020.service. Feb 8 23:27:02.408455 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:27:02.408617 systemd[1]: Finished issuegen.service. Feb 8 23:27:02.410547 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:27:02.418868 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:27:02.420846 systemd[1]: Started getty@tty1.service. Feb 8 23:27:02.422526 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:27:02.423324 systemd[1]: Reached target getty.target. Feb 8 23:27:02.423862 systemd[1]: Reached target multi-user.target. Feb 8 23:27:02.425610 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:27:02.434592 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:27:02.434854 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:27:02.435823 systemd[1]: Startup finished in 976ms (kernel) + 13.044s (initrd) + 9.018s (userspace) = 23.039s. Feb 8 23:27:03.640831 sshd[1118]: Accepted publickey for core from 172.24.4.1 port 40020 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:27:03.645643 sshd[1118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:03.672403 systemd-logind[1052]: New session 1 of user core. Feb 8 23:27:03.676152 systemd[1]: Created slice user-500.slice. Feb 8 23:27:03.678887 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:27:03.699565 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:27:03.703410 systemd[1]: Starting user@500.service... Feb 8 23:27:03.711078 (systemd)[1127]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:03.837588 systemd[1127]: Queued start job for default target default.target. Feb 8 23:27:03.838172 systemd[1127]: Reached target paths.target. 
Feb 8 23:27:03.838199 systemd[1127]: Reached target sockets.target. Feb 8 23:27:03.838214 systemd[1127]: Reached target timers.target. Feb 8 23:27:03.838228 systemd[1127]: Reached target basic.target. Feb 8 23:27:03.838292 systemd[1127]: Reached target default.target. Feb 8 23:27:03.838320 systemd[1127]: Startup finished in 113ms. Feb 8 23:27:03.839368 systemd[1]: Started user@500.service. Feb 8 23:27:03.840419 systemd[1]: Started session-1.scope. Feb 8 23:27:04.344535 systemd[1]: Started sshd@1-172.24.4.40:22-172.24.4.1:40028.service. Feb 8 23:27:05.670201 sshd[1136]: Accepted publickey for core from 172.24.4.1 port 40028 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:27:05.672924 sshd[1136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:05.683721 systemd-logind[1052]: New session 2 of user core. Feb 8 23:27:05.684737 systemd[1]: Started session-2.scope. Feb 8 23:27:06.542885 sshd[1136]: pam_unix(sshd:session): session closed for user core Feb 8 23:27:06.550970 systemd[1]: sshd@1-172.24.4.40:22-172.24.4.1:40028.service: Deactivated successfully. Feb 8 23:27:06.552800 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:27:06.556572 systemd-logind[1052]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:27:06.559812 systemd[1]: Started sshd@2-172.24.4.40:22-172.24.4.1:38900.service. Feb 8 23:27:06.564525 systemd-logind[1052]: Removed session 2. Feb 8 23:27:08.079998 sshd[1142]: Accepted publickey for core from 172.24.4.1 port 38900 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:27:08.082706 sshd[1142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:08.091865 systemd-logind[1052]: New session 3 of user core. Feb 8 23:27:08.092103 systemd[1]: Started session-3.scope. 
Feb 8 23:27:08.563641 sshd[1142]: pam_unix(sshd:session): session closed for user core Feb 8 23:27:08.570765 systemd[1]: Started sshd@3-172.24.4.40:22-172.24.4.1:38916.service. Feb 8 23:27:08.571918 systemd[1]: sshd@2-172.24.4.40:22-172.24.4.1:38900.service: Deactivated successfully. Feb 8 23:27:08.573249 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:27:08.576153 systemd-logind[1052]: Session 3 logged out. Waiting for processes to exit. Feb 8 23:27:08.578598 systemd-logind[1052]: Removed session 3. Feb 8 23:27:09.679912 sshd[1147]: Accepted publickey for core from 172.24.4.1 port 38916 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:27:09.683508 sshd[1147]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:09.695139 systemd-logind[1052]: New session 4 of user core. Feb 8 23:27:09.696083 systemd[1]: Started session-4.scope. Feb 8 23:27:10.307140 sshd[1147]: pam_unix(sshd:session): session closed for user core Feb 8 23:27:10.314847 systemd[1]: Started sshd@4-172.24.4.40:22-172.24.4.1:38920.service. Feb 8 23:27:10.320943 systemd[1]: sshd@3-172.24.4.40:22-172.24.4.1:38916.service: Deactivated successfully. Feb 8 23:27:10.323080 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:27:10.328062 systemd-logind[1052]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:27:10.332102 systemd-logind[1052]: Removed session 4. Feb 8 23:27:11.475907 sshd[1153]: Accepted publickey for core from 172.24.4.1 port 38920 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:27:11.478681 sshd[1153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:27:11.488885 systemd-logind[1052]: New session 5 of user core. Feb 8 23:27:11.489790 systemd[1]: Started session-5.scope. 
Feb 8 23:27:11.833236 sudo[1157]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:27:11.835118 sudo[1157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:27:12.857064 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:27:12.870978 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:27:12.871843 systemd[1]: Reached target network-online.target. Feb 8 23:27:12.874965 systemd[1]: Starting docker.service... Feb 8 23:27:12.966275 env[1173]: time="2024-02-08T23:27:12.966173834Z" level=info msg="Starting up" Feb 8 23:27:12.969482 env[1173]: time="2024-02-08T23:27:12.969430973Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:27:12.969668 env[1173]: time="2024-02-08T23:27:12.969631789Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:27:12.969845 env[1173]: time="2024-02-08T23:27:12.969803621Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:27:12.970003 env[1173]: time="2024-02-08T23:27:12.969970444Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:27:12.972689 env[1173]: time="2024-02-08T23:27:12.972646383Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:27:12.972878 env[1173]: time="2024-02-08T23:27:12.972844705Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:27:12.973043 env[1173]: time="2024-02-08T23:27:12.973000818Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:27:12.973223 env[1173]: time="2024-02-08T23:27:12.973185344Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:27:12.989565 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport206863674-merged.mount: 
Deactivated successfully. Feb 8 23:27:13.042364 env[1173]: time="2024-02-08T23:27:13.042299087Z" level=info msg="Loading containers: start." Feb 8 23:27:13.225318 kernel: Initializing XFRM netlink socket Feb 8 23:27:13.284468 env[1173]: time="2024-02-08T23:27:13.284387964Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:27:13.366123 systemd-networkd[973]: docker0: Link UP Feb 8 23:27:13.382114 env[1173]: time="2024-02-08T23:27:13.382050100Z" level=info msg="Loading containers: done." Feb 8 23:27:13.401653 env[1173]: time="2024-02-08T23:27:13.401577483Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:27:13.401957 env[1173]: time="2024-02-08T23:27:13.401909386Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:27:13.402159 env[1173]: time="2024-02-08T23:27:13.402117526Z" level=info msg="Daemon has completed initialization" Feb 8 23:27:13.433143 systemd[1]: Started docker.service. Feb 8 23:27:13.453884 env[1173]: time="2024-02-08T23:27:13.453780510Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:27:13.487195 systemd[1]: Reloading. 
Feb 8 23:27:13.600170 /usr/lib/systemd/system-generators/torcx-generator[1311]: time="2024-02-08T23:27:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:27:13.600835 /usr/lib/systemd/system-generators/torcx-generator[1311]: time="2024-02-08T23:27:13Z" level=info msg="torcx already run" Feb 8 23:27:13.677881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:27:13.677901 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:27:13.699919 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:27:13.784584 systemd[1]: Started kubelet.service. Feb 8 23:27:13.868314 kubelet[1357]: E0208 23:27:13.868179 1357 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:27:13.870480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:27:13.870627 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:27:14.828234 env[1068]: time="2024-02-08T23:27:14.828122869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 8 23:27:15.587785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091210763.mount: Deactivated successfully. 
Feb 8 23:27:18.291056 env[1068]: time="2024-02-08T23:27:18.290469850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:18.294923 env[1068]: time="2024-02-08T23:27:18.294864001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:18.298115 env[1068]: time="2024-02-08T23:27:18.298062279Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:18.301372 env[1068]: time="2024-02-08T23:27:18.301240139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:18.302691 env[1068]: time="2024-02-08T23:27:18.302156388Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 8 23:27:18.318402 env[1068]: time="2024-02-08T23:27:18.318367421Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 8 23:27:21.352506 env[1068]: time="2024-02-08T23:27:21.352344845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:21.354875 env[1068]: time="2024-02-08T23:27:21.354823955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 8 23:27:21.357655 env[1068]: time="2024-02-08T23:27:21.357631691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:21.360653 env[1068]: time="2024-02-08T23:27:21.360631978Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:21.362487 env[1068]: time="2024-02-08T23:27:21.362429179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 8 23:27:21.377170 env[1068]: time="2024-02-08T23:27:21.377137544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 8 23:27:23.234592 env[1068]: time="2024-02-08T23:27:23.234416180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:23.238648 env[1068]: time="2024-02-08T23:27:23.238512031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:23.250721 env[1068]: time="2024-02-08T23:27:23.250587769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:23.255377 env[1068]: time="2024-02-08T23:27:23.255319524Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:23.257510 env[1068]: time="2024-02-08T23:27:23.257434621Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 8 23:27:23.284102 env[1068]: time="2024-02-08T23:27:23.284011305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 8 23:27:24.111379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:27:24.111616 systemd[1]: Stopped kubelet.service. Feb 8 23:27:24.114527 systemd[1]: Started kubelet.service. Feb 8 23:27:24.213539 kubelet[1391]: E0208 23:27:24.213486 1391 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:27:24.216769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:27:24.216902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:27:24.699707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697732758.mount: Deactivated successfully. 
Feb 8 23:27:25.538683 env[1068]: time="2024-02-08T23:27:25.538575300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:25.541636 env[1068]: time="2024-02-08T23:27:25.541592198Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:25.543965 env[1068]: time="2024-02-08T23:27:25.543925444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:25.546243 env[1068]: time="2024-02-08T23:27:25.546155216Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:25.546560 env[1068]: time="2024-02-08T23:27:25.546472771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 8 23:27:25.566007 env[1068]: time="2024-02-08T23:27:25.565959278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:27:26.165995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245335825.mount: Deactivated successfully. 
Feb 8 23:27:26.177575 env[1068]: time="2024-02-08T23:27:26.177432346Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:26.183482 env[1068]: time="2024-02-08T23:27:26.183401391Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:26.190295 env[1068]: time="2024-02-08T23:27:26.190159465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:26.195421 env[1068]: time="2024-02-08T23:27:26.195339791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:26.196869 env[1068]: time="2024-02-08T23:27:26.196812643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:27:26.221907 env[1068]: time="2024-02-08T23:27:26.221828110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 8 23:27:26.885468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415988277.mount: Deactivated successfully. 
Feb 8 23:27:33.647646 env[1068]: time="2024-02-08T23:27:33.647592954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:33.651409 env[1068]: time="2024-02-08T23:27:33.651382924Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:33.654171 env[1068]: time="2024-02-08T23:27:33.654151069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:33.658398 env[1068]: time="2024-02-08T23:27:33.658375178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:33.660876 env[1068]: time="2024-02-08T23:27:33.659676259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 8 23:27:33.681303 env[1068]: time="2024-02-08T23:27:33.681270232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 8 23:27:34.279544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:27:34.279713 systemd[1]: Stopped kubelet.service. Feb 8 23:27:34.283607 systemd[1]: Started kubelet.service. Feb 8 23:27:34.296947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount961417829.mount: Deactivated successfully. 
Feb 8 23:27:34.382556 kubelet[1413]: E0208 23:27:34.382496 1413 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 8 23:27:34.384369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:27:34.384510 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:27:35.709991 env[1068]: time="2024-02-08T23:27:35.709865853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:35.712544 env[1068]: time="2024-02-08T23:27:35.712484665Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:35.714611 env[1068]: time="2024-02-08T23:27:35.714538422Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:35.716872 env[1068]: time="2024-02-08T23:27:35.716816842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:27:35.718101 env[1068]: time="2024-02-08T23:27:35.717963081Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 8 23:27:39.610786 systemd[1]: Stopped kubelet.service. Feb 8 23:27:39.636704 systemd[1]: Reloading. 
Feb 8 23:27:39.773080 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2024-02-08T23:27:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:27:39.777440 /usr/lib/systemd/system-generators/torcx-generator[1508]: time="2024-02-08T23:27:39Z" level=info msg="torcx already run" Feb 8 23:27:39.866684 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:27:39.866924 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:27:39.890394 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:27:39.996829 systemd[1]: Started kubelet.service. Feb 8 23:27:40.056575 kubelet[1555]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:27:40.056906 kubelet[1555]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:27:40.056962 kubelet[1555]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:27:40.057089 kubelet[1555]: I0208 23:27:40.057060 1555 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:27:40.480286 kubelet[1555]: I0208 23:27:40.480212 1555 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 8 23:27:40.480450 kubelet[1555]: I0208 23:27:40.480337 1555 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:27:40.480869 kubelet[1555]: I0208 23:27:40.480831 1555 server.go:895] "Client rotation is on, will bootstrap in background" Feb 8 23:27:40.490341 kubelet[1555]: I0208 23:27:40.490319 1555 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:27:40.490661 kubelet[1555]: E0208 23:27:40.490646 1555 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.506682 kubelet[1555]: I0208 23:27:40.506653 1555 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:27:40.507081 kubelet[1555]: I0208 23:27:40.507068 1555 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:27:40.507471 kubelet[1555]: I0208 23:27:40.507453 1555 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 8 23:27:40.507624 kubelet[1555]: I0208 23:27:40.507612 1555 topology_manager.go:138] "Creating topology manager with none policy" Feb 8 23:27:40.507686 kubelet[1555]: I0208 23:27:40.507677 1555 container_manager_linux.go:301] "Creating device plugin manager" Feb 8 23:27:40.507880 kubelet[1555]: I0208 
23:27:40.507868 1555 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:27:40.508522 kubelet[1555]: I0208 23:27:40.508423 1555 kubelet.go:393] "Attempting to sync node with API server" Feb 8 23:27:40.509221 kubelet[1555]: I0208 23:27:40.509209 1555 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:27:40.509345 kubelet[1555]: I0208 23:27:40.509335 1555 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:27:40.509415 kubelet[1555]: I0208 23:27:40.509406 1555 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:27:40.513570 kubelet[1555]: I0208 23:27:40.513531 1555 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:27:40.514136 kubelet[1555]: W0208 23:27:40.514100 1555 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:27:40.515610 kubelet[1555]: I0208 23:27:40.515572 1555 server.go:1232] "Started kubelet" Feb 8 23:27:40.515937 kubelet[1555]: W0208 23:27:40.515841 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-bfb6381473.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.515993 kubelet[1555]: E0208 23:27:40.515976 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-bfb6381473.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.516728 kubelet[1555]: W0208 23:27:40.516687 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get 
"https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.516824 kubelet[1555]: E0208 23:27:40.516812 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.517012 kubelet[1555]: I0208 23:27:40.516997 1555 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:27:40.517331 kubelet[1555]: I0208 23:27:40.517293 1555 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:27:40.518447 kubelet[1555]: I0208 23:27:40.518406 1555 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 8 23:27:40.520395 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 8 23:27:40.520459 kubelet[1555]: E0208 23:27:40.519126 1555 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-4-bfb6381473.novalocal.17b206f592258eed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-4-bfb6381473.novalocal", UID:"ci-3510-3-2-4-bfb6381473.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-4-bfb6381473.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 27, 40, 515528429, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 27, 40, 515528429, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-2-4-bfb6381473.novalocal"}': 'Post "https://172.24.4.40:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.40:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:27:40.520677 kubelet[1555]: I0208 23:27:40.520662 1555 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:27:40.522164 kubelet[1555]: I0208 23:27:40.522118 1555 server.go:462] "Adding debug handlers to kubelet server" Feb 8 23:27:40.524981 kubelet[1555]: E0208 23:27:40.524944 1555 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:27:40.525041 kubelet[1555]: E0208 23:27:40.524998 1555 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:27:40.529321 kubelet[1555]: I0208 23:27:40.529306 1555 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 8 23:27:40.530661 kubelet[1555]: E0208 23:27:40.530622 1555 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-bfb6381473.novalocal?timeout=10s\": dial tcp 172.24.4.40:6443: connect: connection refused" interval="200ms" Feb 8 23:27:40.532160 kubelet[1555]: I0208 23:27:40.532131 1555 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:27:40.532546 kubelet[1555]: I0208 23:27:40.532535 1555 reconciler_new.go:29] "Reconciler: start to sync state" Feb 8 23:27:40.550094 kubelet[1555]: W0208 23:27:40.550043 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.550301 kubelet[1555]: E0208 23:27:40.550286 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.561824 kubelet[1555]: I0208 23:27:40.561799 1555 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 8 23:27:40.563302 kubelet[1555]: I0208 23:27:40.563289 1555 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 8 23:27:40.563404 kubelet[1555]: I0208 23:27:40.563393 1555 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 8 23:27:40.563478 kubelet[1555]: I0208 23:27:40.563468 1555 kubelet.go:2303] "Starting kubelet main sync loop" Feb 8 23:27:40.563608 kubelet[1555]: E0208 23:27:40.563596 1555 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:27:40.574097 kubelet[1555]: W0208 23:27:40.574076 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.574221 kubelet[1555]: E0208 23:27:40.574210 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused Feb 8 23:27:40.580685 kubelet[1555]: I0208 23:27:40.580671 1555 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:27:40.580778 kubelet[1555]: I0208 23:27:40.580768 1555 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:27:40.580862 kubelet[1555]: I0208 23:27:40.580852 1555 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:27:40.586852 kubelet[1555]: I0208 23:27:40.586816 1555 policy_none.go:49] "None policy: Start" Feb 8 23:27:40.587911 kubelet[1555]: I0208 23:27:40.587878 1555 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:27:40.587911 kubelet[1555]: I0208 23:27:40.587909 1555 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:27:40.594310 systemd[1]: Created slice kubepods.slice. 
Feb 8 23:27:40.599089 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:27:40.606849 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:27:40.608509 kubelet[1555]: I0208 23:27:40.608464 1555 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:27:40.608702 kubelet[1555]: I0208 23:27:40.608680 1555 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:27:40.610102 kubelet[1555]: E0208 23:27:40.610076 1555 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-4-bfb6381473.novalocal\" not found" Feb 8 23:27:40.632915 kubelet[1555]: I0208 23:27:40.632873 1555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.633505 kubelet[1555]: E0208 23:27:40.633453 1555 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.40:6443/api/v1/nodes\": dial tcp 172.24.4.40:6443: connect: connection refused" node="ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.664537 kubelet[1555]: I0208 23:27:40.664488 1555 topology_manager.go:215] "Topology Admit Handler" podUID="fe067416ae10d42c6f1b8f1418192472" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.666295 kubelet[1555]: I0208 23:27:40.666226 1555 topology_manager.go:215] "Topology Admit Handler" podUID="34bafd4af05481d58fc0dd86c41e7f47" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.668718 kubelet[1555]: I0208 23:27:40.668622 1555 topology_manager.go:215] "Topology Admit Handler" podUID="10fadfce44d5fa27dba6fd086d51c1f1" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.675386 systemd[1]: Created slice kubepods-burstable-podfe067416ae10d42c6f1b8f1418192472.slice. 
Feb 8 23:27:40.688557 systemd[1]: Created slice kubepods-burstable-pod10fadfce44d5fa27dba6fd086d51c1f1.slice. Feb 8 23:27:40.697800 systemd[1]: Created slice kubepods-burstable-pod34bafd4af05481d58fc0dd86c41e7f47.slice. Feb 8 23:27:40.732026 kubelet[1555]: E0208 23:27:40.731818 1555 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-bfb6381473.novalocal?timeout=10s\": dial tcp 172.24.4.40:6443: connect: connection refused" interval="400ms" Feb 8 23:27:40.774324 kubelet[1555]: E0208 23:27:40.773902 1555 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-4-bfb6381473.novalocal.17b206f592258eed", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-4-bfb6381473.novalocal", UID:"ci-3510-3-2-4-bfb6381473.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-4-bfb6381473.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 27, 40, 515528429, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 27, 40, 515528429, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-2-4-bfb6381473.novalocal"}': 'Post 
"https://172.24.4.40:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.40:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:27:40.835052 kubelet[1555]: I0208 23:27:40.834557 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.835052 kubelet[1555]: I0208 23:27:40.834764 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.835052 kubelet[1555]: I0208 23:27:40.834890 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.835052 kubelet[1555]: I0208 23:27:40.835000 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" Feb 8 23:27:40.835827 kubelet[1555]: I0208 
23:27:40.835759 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.837425 kubelet[1555]: I0208 23:27:40.835879 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10fadfce44d5fa27dba6fd086d51c1f1-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"10fadfce44d5fa27dba6fd086d51c1f1\") " pod="kube-system/kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.837425 kubelet[1555]: I0208 23:27:40.837152 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.837425 kubelet[1555]: I0208 23:27:40.837407 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.837681 kubelet[1555]: I0208 23:27:40.837536 1555 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.838457 kubelet[1555]: I0208 23:27:40.838396 1555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.839412 kubelet[1555]: E0208 23:27:40.839379 1555 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.40:6443/api/v1/nodes\": dial tcp 172.24.4.40:6443: connect: connection refused" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:40.988462 env[1068]: time="2024-02-08T23:27:40.987051207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal,Uid:fe067416ae10d42c6f1b8f1418192472,Namespace:kube-system,Attempt:0,}"
Feb 8 23:27:40.995014 env[1068]: time="2024-02-08T23:27:40.994730460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal,Uid:10fadfce44d5fa27dba6fd086d51c1f1,Namespace:kube-system,Attempt:0,}"
Feb 8 23:27:41.002798 env[1068]: time="2024-02-08T23:27:41.002644486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal,Uid:34bafd4af05481d58fc0dd86c41e7f47,Namespace:kube-system,Attempt:0,}"
Feb 8 23:27:41.133392 kubelet[1555]: E0208 23:27:41.133186 1555 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-bfb6381473.novalocal?timeout=10s\": dial tcp 172.24.4.40:6443: connect: connection refused" interval="800ms"
Feb 8 23:27:41.243574 kubelet[1555]: I0208 23:27:41.243094 1555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:41.244855 kubelet[1555]: E0208 23:27:41.244649 1555 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.40:6443/api/v1/nodes\": dial tcp 172.24.4.40:6443: connect: connection refused" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:41.366035 kubelet[1555]: W0208 23:27:41.365885 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.366201 kubelet[1555]: E0208 23:27:41.366080 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.475709 kubelet[1555]: W0208 23:27:41.475518 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.475877 kubelet[1555]: E0208 23:27:41.475721 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.505543 kubelet[1555]: W0208 23:27:41.505134 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-bfb6381473.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.505543 kubelet[1555]: E0208 23:27:41.505206 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-4-bfb6381473.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:41.722937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455887193.mount: Deactivated successfully.
Feb 8 23:27:41.736064 env[1068]: time="2024-02-08T23:27:41.735963834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.743534 env[1068]: time="2024-02-08T23:27:41.743460733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.746762 env[1068]: time="2024-02-08T23:27:41.746673554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.748883 env[1068]: time="2024-02-08T23:27:41.748803089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.754207 env[1068]: time="2024-02-08T23:27:41.754132111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.756743 env[1068]: time="2024-02-08T23:27:41.756384196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.761427 env[1068]: time="2024-02-08T23:27:41.761366876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.764509 env[1068]: time="2024-02-08T23:27:41.764450145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.768865 env[1068]: time="2024-02-08T23:27:41.768809313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.775089 env[1068]: time="2024-02-08T23:27:41.775046242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.776900 env[1068]: time="2024-02-08T23:27:41.776854522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.778126 env[1068]: time="2024-02-08T23:27:41.778090086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:27:41.849770 env[1068]: time="2024-02-08T23:27:41.849666211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:41.849937 env[1068]: time="2024-02-08T23:27:41.849759145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:41.849937 env[1068]: time="2024-02-08T23:27:41.849791145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:41.850213 env[1068]: time="2024-02-08T23:27:41.850155380Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be92a9d971c9b3009386ffd0bf8615dcf74ce9d2bc67947bf2c04b449ad4148c pid=1603 runtime=io.containerd.runc.v2
Feb 8 23:27:41.865285 env[1068]: time="2024-02-08T23:27:41.864982114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:41.865285 env[1068]: time="2024-02-08T23:27:41.865083354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:41.865285 env[1068]: time="2024-02-08T23:27:41.865108541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:41.865679 env[1068]: time="2024-02-08T23:27:41.865625463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58cc0a4e4756cf0379dc4b01cc31b823dd2f0d246e38b825c61bfe2d40dd0ead pid=1602 runtime=io.containerd.runc.v2
Feb 8 23:27:41.868549 env[1068]: time="2024-02-08T23:27:41.868287419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:27:41.868549 env[1068]: time="2024-02-08T23:27:41.868335770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:27:41.868549 env[1068]: time="2024-02-08T23:27:41.868348925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:27:41.868549 env[1068]: time="2024-02-08T23:27:41.868470473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a17d037097064a29fea925649b343682c07c92370b80e6bdf8ee9e24818a8d2 pid=1630 runtime=io.containerd.runc.v2
Feb 8 23:27:41.874304 systemd[1]: Started cri-containerd-be92a9d971c9b3009386ffd0bf8615dcf74ce9d2bc67947bf2c04b449ad4148c.scope.
Feb 8 23:27:41.916291 systemd[1]: Started cri-containerd-58cc0a4e4756cf0379dc4b01cc31b823dd2f0d246e38b825c61bfe2d40dd0ead.scope.
Feb 8 23:27:41.922202 systemd[1]: Started cri-containerd-3a17d037097064a29fea925649b343682c07c92370b80e6bdf8ee9e24818a8d2.scope.
Feb 8 23:27:41.934353 kubelet[1555]: E0208 23:27:41.934098 1555 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-4-bfb6381473.novalocal?timeout=10s\": dial tcp 172.24.4.40:6443: connect: connection refused" interval="1.6s"
Feb 8 23:27:41.966470 env[1068]: time="2024-02-08T23:27:41.966358680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal,Uid:34bafd4af05481d58fc0dd86c41e7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"be92a9d971c9b3009386ffd0bf8615dcf74ce9d2bc67947bf2c04b449ad4148c\""
Feb 8 23:27:41.974695 env[1068]: time="2024-02-08T23:27:41.974610048Z" level=info msg="CreateContainer within sandbox \"be92a9d971c9b3009386ffd0bf8615dcf74ce9d2bc67947bf2c04b449ad4148c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 8 23:27:41.997979 env[1068]: time="2024-02-08T23:27:41.997938359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal,Uid:10fadfce44d5fa27dba6fd086d51c1f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"58cc0a4e4756cf0379dc4b01cc31b823dd2f0d246e38b825c61bfe2d40dd0ead\""
Feb 8 23:27:42.003507 env[1068]: time="2024-02-08T23:27:42.003408936Z" level=info msg="CreateContainer within sandbox \"58cc0a4e4756cf0379dc4b01cc31b823dd2f0d246e38b825c61bfe2d40dd0ead\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 8 23:27:42.005464 env[1068]: time="2024-02-08T23:27:42.005422502Z" level=info msg="CreateContainer within sandbox \"be92a9d971c9b3009386ffd0bf8615dcf74ce9d2bc67947bf2c04b449ad4148c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e746612cf4e5aaafaa944e7afd5786b54ddad446d9eff803e388961b3cd0d1bb\""
Feb 8 23:27:42.010310 env[1068]: time="2024-02-08T23:27:42.008710073Z" level=info msg="StartContainer for \"e746612cf4e5aaafaa944e7afd5786b54ddad446d9eff803e388961b3cd0d1bb\""
Feb 8 23:27:42.024875 env[1068]: time="2024-02-08T23:27:42.024830444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal,Uid:fe067416ae10d42c6f1b8f1418192472,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a17d037097064a29fea925649b343682c07c92370b80e6bdf8ee9e24818a8d2\""
Feb 8 23:27:42.028222 env[1068]: time="2024-02-08T23:27:42.028175343Z" level=info msg="CreateContainer within sandbox \"3a17d037097064a29fea925649b343682c07c92370b80e6bdf8ee9e24818a8d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 8 23:27:42.037851 systemd[1]: Started cri-containerd-e746612cf4e5aaafaa944e7afd5786b54ddad446d9eff803e388961b3cd0d1bb.scope.
Feb 8 23:27:42.049782 kubelet[1555]: I0208 23:27:42.049732 1555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:42.050372 kubelet[1555]: E0208 23:27:42.050340 1555 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.40:6443/api/v1/nodes\": dial tcp 172.24.4.40:6443: connect: connection refused" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:42.055929 env[1068]: time="2024-02-08T23:27:42.055862263Z" level=info msg="CreateContainer within sandbox \"58cc0a4e4756cf0379dc4b01cc31b823dd2f0d246e38b825c61bfe2d40dd0ead\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4e0d0dd5124e7025658b27f141ca6594e7ed2d98460f2c96c25ee239cf7d46b\""
Feb 8 23:27:42.056664 env[1068]: time="2024-02-08T23:27:42.056633111Z" level=info msg="StartContainer for \"a4e0d0dd5124e7025658b27f141ca6594e7ed2d98460f2c96c25ee239cf7d46b\""
Feb 8 23:27:42.073713 env[1068]: time="2024-02-08T23:27:42.073474308Z" level=info msg="CreateContainer within sandbox \"3a17d037097064a29fea925649b343682c07c92370b80e6bdf8ee9e24818a8d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9f9305058474a2a5c87f9b599013d49d3a2abfa04cdc3de922df2e002673856c\""
Feb 8 23:27:42.075452 env[1068]: time="2024-02-08T23:27:42.075419695Z" level=info msg="StartContainer for \"9f9305058474a2a5c87f9b599013d49d3a2abfa04cdc3de922df2e002673856c\""
Feb 8 23:27:42.081165 kubelet[1555]: W0208 23:27:42.081020 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:42.081165 kubelet[1555]: E0208 23:27:42.081120 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:42.084528 systemd[1]: Started cri-containerd-a4e0d0dd5124e7025658b27f141ca6594e7ed2d98460f2c96c25ee239cf7d46b.scope.
Feb 8 23:27:42.131642 systemd[1]: Started cri-containerd-9f9305058474a2a5c87f9b599013d49d3a2abfa04cdc3de922df2e002673856c.scope.
Feb 8 23:27:42.145988 env[1068]: time="2024-02-08T23:27:42.145877454Z" level=info msg="StartContainer for \"e746612cf4e5aaafaa944e7afd5786b54ddad446d9eff803e388961b3cd0d1bb\" returns successfully"
Feb 8 23:27:42.216997 env[1068]: time="2024-02-08T23:27:42.216921927Z" level=info msg="StartContainer for \"a4e0d0dd5124e7025658b27f141ca6594e7ed2d98460f2c96c25ee239cf7d46b\" returns successfully"
Feb 8 23:27:42.246377 env[1068]: time="2024-02-08T23:27:42.246326275Z" level=info msg="StartContainer for \"9f9305058474a2a5c87f9b599013d49d3a2abfa04cdc3de922df2e002673856c\" returns successfully"
Feb 8 23:27:42.578427 kubelet[1555]: E0208 23:27:42.578395 1555 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:43.225108 kubelet[1555]: W0208 23:27:43.225059 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:43.225407 kubelet[1555]: E0208 23:27:43.225396 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:43.441151 kubelet[1555]: W0208 23:27:43.441103 1555 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:43.441418 kubelet[1555]: E0208 23:27:43.441406 1555 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.40:6443: connect: connection refused
Feb 8 23:27:43.654577 kubelet[1555]: I0208 23:27:43.654494 1555 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:45.287586 kubelet[1555]: E0208 23:27:45.287510 1555 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-4-bfb6381473.novalocal\" not found" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:45.328363 kubelet[1555]: I0208 23:27:45.328322 1555 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:45.400214 update_engine[1053]: I0208 23:27:45.400087 1053 update_attempter.cc:509] Updating boot flags...
Feb 8 23:27:45.518191 kubelet[1555]: I0208 23:27:45.517933 1555 apiserver.go:52] "Watching apiserver"
Feb 8 23:27:45.532579 kubelet[1555]: I0208 23:27:45.532522 1555 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 8 23:27:45.655626 kubelet[1555]: E0208 23:27:45.655552 1555 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:45.941841 kubelet[1555]: E0208 23:27:45.941716 1555 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.189500 kubelet[1555]: W0208 23:27:48.189431 1555 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 8 23:27:48.198171 systemd[1]: Reloading.
Feb 8 23:27:48.281207 /usr/lib/systemd/system-generators/torcx-generator[1860]: time="2024-02-08T23:27:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:27:48.281243 /usr/lib/systemd/system-generators/torcx-generator[1860]: time="2024-02-08T23:27:48Z" level=info msg="torcx already run"
Feb 8 23:27:48.386985 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:27:48.387210 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:27:48.412607 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:27:48.543715 kubelet[1555]: I0208 23:27:48.543554 1555 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 8 23:27:48.544115 systemd[1]: Stopping kubelet.service...
Feb 8 23:27:48.564824 systemd[1]: kubelet.service: Deactivated successfully.
Feb 8 23:27:48.565185 systemd[1]: Stopped kubelet.service.
Feb 8 23:27:48.568937 systemd[1]: Started kubelet.service.
Feb 8 23:27:48.698405 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:27:48.698405 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:27:48.698405 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:27:48.698405 kubelet[1908]: I0208 23:27:48.697889 1908 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 8 23:27:48.701957 sudo[1919]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 8 23:27:48.702245 sudo[1919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 8 23:27:48.705937 kubelet[1908]: I0208 23:27:48.705908 1908 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 8 23:27:48.706064 kubelet[1908]: I0208 23:27:48.706049 1908 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 8 23:27:48.706493 kubelet[1908]: I0208 23:27:48.706475 1908 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 8 23:27:48.708588 kubelet[1908]: I0208 23:27:48.708571 1908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 8 23:27:48.710928 kubelet[1908]: I0208 23:27:48.710911 1908 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 8 23:27:48.717138 kubelet[1908]: I0208 23:27:48.717117 1908 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 8 23:27:48.717528 kubelet[1908]: I0208 23:27:48.717490 1908 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 8 23:27:48.717780 kubelet[1908]: I0208 23:27:48.717764 1908 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 8 23:27:48.717929 kubelet[1908]: I0208 23:27:48.717918 1908 topology_manager.go:138] "Creating topology manager with none policy"
Feb 8 23:27:48.718003 kubelet[1908]: I0208 23:27:48.717992 1908 container_manager_linux.go:301] "Creating device plugin manager"
Feb 8 23:27:48.718092 kubelet[1908]: I0208 23:27:48.718082 1908 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:27:48.718241 kubelet[1908]: I0208 23:27:48.718229 1908 kubelet.go:393] "Attempting to sync node with API server"
Feb 8 23:27:48.718351 kubelet[1908]: I0208 23:27:48.718339 1908 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 8 23:27:48.718433 kubelet[1908]: I0208 23:27:48.718422 1908 kubelet.go:309] "Adding apiserver pod source"
Feb 8 23:27:48.718519 kubelet[1908]: I0208 23:27:48.718508 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 8 23:27:48.731936 kubelet[1908]: I0208 23:27:48.731914 1908 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 8 23:27:48.735353 kubelet[1908]: I0208 23:27:48.735333 1908 server.go:1232] "Started kubelet"
Feb 8 23:27:48.740010 kubelet[1908]: I0208 23:27:48.739956 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 8 23:27:48.742840 kubelet[1908]: I0208 23:27:48.741936 1908 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 8 23:27:48.744621 kubelet[1908]: I0208 23:27:48.744604 1908 server.go:462] "Adding debug handlers to kubelet server"
Feb 8 23:27:48.749015 kubelet[1908]: I0208 23:27:48.748978 1908 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 8 23:27:48.749371 kubelet[1908]: I0208 23:27:48.749352 1908 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 8 23:27:48.775779 kubelet[1908]: I0208 23:27:48.755521 1908 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 8 23:27:48.777971 kubelet[1908]: E0208 23:27:48.777926 1908 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 8 23:27:48.778668 kubelet[1908]: I0208 23:27:48.778648 1908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 8 23:27:48.779320 kubelet[1908]: E0208 23:27:48.779295 1908 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 8 23:27:48.781137 kubelet[1908]: I0208 23:27:48.755541 1908 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 8 23:27:48.781137 kubelet[1908]: I0208 23:27:48.780030 1908 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 8 23:27:48.782813 kubelet[1908]: I0208 23:27:48.782793 1908 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 8 23:27:48.782924 kubelet[1908]: I0208 23:27:48.782913 1908 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 8 23:27:48.783012 kubelet[1908]: I0208 23:27:48.783001 1908 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 8 23:27:48.783155 kubelet[1908]: E0208 23:27:48.783139 1908 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 8 23:27:48.854204 kubelet[1908]: I0208 23:27:48.852590 1908 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 8 23:27:48.854516 kubelet[1908]: I0208 23:27:48.854449 1908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 8 23:27:48.854665 kubelet[1908]: I0208 23:27:48.854652 1908 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:27:48.854898 kubelet[1908]: I0208 23:27:48.854886 1908 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 8 23:27:48.854993 kubelet[1908]: I0208 23:27:48.854982 1908 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 8 23:27:48.855054 kubelet[1908]: I0208 23:27:48.855044 1908 policy_none.go:49] "None policy: Start"
Feb 8 23:27:48.857934 kubelet[1908]: I0208 23:27:48.857915 1908 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.862068 kubelet[1908]: I0208 23:27:48.862045 1908 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 8 23:27:48.863659 kubelet[1908]: I0208 23:27:48.863639 1908 state_mem.go:35] "Initializing new in-memory state store"
Feb 8 23:27:48.863963 kubelet[1908]: I0208 23:27:48.863950 1908 state_mem.go:75] "Updated machine memory state"
Feb 8 23:27:48.870240 kubelet[1908]: I0208 23:27:48.868488 1908 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.870240 kubelet[1908]: I0208 23:27:48.868585 1908 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.880045 kubelet[1908]: I0208 23:27:48.876105 1908 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 8 23:27:48.887749 kubelet[1908]: I0208 23:27:48.887625 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 8 23:27:48.889545 kubelet[1908]: I0208 23:27:48.887954 1908 topology_manager.go:215] "Topology Admit Handler" podUID="fe067416ae10d42c6f1b8f1418192472" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.892665 kubelet[1908]: I0208 23:27:48.892642 1908 topology_manager.go:215] "Topology Admit Handler" podUID="34bafd4af05481d58fc0dd86c41e7f47" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.892917 kubelet[1908]: I0208 23:27:48.892885 1908 topology_manager.go:215] "Topology Admit Handler" podUID="10fadfce44d5fa27dba6fd086d51c1f1" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.905621 kubelet[1908]: W0208 23:27:48.905588 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 8 23:27:48.905761 kubelet[1908]: E0208 23:27:48.905687 1908 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.905854 kubelet[1908]: W0208 23:27:48.905834 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 8 23:27:48.914629 kubelet[1908]: W0208 23:27:48.914551 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 8 23:27:48.981467 kubelet[1908]: I0208 23:27:48.981431 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.981726 kubelet[1908]: I0208 23:27:48.981707 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.981876 kubelet[1908]: I0208 23:27:48.981858 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982026 kubelet[1908]: I0208 23:27:48.982008 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/10fadfce44d5fa27dba6fd086d51c1f1-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"10fadfce44d5fa27dba6fd086d51c1f1\") " pod="kube-system/kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982206 kubelet[1908]: I0208 23:27:48.982194 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982355 kubelet[1908]: I0208 23:27:48.982343 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982463 kubelet[1908]: I0208 23:27:48.982452 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982568 kubelet[1908]: I0208 23:27:48.982557 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34bafd4af05481d58fc0dd86c41e7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"34bafd4af05481d58fc0dd86c41e7f47\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:48.982682 kubelet[1908]: I0208 23:27:48.982671 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe067416ae10d42c6f1b8f1418192472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal\" (UID: \"fe067416ae10d42c6f1b8f1418192472\") " pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal"
Feb 8 23:27:49.437530 sudo[1919]: pam_unix(sudo:session): session closed for user root
Feb 8 23:27:49.722170 kubelet[1908]: I0208 23:27:49.721967 1908 apiserver.go:52] "Watching apiserver"
Feb 8 23:27:49.780350 kubelet[1908]: I0208 23:27:49.780284 1908 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 8 23:27:49.914098 kubelet[1908]: I0208 23:27:49.914055 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-4-bfb6381473.novalocal" podStartSLOduration=1.914007389 podCreationTimestamp="2024-02-08 23:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:49.908783996 +0000 UTC m=+1.327988296" watchObservedRunningTime="2024-02-08 23:27:49.914007389 +0000 UTC m=+1.333211688"
Feb 8 23:27:49.914449 kubelet[1908]: I0208 23:27:49.914436 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-4-bfb6381473.novalocal" podStartSLOduration=1.914411358 podCreationTimestamp="2024-02-08 23:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:49.895665924 +0000 UTC m=+1.314870213" watchObservedRunningTime="2024-02-08 23:27:49.914411358 +0000 UTC m=+1.333615647"
Feb 8 23:27:49.944721 kubelet[1908]: I0208 23:27:49.944664 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-4-bfb6381473.novalocal" podStartSLOduration=1.944621509 podCreationTimestamp="2024-02-08 23:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:27:49.926904797 +0000 UTC m=+1.346109096" watchObservedRunningTime="2024-02-08 23:27:49.944621509 +0000 UTC m=+1.363825808"
Feb 8 23:27:51.519633 sudo[1157]: pam_unix(sudo:session): session closed for user root
Feb 8 23:27:51.799600 sshd[1153]: pam_unix(sshd:session): session closed for user core
Feb 8 23:27:51.805401 systemd[1]: sshd@4-172.24.4.40:22-172.24.4.1:38920.service: Deactivated successfully.
Feb 8 23:27:51.807096 systemd[1]: session-5.scope: Deactivated successfully.
Feb 8 23:27:51.808302 systemd[1]: session-5.scope: Consumed 6.148s CPU time.
Feb 8 23:27:51.810649 systemd-logind[1052]: Session 5 logged out. Waiting for processes to exit.
Feb 8 23:27:51.813152 systemd-logind[1052]: Removed session 5.
Feb 8 23:28:02.833732 kubelet[1908]: I0208 23:28:02.833693 1908 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 8 23:28:02.834744 env[1068]: time="2024-02-08T23:28:02.834667582Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 8 23:28:02.835027 kubelet[1908]: I0208 23:28:02.834974 1908 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:28:03.621017 kubelet[1908]: I0208 23:28:03.620963 1908 topology_manager.go:215] "Topology Admit Handler" podUID="be83102d-c996-4e8d-8788-b4210a2b0570" podNamespace="kube-system" podName="kube-proxy-9b9n6" Feb 8 23:28:03.637647 systemd[1]: Created slice kubepods-besteffort-podbe83102d_c996_4e8d_8788_b4210a2b0570.slice. Feb 8 23:28:03.645119 kubelet[1908]: I0208 23:28:03.645055 1908 topology_manager.go:215] "Topology Admit Handler" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" podNamespace="kube-system" podName="cilium-w6lft" Feb 8 23:28:03.653864 systemd[1]: Created slice kubepods-burstable-pod34dd5b45_0ef4_46da_8faf_4118a561c9c4.slice. Feb 8 23:28:03.781377 kubelet[1908]: I0208 23:28:03.781163 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-lib-modules\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.781667 kubelet[1908]: I0208 23:28:03.781407 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-config-path\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.781667 kubelet[1908]: I0208 23:28:03.781484 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be83102d-c996-4e8d-8788-b4210a2b0570-xtables-lock\") pod \"kube-proxy-9b9n6\" (UID: \"be83102d-c996-4e8d-8788-b4210a2b0570\") " pod="kube-system/kube-proxy-9b9n6" Feb 8 23:28:03.781667 kubelet[1908]: I0208 23:28:03.781574 1908 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be83102d-c996-4e8d-8788-b4210a2b0570-lib-modules\") pod \"kube-proxy-9b9n6\" (UID: \"be83102d-c996-4e8d-8788-b4210a2b0570\") " pod="kube-system/kube-proxy-9b9n6" Feb 8 23:28:03.781925 kubelet[1908]: I0208 23:28:03.781663 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cni-path\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.781925 kubelet[1908]: I0208 23:28:03.781760 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34dd5b45-0ef4-46da-8faf-4118a561c9c4-clustermesh-secrets\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.781925 kubelet[1908]: I0208 23:28:03.781833 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-net\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.781925 kubelet[1908]: I0208 23:28:03.781903 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-run\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782207 kubelet[1908]: I0208 23:28:03.781958 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hubble-tls\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782207 kubelet[1908]: I0208 23:28:03.782013 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be83102d-c996-4e8d-8788-b4210a2b0570-kube-proxy\") pod \"kube-proxy-9b9n6\" (UID: \"be83102d-c996-4e8d-8788-b4210a2b0570\") " pod="kube-system/kube-proxy-9b9n6" Feb 8 23:28:03.782207 kubelet[1908]: I0208 23:28:03.782064 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hostproc\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782207 kubelet[1908]: I0208 23:28:03.782158 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-cgroup\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782564 kubelet[1908]: I0208 23:28:03.782218 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-kernel\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782564 kubelet[1908]: I0208 23:28:03.782311 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5cv\" (UniqueName: \"kubernetes.io/projected/be83102d-c996-4e8d-8788-b4210a2b0570-kube-api-access-tw5cv\") pod \"kube-proxy-9b9n6\" (UID: 
\"be83102d-c996-4e8d-8788-b4210a2b0570\") " pod="kube-system/kube-proxy-9b9n6" Feb 8 23:28:03.782564 kubelet[1908]: I0208 23:28:03.782379 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-bpf-maps\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782564 kubelet[1908]: I0208 23:28:03.782432 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-etc-cni-netd\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782564 kubelet[1908]: I0208 23:28:03.782486 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-xtables-lock\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.782947 kubelet[1908]: I0208 23:28:03.782584 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9sbn\" (UniqueName: \"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-kube-api-access-d9sbn\") pod \"cilium-w6lft\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " pod="kube-system/cilium-w6lft" Feb 8 23:28:03.829400 kubelet[1908]: I0208 23:28:03.829335 1908 topology_manager.go:215] "Topology Admit Handler" podUID="a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-bp5bw" Feb 8 23:28:03.837606 systemd[1]: Created slice kubepods-besteffort-poda8ed573f_e1dc_4b88_a3ea_eae00bf7edcc.slice. 
Feb 8 23:28:03.883574 kubelet[1908]: I0208 23:28:03.883431 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-bp5bw\" (UID: \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\") " pod="kube-system/cilium-operator-6bc8ccdb58-bp5bw" Feb 8 23:28:03.884591 kubelet[1908]: I0208 23:28:03.884577 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g62sp\" (UniqueName: \"kubernetes.io/projected/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-kube-api-access-g62sp\") pod \"cilium-operator-6bc8ccdb58-bp5bw\" (UID: \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\") " pod="kube-system/cilium-operator-6bc8ccdb58-bp5bw" Feb 8 23:28:03.951032 env[1068]: time="2024-02-08T23:28:03.950577206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b9n6,Uid:be83102d-c996-4e8d-8788-b4210a2b0570,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:03.963156 env[1068]: time="2024-02-08T23:28:03.962717400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6lft,Uid:34dd5b45-0ef4-46da-8faf-4118a561c9c4,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:04.034167 env[1068]: time="2024-02-08T23:28:04.033694992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:04.034586 env[1068]: time="2024-02-08T23:28:04.034501275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:04.034835 env[1068]: time="2024-02-08T23:28:04.034763789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:04.036085 env[1068]: time="2024-02-08T23:28:04.036048028Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd48d01d1f46b6e61d2e6c02df6cad7c153cd9874f7f306df93d0256df2a81be pid=1992 runtime=io.containerd.runc.v2 Feb 8 23:28:04.053200 systemd[1]: Started cri-containerd-fd48d01d1f46b6e61d2e6c02df6cad7c153cd9874f7f306df93d0256df2a81be.scope. Feb 8 23:28:04.064917 env[1068]: time="2024-02-08T23:28:04.064843687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:04.065114 env[1068]: time="2024-02-08T23:28:04.064896306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:04.065114 env[1068]: time="2024-02-08T23:28:04.064911424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:04.065382 env[1068]: time="2024-02-08T23:28:04.065327065Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2 pid=2021 runtime=io.containerd.runc.v2 Feb 8 23:28:04.093988 systemd[1]: Started cri-containerd-a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2.scope. 
Feb 8 23:28:04.116818 env[1068]: time="2024-02-08T23:28:04.116709411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9b9n6,Uid:be83102d-c996-4e8d-8788-b4210a2b0570,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd48d01d1f46b6e61d2e6c02df6cad7c153cd9874f7f306df93d0256df2a81be\"" Feb 8 23:28:04.127660 env[1068]: time="2024-02-08T23:28:04.127599429Z" level=info msg="CreateContainer within sandbox \"fd48d01d1f46b6e61d2e6c02df6cad7c153cd9874f7f306df93d0256df2a81be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:28:04.138418 env[1068]: time="2024-02-08T23:28:04.138302244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w6lft,Uid:34dd5b45-0ef4-46da-8faf-4118a561c9c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\"" Feb 8 23:28:04.143579 env[1068]: time="2024-02-08T23:28:04.143518032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bp5bw,Uid:a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:04.145564 env[1068]: time="2024-02-08T23:28:04.144044380Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:28:04.192462 env[1068]: time="2024-02-08T23:28:04.192425304Z" level=info msg="CreateContainer within sandbox \"fd48d01d1f46b6e61d2e6c02df6cad7c153cd9874f7f306df93d0256df2a81be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c913b807505c8996fa127068d03d63fb69d1ccab12e8438ecd5e0de9c4a367f\"" Feb 8 23:28:04.196450 env[1068]: time="2024-02-08T23:28:04.196214084Z" level=info msg="StartContainer for \"5c913b807505c8996fa127068d03d63fb69d1ccab12e8438ecd5e0de9c4a367f\"" Feb 8 23:28:04.207684 env[1068]: time="2024-02-08T23:28:04.207567791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:04.207978 env[1068]: time="2024-02-08T23:28:04.207688227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:04.207978 env[1068]: time="2024-02-08T23:28:04.207728262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:04.208300 env[1068]: time="2024-02-08T23:28:04.208227469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66 pid=2076 runtime=io.containerd.runc.v2 Feb 8 23:28:04.228221 systemd[1]: Started cri-containerd-d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66.scope. Feb 8 23:28:04.240796 systemd[1]: Started cri-containerd-5c913b807505c8996fa127068d03d63fb69d1ccab12e8438ecd5e0de9c4a367f.scope. 
Feb 8 23:28:04.318455 env[1068]: time="2024-02-08T23:28:04.318391258Z" level=info msg="StartContainer for \"5c913b807505c8996fa127068d03d63fb69d1ccab12e8438ecd5e0de9c4a367f\" returns successfully" Feb 8 23:28:04.320078 env[1068]: time="2024-02-08T23:28:04.320042186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bp5bw,Uid:a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\"" Feb 8 23:28:08.815323 kubelet[1908]: I0208 23:28:08.815236 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9b9n6" podStartSLOduration=5.815103985 podCreationTimestamp="2024-02-08 23:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:04.956503703 +0000 UTC m=+16.375708003" watchObservedRunningTime="2024-02-08 23:28:08.815103985 +0000 UTC m=+20.234308324" Feb 8 23:28:10.845599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3205412204.mount: Deactivated successfully. 
Feb 8 23:28:15.489748 env[1068]: time="2024-02-08T23:28:15.489647008Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:15.503585 env[1068]: time="2024-02-08T23:28:15.503524934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:15.506770 env[1068]: time="2024-02-08T23:28:15.506717914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:15.508668 env[1068]: time="2024-02-08T23:28:15.508614713Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:28:15.513045 env[1068]: time="2024-02-08T23:28:15.511867065Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:28:15.515974 env[1068]: time="2024-02-08T23:28:15.515916261Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:28:15.540046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108222142.mount: Deactivated successfully. Feb 8 23:28:15.553203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567886412.mount: Deactivated successfully. 
Feb 8 23:28:15.560938 env[1068]: time="2024-02-08T23:28:15.560868685Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\"" Feb 8 23:28:15.565436 env[1068]: time="2024-02-08T23:28:15.564930786Z" level=info msg="StartContainer for \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\"" Feb 8 23:28:15.613026 systemd[1]: Started cri-containerd-f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032.scope. Feb 8 23:28:15.649638 env[1068]: time="2024-02-08T23:28:15.649529769Z" level=info msg="StartContainer for \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\" returns successfully" Feb 8 23:28:15.657615 systemd[1]: cri-containerd-f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032.scope: Deactivated successfully. Feb 8 23:28:16.011691 env[1068]: time="2024-02-08T23:28:16.010873523Z" level=info msg="shim disconnected" id=f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032 Feb 8 23:28:16.011691 env[1068]: time="2024-02-08T23:28:16.010985783Z" level=warning msg="cleaning up after shim disconnected" id=f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032 namespace=k8s.io Feb 8 23:28:16.011691 env[1068]: time="2024-02-08T23:28:16.011012223Z" level=info msg="cleaning up dead shim" Feb 8 23:28:16.040436 env[1068]: time="2024-02-08T23:28:16.040328471Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2314 runtime=io.containerd.runc.v2\n" Feb 8 23:28:16.537689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032-rootfs.mount: Deactivated successfully. 
Feb 8 23:28:16.895972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518349511.mount: Deactivated successfully. Feb 8 23:28:16.999359 env[1068]: time="2024-02-08T23:28:16.999227416Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:28:17.333747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392326065.mount: Deactivated successfully. Feb 8 23:28:17.358337 env[1068]: time="2024-02-08T23:28:17.358214655Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\"" Feb 8 23:28:17.361164 env[1068]: time="2024-02-08T23:28:17.360007439Z" level=info msg="StartContainer for \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\"" Feb 8 23:28:17.398834 systemd[1]: Started cri-containerd-873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13.scope. Feb 8 23:28:17.452539 env[1068]: time="2024-02-08T23:28:17.452482792Z" level=info msg="StartContainer for \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\" returns successfully" Feb 8 23:28:17.457088 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:28:17.458417 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:28:17.458650 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:28:17.461174 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:28:17.465305 systemd[1]: cri-containerd-873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13.scope: Deactivated successfully. 
Feb 8 23:28:17.607005 env[1068]: time="2024-02-08T23:28:17.606886323Z" level=info msg="shim disconnected" id=873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13 Feb 8 23:28:17.607385 env[1068]: time="2024-02-08T23:28:17.607366193Z" level=warning msg="cleaning up after shim disconnected" id=873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13 namespace=k8s.io Feb 8 23:28:17.607484 env[1068]: time="2024-02-08T23:28:17.607468344Z" level=info msg="cleaning up dead shim" Feb 8 23:28:17.629962 env[1068]: time="2024-02-08T23:28:17.629921506Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2378 runtime=io.containerd.runc.v2\n" Feb 8 23:28:17.732040 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:28:17.997733 env[1068]: time="2024-02-08T23:28:17.997657829Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:28:19.107854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1339442160.mount: Deactivated successfully. Feb 8 23:28:19.125646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079251350.mount: Deactivated successfully. Feb 8 23:28:19.221368 env[1068]: time="2024-02-08T23:28:19.221216647Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\"" Feb 8 23:28:19.223422 env[1068]: time="2024-02-08T23:28:19.223360008Z" level=info msg="StartContainer for \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\"" Feb 8 23:28:19.284189 systemd[1]: Started cri-containerd-24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a.scope. 
Feb 8 23:28:19.396881 systemd[1]: cri-containerd-24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a.scope: Deactivated successfully. Feb 8 23:28:19.400202 env[1068]: time="2024-02-08T23:28:19.400116914Z" level=info msg="StartContainer for \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\" returns successfully" Feb 8 23:28:19.413408 env[1068]: time="2024-02-08T23:28:19.413366178Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:19.416960 env[1068]: time="2024-02-08T23:28:19.416932408Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:19.419191 env[1068]: time="2024-02-08T23:28:19.419145109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:28:19.419890 env[1068]: time="2024-02-08T23:28:19.419852155Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:28:19.424431 env[1068]: time="2024-02-08T23:28:19.424395839Z" level=info msg="CreateContainer within sandbox \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:28:19.568323 env[1068]: time="2024-02-08T23:28:19.568169259Z" level=info msg="CreateContainer within sandbox \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\" 
for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\"" Feb 8 23:28:19.570182 env[1068]: time="2024-02-08T23:28:19.570082638Z" level=info msg="StartContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\"" Feb 8 23:28:19.584804 env[1068]: time="2024-02-08T23:28:19.584720018Z" level=info msg="shim disconnected" id=24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a Feb 8 23:28:19.585369 env[1068]: time="2024-02-08T23:28:19.585315735Z" level=warning msg="cleaning up after shim disconnected" id=24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a namespace=k8s.io Feb 8 23:28:19.585635 env[1068]: time="2024-02-08T23:28:19.585565714Z" level=info msg="cleaning up dead shim" Feb 8 23:28:19.606292 env[1068]: time="2024-02-08T23:28:19.606149968Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\n" Feb 8 23:28:19.618504 systemd[1]: Started cri-containerd-6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf.scope. 
Feb 8 23:28:19.691639 env[1068]: time="2024-02-08T23:28:19.691471025Z" level=info msg="StartContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" returns successfully" Feb 8 23:28:19.995423 env[1068]: time="2024-02-08T23:28:19.995124530Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:28:20.102899 env[1068]: time="2024-02-08T23:28:20.102825768Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\"" Feb 8 23:28:20.105299 env[1068]: time="2024-02-08T23:28:20.104110718Z" level=info msg="StartContainer for \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\"" Feb 8 23:28:20.108811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a-rootfs.mount: Deactivated successfully. Feb 8 23:28:20.146434 systemd[1]: Started cri-containerd-1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd.scope. Feb 8 23:28:20.157028 systemd[1]: run-containerd-runc-k8s.io-1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd-runc.7h9XSv.mount: Deactivated successfully. 
Feb 8 23:28:20.159484 kubelet[1908]: I0208 23:28:20.159358 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-bp5bw" podStartSLOduration=2.06064573 podCreationTimestamp="2024-02-08 23:28:03 +0000 UTC" firstStartedPulling="2024-02-08 23:28:04.321586855 +0000 UTC m=+15.740791145" lastFinishedPulling="2024-02-08 23:28:19.420184228 +0000 UTC m=+30.839388517" observedRunningTime="2024-02-08 23:28:20.158856617 +0000 UTC m=+31.578060906" watchObservedRunningTime="2024-02-08 23:28:20.159243102 +0000 UTC m=+31.578447391" Feb 8 23:28:20.236762 systemd[1]: cri-containerd-1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd.scope: Deactivated successfully. Feb 8 23:28:20.239214 env[1068]: time="2024-02-08T23:28:20.238854616Z" level=info msg="StartContainer for \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\" returns successfully" Feb 8 23:28:20.285673 env[1068]: time="2024-02-08T23:28:20.285517762Z" level=info msg="shim disconnected" id=1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd Feb 8 23:28:20.285941 env[1068]: time="2024-02-08T23:28:20.285920117Z" level=warning msg="cleaning up after shim disconnected" id=1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd namespace=k8s.io Feb 8 23:28:20.286037 env[1068]: time="2024-02-08T23:28:20.286020785Z" level=info msg="cleaning up dead shim" Feb 8 23:28:20.297765 env[1068]: time="2024-02-08T23:28:20.297724411Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:28:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\n" Feb 8 23:28:20.998367 env[1068]: time="2024-02-08T23:28:20.998156769Z" level=info msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:28:21.041003 env[1068]: time="2024-02-08T23:28:21.040920408Z" level=info 
msg="CreateContainer within sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\"" Feb 8 23:28:21.042125 env[1068]: time="2024-02-08T23:28:21.042096104Z" level=info msg="StartContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\"" Feb 8 23:28:21.063507 systemd[1]: Started cri-containerd-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22.scope. Feb 8 23:28:21.100699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd-rootfs.mount: Deactivated successfully. Feb 8 23:28:21.139721 env[1068]: time="2024-02-08T23:28:21.139655483Z" level=info msg="StartContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" returns successfully" Feb 8 23:28:21.158644 systemd[1]: run-containerd-runc-k8s.io-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22-runc.M6XVrY.mount: Deactivated successfully. Feb 8 23:28:21.314122 kubelet[1908]: I0208 23:28:21.313999 1908 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:28:21.444192 kubelet[1908]: I0208 23:28:21.444055 1908 topology_manager.go:215] "Topology Admit Handler" podUID="4464dfff-b8bf-4ffc-aca3-1601a40df7fe" podNamespace="kube-system" podName="coredns-5dd5756b68-vqffw" Feb 8 23:28:21.459622 kubelet[1908]: I0208 23:28:21.459359 1908 topology_manager.go:215] "Topology Admit Handler" podUID="f70ae1e8-a590-490a-81c4-b5853456d1e6" podNamespace="kube-system" podName="coredns-5dd5756b68-l7bs6" Feb 8 23:28:21.463999 systemd[1]: Created slice kubepods-burstable-pod4464dfff_b8bf_4ffc_aca3_1601a40df7fe.slice. Feb 8 23:28:21.473863 systemd[1]: Created slice kubepods-burstable-podf70ae1e8_a590_490a_81c4_b5853456d1e6.slice. 
Feb 8 23:28:21.614968 kubelet[1908]: I0208 23:28:21.614870 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjzf\" (UniqueName: \"kubernetes.io/projected/4464dfff-b8bf-4ffc-aca3-1601a40df7fe-kube-api-access-bmjzf\") pod \"coredns-5dd5756b68-vqffw\" (UID: \"4464dfff-b8bf-4ffc-aca3-1601a40df7fe\") " pod="kube-system/coredns-5dd5756b68-vqffw" Feb 8 23:28:21.615194 kubelet[1908]: I0208 23:28:21.614999 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4464dfff-b8bf-4ffc-aca3-1601a40df7fe-config-volume\") pod \"coredns-5dd5756b68-vqffw\" (UID: \"4464dfff-b8bf-4ffc-aca3-1601a40df7fe\") " pod="kube-system/coredns-5dd5756b68-vqffw" Feb 8 23:28:21.615194 kubelet[1908]: I0208 23:28:21.615143 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f70ae1e8-a590-490a-81c4-b5853456d1e6-config-volume\") pod \"coredns-5dd5756b68-l7bs6\" (UID: \"f70ae1e8-a590-490a-81c4-b5853456d1e6\") " pod="kube-system/coredns-5dd5756b68-l7bs6" Feb 8 23:28:21.615458 kubelet[1908]: I0208 23:28:21.615214 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvw5\" (UniqueName: \"kubernetes.io/projected/f70ae1e8-a590-490a-81c4-b5853456d1e6-kube-api-access-6nvw5\") pod \"coredns-5dd5756b68-l7bs6\" (UID: \"f70ae1e8-a590-490a-81c4-b5853456d1e6\") " pod="kube-system/coredns-5dd5756b68-l7bs6" Feb 8 23:28:22.075140 env[1068]: time="2024-02-08T23:28:22.074924428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vqffw,Uid:4464dfff-b8bf-4ffc-aca3-1601a40df7fe,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:22.084789 env[1068]: time="2024-02-08T23:28:22.084690007Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-l7bs6,Uid:f70ae1e8-a590-490a-81c4-b5853456d1e6,Namespace:kube-system,Attempt:0,}" Feb 8 23:28:24.040506 systemd-networkd[973]: cilium_host: Link UP Feb 8 23:28:24.042948 systemd-networkd[973]: cilium_net: Link UP Feb 8 23:28:24.046025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:28:24.046873 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:28:24.046195 systemd-networkd[973]: cilium_net: Gained carrier Feb 8 23:28:24.046420 systemd-networkd[973]: cilium_host: Gained carrier Feb 8 23:28:24.171594 systemd-networkd[973]: cilium_vxlan: Link UP Feb 8 23:28:24.171607 systemd-networkd[973]: cilium_vxlan: Gained carrier Feb 8 23:28:24.433670 systemd-networkd[973]: cilium_net: Gained IPv6LL Feb 8 23:28:25.004648 systemd-networkd[973]: cilium_host: Gained IPv6LL Feb 8 23:28:25.028397 kernel: NET: Registered PF_ALG protocol family Feb 8 23:28:25.449640 systemd-networkd[973]: cilium_vxlan: Gained IPv6LL Feb 8 23:28:25.906496 systemd-networkd[973]: lxc_health: Link UP Feb 8 23:28:25.913284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:28:25.913440 systemd-networkd[973]: lxc_health: Gained carrier Feb 8 23:28:25.987551 kubelet[1908]: I0208 23:28:25.987515 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-w6lft" podStartSLOduration=11.618625274 podCreationTimestamp="2024-02-08 23:28:03 +0000 UTC" firstStartedPulling="2024-02-08 23:28:04.140604925 +0000 UTC m=+15.559809214" lastFinishedPulling="2024-02-08 23:28:15.509415796 +0000 UTC m=+26.928620135" observedRunningTime="2024-02-08 23:28:22.035333041 +0000 UTC m=+33.454537330" watchObservedRunningTime="2024-02-08 23:28:25.987436195 +0000 UTC m=+37.406640494" Feb 8 23:28:26.178496 systemd-networkd[973]: lxc01ec6927296c: Link UP Feb 8 23:28:26.197313 kernel: eth0: renamed from tmp70438 Feb 8 23:28:26.207328 systemd-networkd[973]: lxc9e2cbae8f37b: 
Link UP Feb 8 23:28:26.220690 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc01ec6927296c: link becomes ready Feb 8 23:28:26.222447 kernel: eth0: renamed from tmp6a930 Feb 8 23:28:26.219852 systemd-networkd[973]: lxc01ec6927296c: Gained carrier Feb 8 23:28:26.231473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9e2cbae8f37b: link becomes ready Feb 8 23:28:26.230239 systemd-networkd[973]: lxc9e2cbae8f37b: Gained carrier Feb 8 23:28:27.113472 systemd-networkd[973]: lxc_health: Gained IPv6LL Feb 8 23:28:27.625674 systemd-networkd[973]: lxc9e2cbae8f37b: Gained IPv6LL Feb 8 23:28:27.626376 systemd-networkd[973]: lxc01ec6927296c: Gained IPv6LL Feb 8 23:28:30.863924 env[1068]: time="2024-02-08T23:28:30.863757408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:30.864599 env[1068]: time="2024-02-08T23:28:30.864571024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:30.864699 env[1068]: time="2024-02-08T23:28:30.864675279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:30.865287 env[1068]: time="2024-02-08T23:28:30.865153015Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54 pid=3066 runtime=io.containerd.runc.v2 Feb 8 23:28:30.892176 systemd[1]: run-containerd-runc-k8s.io-6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54-runc.TV2JXi.mount: Deactivated successfully. Feb 8 23:28:30.905671 systemd[1]: Started cri-containerd-6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54.scope. Feb 8 23:28:30.954007 env[1068]: time="2024-02-08T23:28:30.953878469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:28:30.954007 env[1068]: time="2024-02-08T23:28:30.953954551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:28:30.954007 env[1068]: time="2024-02-08T23:28:30.953968688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:28:30.954515 env[1068]: time="2024-02-08T23:28:30.954459629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70438cceaa73876be5438cb62a1782a270301c3431c86324fcceab2a823d29b2 pid=3100 runtime=io.containerd.runc.v2 Feb 8 23:28:30.984063 systemd[1]: Started cri-containerd-70438cceaa73876be5438cb62a1782a270301c3431c86324fcceab2a823d29b2.scope. Feb 8 23:28:31.005995 env[1068]: time="2024-02-08T23:28:31.004945406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-l7bs6,Uid:f70ae1e8-a590-490a-81c4-b5853456d1e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54\"" Feb 8 23:28:31.012637 env[1068]: time="2024-02-08T23:28:31.012584926Z" level=info msg="CreateContainer within sandbox \"6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:28:31.061594 env[1068]: time="2024-02-08T23:28:31.061460642Z" level=info msg="CreateContainer within sandbox \"6a930edf3a162db9c1c9fe15ee7988481eabcf63e61b2d0c164907d359238b54\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca34cca9f7c12957be9419061940bcb62cd69636537a4e333264a4157814f666\"" Feb 8 23:28:31.063227 env[1068]: time="2024-02-08T23:28:31.063193412Z" level=info msg="StartContainer for \"ca34cca9f7c12957be9419061940bcb62cd69636537a4e333264a4157814f666\"" Feb 8 23:28:31.085980 env[1068]: 
time="2024-02-08T23:28:31.085903389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vqffw,Uid:4464dfff-b8bf-4ffc-aca3-1601a40df7fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"70438cceaa73876be5438cb62a1782a270301c3431c86324fcceab2a823d29b2\"" Feb 8 23:28:31.093228 env[1068]: time="2024-02-08T23:28:31.093113423Z" level=info msg="CreateContainer within sandbox \"70438cceaa73876be5438cb62a1782a270301c3431c86324fcceab2a823d29b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:28:31.106369 systemd[1]: Started cri-containerd-ca34cca9f7c12957be9419061940bcb62cd69636537a4e333264a4157814f666.scope. Feb 8 23:28:31.119500 env[1068]: time="2024-02-08T23:28:31.119403916Z" level=info msg="CreateContainer within sandbox \"70438cceaa73876be5438cb62a1782a270301c3431c86324fcceab2a823d29b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ceddbadb0021d402679a2971d88310007b17c2db4d592049fd0b5c1c7e8e2ec\"" Feb 8 23:28:31.120868 env[1068]: time="2024-02-08T23:28:31.120845751Z" level=info msg="StartContainer for \"6ceddbadb0021d402679a2971d88310007b17c2db4d592049fd0b5c1c7e8e2ec\"" Feb 8 23:28:31.142561 systemd[1]: Started cri-containerd-6ceddbadb0021d402679a2971d88310007b17c2db4d592049fd0b5c1c7e8e2ec.scope. 
Feb 8 23:28:31.191165 env[1068]: time="2024-02-08T23:28:31.191115355Z" level=info msg="StartContainer for \"ca34cca9f7c12957be9419061940bcb62cd69636537a4e333264a4157814f666\" returns successfully" Feb 8 23:28:31.226788 env[1068]: time="2024-02-08T23:28:31.226636582Z" level=info msg="StartContainer for \"6ceddbadb0021d402679a2971d88310007b17c2db4d592049fd0b5c1c7e8e2ec\" returns successfully" Feb 8 23:28:32.123842 kubelet[1908]: I0208 23:28:32.123753 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vqffw" podStartSLOduration=29.12363427 podCreationTimestamp="2024-02-08 23:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:32.089687858 +0000 UTC m=+43.508892197" watchObservedRunningTime="2024-02-08 23:28:32.12363427 +0000 UTC m=+43.542838659" Feb 8 23:28:32.147647 kubelet[1908]: I0208 23:28:32.147619 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-l7bs6" podStartSLOduration=29.14755041 podCreationTimestamp="2024-02-08 23:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:28:32.146336693 +0000 UTC m=+43.565541013" watchObservedRunningTime="2024-02-08 23:28:32.14755041 +0000 UTC m=+43.566754699" Feb 8 23:28:42.769104 systemd[1]: Started sshd@5-172.24.4.40:22-172.24.4.1:59574.service. Feb 8 23:28:44.184162 sshd[3223]: Accepted publickey for core from 172.24.4.1 port 59574 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:28:44.190328 sshd[3223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:44.205577 systemd-logind[1052]: New session 6 of user core. Feb 8 23:28:44.207040 systemd[1]: Started session-6.scope. 
Feb 8 23:28:45.085714 sshd[3223]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:45.090908 systemd[1]: sshd@5-172.24.4.40:22-172.24.4.1:59574.service: Deactivated successfully. Feb 8 23:28:45.092584 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:28:45.093702 systemd-logind[1052]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:28:45.095010 systemd-logind[1052]: Removed session 6. Feb 8 23:28:50.093608 systemd[1]: Started sshd@6-172.24.4.40:22-172.24.4.1:46890.service. Feb 8 23:28:51.626344 sshd[3239]: Accepted publickey for core from 172.24.4.1 port 46890 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:28:51.629232 sshd[3239]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:51.643438 systemd[1]: Started session-7.scope. Feb 8 23:28:51.644347 systemd-logind[1052]: New session 7 of user core. Feb 8 23:28:52.843806 sshd[3239]: pam_unix(sshd:session): session closed for user core Feb 8 23:28:52.849228 systemd[1]: sshd@6-172.24.4.40:22-172.24.4.1:46890.service: Deactivated successfully. Feb 8 23:28:52.850857 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:28:52.852146 systemd-logind[1052]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:28:52.853965 systemd-logind[1052]: Removed session 7. Feb 8 23:28:57.861226 systemd[1]: Started sshd@7-172.24.4.40:22-172.24.4.1:47324.service. Feb 8 23:28:59.104387 sshd[3252]: Accepted publickey for core from 172.24.4.1 port 47324 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:28:59.107163 sshd[3252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:28:59.130586 systemd-logind[1052]: New session 8 of user core. Feb 8 23:28:59.131441 systemd[1]: Started session-8.scope. 
Feb 8 23:29:00.123520 sshd[3252]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:00.125911 systemd[1]: sshd@7-172.24.4.40:22-172.24.4.1:47324.service: Deactivated successfully. Feb 8 23:29:00.126704 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:29:00.127927 systemd-logind[1052]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:29:00.128747 systemd-logind[1052]: Removed session 8. Feb 8 23:29:05.133755 systemd[1]: Started sshd@8-172.24.4.40:22-172.24.4.1:57860.service. Feb 8 23:29:06.472389 sshd[3266]: Accepted publickey for core from 172.24.4.1 port 57860 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:06.476815 sshd[3266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:06.489727 systemd-logind[1052]: New session 9 of user core. Feb 8 23:29:06.492451 systemd[1]: Started session-9.scope. Feb 8 23:29:07.357304 sshd[3266]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:07.364601 systemd[1]: sshd@8-172.24.4.40:22-172.24.4.1:57860.service: Deactivated successfully. Feb 8 23:29:07.365958 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:29:07.367619 systemd-logind[1052]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:29:07.371545 systemd[1]: Started sshd@9-172.24.4.40:22-172.24.4.1:57868.service. Feb 8 23:29:07.374976 systemd-logind[1052]: Removed session 9. Feb 8 23:29:08.759388 sshd[3281]: Accepted publickey for core from 172.24.4.1 port 57868 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:08.762049 sshd[3281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:08.775718 systemd-logind[1052]: New session 10 of user core. Feb 8 23:29:08.775953 systemd[1]: Started session-10.scope. Feb 8 23:29:10.762712 sshd[3281]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:10.770065 systemd[1]: Started sshd@10-172.24.4.40:22-172.24.4.1:57880.service. 
Feb 8 23:29:10.784792 systemd[1]: sshd@9-172.24.4.40:22-172.24.4.1:57868.service: Deactivated successfully. Feb 8 23:29:10.785937 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:29:10.788472 systemd-logind[1052]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:29:10.793239 systemd-logind[1052]: Removed session 10. Feb 8 23:29:11.928707 sshd[3290]: Accepted publickey for core from 172.24.4.1 port 57880 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:11.931993 sshd[3290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:11.950436 systemd[1]: Started session-11.scope. Feb 8 23:29:11.951454 systemd-logind[1052]: New session 11 of user core. Feb 8 23:29:12.853456 sshd[3290]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:12.859428 systemd[1]: sshd@10-172.24.4.40:22-172.24.4.1:57880.service: Deactivated successfully. Feb 8 23:29:12.861369 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:29:12.863027 systemd-logind[1052]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:29:12.865201 systemd-logind[1052]: Removed session 11. Feb 8 23:29:17.864703 systemd[1]: Started sshd@11-172.24.4.40:22-172.24.4.1:40752.service. Feb 8 23:29:19.359336 sshd[3304]: Accepted publickey for core from 172.24.4.1 port 40752 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:19.364563 sshd[3304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:19.384429 systemd-logind[1052]: New session 12 of user core. Feb 8 23:29:19.387613 systemd[1]: Started session-12.scope. Feb 8 23:29:20.122511 sshd[3304]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:20.128947 systemd[1]: Started sshd@12-172.24.4.40:22-172.24.4.1:40754.service. Feb 8 23:29:20.129803 systemd[1]: sshd@11-172.24.4.40:22-172.24.4.1:40752.service: Deactivated successfully. 
Feb 8 23:29:20.130784 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:29:20.133802 systemd-logind[1052]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:29:20.136201 systemd-logind[1052]: Removed session 12. Feb 8 23:29:21.423626 sshd[3315]: Accepted publickey for core from 172.24.4.1 port 40754 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:21.425797 sshd[3315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:21.436610 systemd-logind[1052]: New session 13 of user core. Feb 8 23:29:21.437178 systemd[1]: Started session-13.scope. Feb 8 23:29:23.337176 sshd[3315]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:23.347794 systemd[1]: Started sshd@13-172.24.4.40:22-172.24.4.1:40760.service. Feb 8 23:29:23.351336 systemd[1]: sshd@12-172.24.4.40:22-172.24.4.1:40754.service: Deactivated successfully. Feb 8 23:29:23.353060 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:29:23.355681 systemd-logind[1052]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:29:23.358833 systemd-logind[1052]: Removed session 13. Feb 8 23:29:24.631433 sshd[3324]: Accepted publickey for core from 172.24.4.1 port 40760 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:24.632950 sshd[3324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:24.645034 systemd-logind[1052]: New session 14 of user core. Feb 8 23:29:24.646478 systemd[1]: Started session-14.scope. Feb 8 23:29:26.702141 sshd[3324]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:26.708536 systemd[1]: sshd@13-172.24.4.40:22-172.24.4.1:40760.service: Deactivated successfully. Feb 8 23:29:26.711357 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:29:26.713689 systemd-logind[1052]: Session 14 logged out. Waiting for processes to exit. 
Feb 8 23:29:26.719041 systemd[1]: Started sshd@14-172.24.4.40:22-172.24.4.1:37496.service. Feb 8 23:29:26.723531 systemd-logind[1052]: Removed session 14. Feb 8 23:29:27.804075 sshd[3342]: Accepted publickey for core from 172.24.4.1 port 37496 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:27.806601 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:27.818317 systemd[1]: Started session-15.scope. Feb 8 23:29:27.819938 systemd-logind[1052]: New session 15 of user core. Feb 8 23:29:29.373604 sshd[3342]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:29.382101 systemd[1]: sshd@14-172.24.4.40:22-172.24.4.1:37496.service: Deactivated successfully. Feb 8 23:29:29.384395 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:29:29.386865 systemd-logind[1052]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:29:29.391648 systemd[1]: Started sshd@15-172.24.4.40:22-172.24.4.1:37506.service. Feb 8 23:29:29.397949 systemd-logind[1052]: Removed session 15. Feb 8 23:29:30.594382 sshd[3352]: Accepted publickey for core from 172.24.4.1 port 37506 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:30.597038 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:30.607948 systemd-logind[1052]: New session 16 of user core. Feb 8 23:29:30.608889 systemd[1]: Started session-16.scope. Feb 8 23:29:31.693448 sshd[3352]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:31.700016 systemd[1]: sshd@15-172.24.4.40:22-172.24.4.1:37506.service: Deactivated successfully. Feb 8 23:29:31.702324 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:29:31.704393 systemd-logind[1052]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:29:31.707770 systemd-logind[1052]: Removed session 16. Feb 8 23:29:36.704903 systemd[1]: Started sshd@16-172.24.4.40:22-172.24.4.1:44774.service. 
Feb 8 23:29:38.070056 sshd[3370]: Accepted publickey for core from 172.24.4.1 port 44774 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:38.072915 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:38.084126 systemd-logind[1052]: New session 17 of user core. Feb 8 23:29:38.085502 systemd[1]: Started session-17.scope. Feb 8 23:29:38.773933 sshd[3370]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:38.779480 systemd[1]: sshd@16-172.24.4.40:22-172.24.4.1:44774.service: Deactivated successfully. Feb 8 23:29:38.781551 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:29:38.783725 systemd-logind[1052]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:29:38.787188 systemd-logind[1052]: Removed session 17. Feb 8 23:29:43.785129 systemd[1]: Started sshd@17-172.24.4.40:22-172.24.4.1:44790.service. Feb 8 23:29:45.012504 sshd[3382]: Accepted publickey for core from 172.24.4.1 port 44790 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:45.015503 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:45.026449 systemd[1]: Started session-18.scope. Feb 8 23:29:45.027873 systemd-logind[1052]: New session 18 of user core. Feb 8 23:29:45.640869 sshd[3382]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:45.644306 systemd[1]: sshd@17-172.24.4.40:22-172.24.4.1:44790.service: Deactivated successfully. Feb 8 23:29:45.645066 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:29:45.646153 systemd-logind[1052]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:29:45.647796 systemd-logind[1052]: Removed session 18. Feb 8 23:29:50.650403 systemd[1]: Started sshd@18-172.24.4.40:22-172.24.4.1:52830.service. 
Feb 8 23:29:51.841687 sshd[3396]: Accepted publickey for core from 172.24.4.1 port 52830 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:51.845522 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:51.856840 systemd-logind[1052]: New session 19 of user core. Feb 8 23:29:51.859192 systemd[1]: Started session-19.scope. Feb 8 23:29:52.643606 sshd[3396]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:52.652474 systemd[1]: Started sshd@19-172.24.4.40:22-172.24.4.1:52840.service. Feb 8 23:29:52.653790 systemd[1]: sshd@18-172.24.4.40:22-172.24.4.1:52830.service: Deactivated successfully. Feb 8 23:29:52.655466 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:29:52.661116 systemd-logind[1052]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:29:52.664853 systemd-logind[1052]: Removed session 19. Feb 8 23:29:53.998292 sshd[3407]: Accepted publickey for core from 172.24.4.1 port 52840 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:54.001907 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:54.017891 systemd[1]: Started session-20.scope. Feb 8 23:29:54.021459 systemd-logind[1052]: New session 20 of user core. Feb 8 23:29:57.040499 systemd[1]: run-containerd-runc-k8s.io-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22-runc.eI8GJH.mount: Deactivated successfully. 
Feb 8 23:29:57.071596 env[1068]: time="2024-02-08T23:29:57.071525298Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:29:57.074339 env[1068]: time="2024-02-08T23:29:57.074303564Z" level=info msg="StopContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" with timeout 30 (s)" Feb 8 23:29:57.074783 env[1068]: time="2024-02-08T23:29:57.074758307Z" level=info msg="Stop container \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" with signal terminated" Feb 8 23:29:57.084144 env[1068]: time="2024-02-08T23:29:57.084109994Z" level=info msg="StopContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" with timeout 2 (s)" Feb 8 23:29:57.084991 env[1068]: time="2024-02-08T23:29:57.084658013Z" level=info msg="Stop container \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" with signal terminated" Feb 8 23:29:57.086690 systemd[1]: cri-containerd-6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf.scope: Deactivated successfully. Feb 8 23:29:57.098220 systemd-networkd[973]: lxc_health: Link DOWN Feb 8 23:29:57.098230 systemd-networkd[973]: lxc_health: Lost carrier Feb 8 23:29:57.136313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf-rootfs.mount: Deactivated successfully. Feb 8 23:29:57.137698 systemd[1]: cri-containerd-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22.scope: Deactivated successfully. Feb 8 23:29:57.137987 systemd[1]: cri-containerd-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22.scope: Consumed 8.962s CPU time. 
Feb 8 23:29:57.150382 env[1068]: time="2024-02-08T23:29:57.150324735Z" level=info msg="shim disconnected" id=6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf Feb 8 23:29:57.150666 env[1068]: time="2024-02-08T23:29:57.150643914Z" level=warning msg="cleaning up after shim disconnected" id=6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf namespace=k8s.io Feb 8 23:29:57.150765 env[1068]: time="2024-02-08T23:29:57.150748871Z" level=info msg="cleaning up dead shim" Feb 8 23:29:57.161236 env[1068]: time="2024-02-08T23:29:57.161189502Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3469 runtime=io.containerd.runc.v2\n" Feb 8 23:29:57.167095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22-rootfs.mount: Deactivated successfully. Feb 8 23:29:57.168498 env[1068]: time="2024-02-08T23:29:57.168464491Z" level=info msg="StopContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" returns successfully" Feb 8 23:29:57.170122 env[1068]: time="2024-02-08T23:29:57.170097035Z" level=info msg="StopPodSandbox for \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\"" Feb 8 23:29:57.170309 env[1068]: time="2024-02-08T23:29:57.170240314Z" level=info msg="Container to stop \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.178941 systemd[1]: cri-containerd-d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66.scope: Deactivated successfully. 
Feb 8 23:29:57.180681 env[1068]: time="2024-02-08T23:29:57.180640900Z" level=info msg="shim disconnected" id=d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22 Feb 8 23:29:57.180888 env[1068]: time="2024-02-08T23:29:57.180856785Z" level=warning msg="cleaning up after shim disconnected" id=d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22 namespace=k8s.io Feb 8 23:29:57.180968 env[1068]: time="2024-02-08T23:29:57.180951964Z" level=info msg="cleaning up dead shim" Feb 8 23:29:57.197042 env[1068]: time="2024-02-08T23:29:57.196971268Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3495 runtime=io.containerd.runc.v2\n" Feb 8 23:29:57.201836 env[1068]: time="2024-02-08T23:29:57.201780768Z" level=info msg="StopContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" returns successfully" Feb 8 23:29:57.202743 env[1068]: time="2024-02-08T23:29:57.202719921Z" level=info msg="StopPodSandbox for \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\"" Feb 8 23:29:57.202924 env[1068]: time="2024-02-08T23:29:57.202871937Z" level=info msg="Container to stop \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.203018 env[1068]: time="2024-02-08T23:29:57.202998334Z" level=info msg="Container to stop \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.203124 env[1068]: time="2024-02-08T23:29:57.203100555Z" level=info msg="Container to stop \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.203234 env[1068]: time="2024-02-08T23:29:57.203210983Z" level=info msg="Container to stop 
\"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.203354 env[1068]: time="2024-02-08T23:29:57.203331068Z" level=info msg="Container to stop \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:29:57.214122 systemd[1]: cri-containerd-a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2.scope: Deactivated successfully. Feb 8 23:29:57.220672 env[1068]: time="2024-02-08T23:29:57.220570714Z" level=info msg="shim disconnected" id=d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66 Feb 8 23:29:57.221002 env[1068]: time="2024-02-08T23:29:57.220982236Z" level=warning msg="cleaning up after shim disconnected" id=d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66 namespace=k8s.io Feb 8 23:29:57.221100 env[1068]: time="2024-02-08T23:29:57.221084258Z" level=info msg="cleaning up dead shim" Feb 8 23:29:57.232989 env[1068]: time="2024-02-08T23:29:57.232929586Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3526 runtime=io.containerd.runc.v2\n" Feb 8 23:29:57.233466 env[1068]: time="2024-02-08T23:29:57.233393497Z" level=info msg="TearDown network for sandbox \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\" successfully" Feb 8 23:29:57.233466 env[1068]: time="2024-02-08T23:29:57.233431638Z" level=info msg="StopPodSandbox for \"d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66\" returns successfully" Feb 8 23:29:57.264701 env[1068]: time="2024-02-08T23:29:57.264640770Z" level=info msg="shim disconnected" id=a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2 Feb 8 23:29:57.264701 env[1068]: time="2024-02-08T23:29:57.264700303Z" level=warning msg="cleaning up after shim disconnected" 
id=a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2 namespace=k8s.io Feb 8 23:29:57.264926 env[1068]: time="2024-02-08T23:29:57.264713046Z" level=info msg="cleaning up dead shim" Feb 8 23:29:57.272007 env[1068]: time="2024-02-08T23:29:57.271965632Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:29:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3552 runtime=io.containerd.runc.v2\n" Feb 8 23:29:57.272323 env[1068]: time="2024-02-08T23:29:57.272247561Z" level=info msg="TearDown network for sandbox \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" successfully" Feb 8 23:29:57.272323 env[1068]: time="2024-02-08T23:29:57.272292866Z" level=info msg="StopPodSandbox for \"a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2\" returns successfully" Feb 8 23:29:57.295085 kubelet[1908]: I0208 23:29:57.294934 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9sbn\" (UniqueName: \"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-kube-api-access-d9sbn\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.295085 kubelet[1908]: I0208 23:29:57.294993 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cni-path\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.295085 kubelet[1908]: I0208 23:29:57.295017 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-bpf-maps\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.295085 kubelet[1908]: I0208 23:29:57.295045 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-xtables-lock\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.295085 kubelet[1908]: I0208 23:29:57.295073 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g62sp\" (UniqueName: \"kubernetes.io/projected/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-kube-api-access-g62sp\") pod \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\" (UID: \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\") " Feb 8 23:29:57.300492 kubelet[1908]: I0208 23:29:57.295097 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hostproc\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300492 kubelet[1908]: I0208 23:29:57.295119 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-net\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300492 kubelet[1908]: I0208 23:29:57.295143 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-kernel\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300492 kubelet[1908]: I0208 23:29:57.295168 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hubble-tls\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300492 
kubelet[1908]: I0208 23:29:57.295195 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34dd5b45-0ef4-46da-8faf-4118a561c9c4-clustermesh-secrets\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300492 kubelet[1908]: I0208 23:29:57.295232 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-run\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295358 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-config-path\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295396 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-cilium-config-path\") pod \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\" (UID: \"a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295419 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-lib-modules\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295443 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-cgroup\") pod 
\"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295465 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-etc-cni-netd\") pod \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\" (UID: \"34dd5b45-0ef4-46da-8faf-4118a561c9c4\") " Feb 8 23:29:57.300989 kubelet[1908]: I0208 23:29:57.295535 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.301338 kubelet[1908]: I0208 23:29:57.295710 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.305072 kubelet[1908]: I0208 23:29:57.304201 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cni-path" (OuterVolumeSpecName: "cni-path") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.305072 kubelet[1908]: I0208 23:29:57.304271 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.305072 kubelet[1908]: I0208 23:29:57.304296 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.305072 kubelet[1908]: I0208 23:29:57.304501 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hostproc" (OuterVolumeSpecName: "hostproc") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.305072 kubelet[1908]: I0208 23:29:57.304526 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.308091 kubelet[1908]: I0208 23:29:57.308052 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:29:57.314974 kubelet[1908]: I0208 23:29:57.314926 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:29:57.316064 kubelet[1908]: I0208 23:29:57.316041 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dd5b45-0ef4-46da-8faf-4118a561c9c4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:29:57.316174 kubelet[1908]: I0208 23:29:57.316157 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.316334 kubelet[1908]: I0208 23:29:57.316298 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.316452 kubelet[1908]: I0208 23:29:57.316435 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:29:57.320180 kubelet[1908]: I0208 23:29:57.320138 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" (UID: "a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:29:57.320979 kubelet[1908]: I0208 23:29:57.320939 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-kube-api-access-d9sbn" (OuterVolumeSpecName: "kube-api-access-d9sbn") pod "34dd5b45-0ef4-46da-8faf-4118a561c9c4" (UID: "34dd5b45-0ef4-46da-8faf-4118a561c9c4"). InnerVolumeSpecName "kube-api-access-d9sbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:29:57.322409 kubelet[1908]: I0208 23:29:57.322377 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-kube-api-access-g62sp" (OuterVolumeSpecName: "kube-api-access-g62sp") pod "a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" (UID: "a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc"). InnerVolumeSpecName "kube-api-access-g62sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:29:57.358166 kubelet[1908]: I0208 23:29:57.358107 1908 scope.go:117] "RemoveContainer" containerID="6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf" Feb 8 23:29:57.369744 env[1068]: time="2024-02-08T23:29:57.366773744Z" level=info msg="RemoveContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\"" Feb 8 23:29:57.371229 systemd[1]: Removed slice kubepods-besteffort-poda8ed573f_e1dc_4b88_a3ea_eae00bf7edcc.slice. Feb 8 23:29:57.380233 env[1068]: time="2024-02-08T23:29:57.380183639Z" level=info msg="RemoveContainer for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" returns successfully" Feb 8 23:29:57.386983 kubelet[1908]: I0208 23:29:57.386960 1908 scope.go:117] "RemoveContainer" containerID="6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf" Feb 8 23:29:57.392392 env[1068]: time="2024-02-08T23:29:57.392314163Z" level=error msg="ContainerStatus for \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\": not found" Feb 8 23:29:57.393288 kubelet[1908]: E0208 23:29:57.393269 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\": not 
found" containerID="6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf" Feb 8 23:29:57.399083 kubelet[1908]: I0208 23:29:57.399068 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf"} err="failed to get container status \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"6124bfcd7ef514c6204732df4e821366dc045cf5f62d93b164ed7d3490e2c4bf\": not found" Feb 8 23:29:57.399224 kubelet[1908]: I0208 23:29:57.399194 1908 scope.go:117] "RemoveContainer" containerID="d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22" Feb 8 23:29:57.403667 systemd[1]: Removed slice kubepods-burstable-pod34dd5b45_0ef4_46da_8faf_4118a561c9c4.slice. Feb 8 23:29:57.403764 systemd[1]: kubepods-burstable-pod34dd5b45_0ef4_46da_8faf_4118a561c9c4.slice: Consumed 9.085s CPU time. Feb 8 23:29:57.404289 env[1068]: time="2024-02-08T23:29:57.404180610Z" level=info msg="RemoveContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\"" Feb 8 23:29:57.406204 kubelet[1908]: I0208 23:29:57.406187 1908 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hostproc\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406391 kubelet[1908]: I0208 23:29:57.406379 1908 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-xtables-lock\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406504 kubelet[1908]: I0208 23:29:57.406493 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g62sp\" (UniqueName: \"kubernetes.io/projected/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-kube-api-access-g62sp\") on 
node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406593 kubelet[1908]: I0208 23:29:57.406582 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-net\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406698 kubelet[1908]: I0208 23:29:57.406688 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-host-proc-sys-kernel\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406775 kubelet[1908]: I0208 23:29:57.406765 1908 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-hubble-tls\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.406878 kubelet[1908]: I0208 23:29:57.406859 1908 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34dd5b45-0ef4-46da-8faf-4118a561c9c4-clustermesh-secrets\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.407077 kubelet[1908]: I0208 23:29:57.407065 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-run\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.407279 kubelet[1908]: I0208 23:29:57.407241 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-config-path\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.407385 kubelet[1908]: I0208 23:29:57.407374 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc-cilium-config-path\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.407517 kubelet[1908]: I0208 23:29:57.407505 1908 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-lib-modules\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.408751 kubelet[1908]: I0208 23:29:57.408739 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cilium-cgroup\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.408896 kubelet[1908]: I0208 23:29:57.408867 1908 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-etc-cni-netd\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.408992 kubelet[1908]: I0208 23:29:57.408982 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-d9sbn\" (UniqueName: \"kubernetes.io/projected/34dd5b45-0ef4-46da-8faf-4118a561c9c4-kube-api-access-d9sbn\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.409087 kubelet[1908]: I0208 23:29:57.409077 1908 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-cni-path\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.409179 kubelet[1908]: I0208 23:29:57.409169 1908 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34dd5b45-0ef4-46da-8faf-4118a561c9c4-bpf-maps\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:29:57.410350 env[1068]: time="2024-02-08T23:29:57.410309056Z" level=info 
msg="RemoveContainer for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" returns successfully" Feb 8 23:29:57.411117 kubelet[1908]: I0208 23:29:57.411100 1908 scope.go:117] "RemoveContainer" containerID="1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd" Feb 8 23:29:57.412630 env[1068]: time="2024-02-08T23:29:57.412566524Z" level=info msg="RemoveContainer for \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\"" Feb 8 23:29:57.420173 env[1068]: time="2024-02-08T23:29:57.419904871Z" level=info msg="RemoveContainer for \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\" returns successfully" Feb 8 23:29:57.420442 kubelet[1908]: I0208 23:29:57.420424 1908 scope.go:117] "RemoveContainer" containerID="24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a" Feb 8 23:29:57.422663 env[1068]: time="2024-02-08T23:29:57.422628334Z" level=info msg="RemoveContainer for \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\"" Feb 8 23:29:57.429662 env[1068]: time="2024-02-08T23:29:57.429577610Z" level=info msg="RemoveContainer for \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\" returns successfully" Feb 8 23:29:57.429791 kubelet[1908]: I0208 23:29:57.429777 1908 scope.go:117] "RemoveContainer" containerID="873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13" Feb 8 23:29:57.433026 env[1068]: time="2024-02-08T23:29:57.432080379Z" level=info msg="RemoveContainer for \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\"" Feb 8 23:29:57.440639 env[1068]: time="2024-02-08T23:29:57.438931532Z" level=info msg="RemoveContainer for \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\" returns successfully" Feb 8 23:29:57.441025 kubelet[1908]: I0208 23:29:57.440980 1908 scope.go:117] "RemoveContainer" containerID="f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032" Feb 8 23:29:57.443991 env[1068]: 
time="2024-02-08T23:29:57.443956997Z" level=info msg="RemoveContainer for \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\"" Feb 8 23:29:57.451930 env[1068]: time="2024-02-08T23:29:57.451905609Z" level=info msg="RemoveContainer for \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\" returns successfully" Feb 8 23:29:57.452164 kubelet[1908]: I0208 23:29:57.452148 1908 scope.go:117] "RemoveContainer" containerID="d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22" Feb 8 23:29:57.452467 env[1068]: time="2024-02-08T23:29:57.452415667Z" level=error msg="ContainerStatus for \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\": not found" Feb 8 23:29:57.452705 kubelet[1908]: E0208 23:29:57.452692 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\": not found" containerID="d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22" Feb 8 23:29:57.452802 kubelet[1908]: I0208 23:29:57.452790 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22"} err="failed to get container status \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\": rpc error: code = NotFound desc = an error occurred when try to find container \"d72f8b6d1dc69b04f7a2ec603636e9b5d8a40b6a05fce87b250df8df449aac22\": not found" Feb 8 23:29:57.452873 kubelet[1908]: I0208 23:29:57.452863 1908 scope.go:117] "RemoveContainer" containerID="1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd" Feb 8 23:29:57.453098 env[1068]: time="2024-02-08T23:29:57.453057612Z" 
level=error msg="ContainerStatus for \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\": not found" Feb 8 23:29:57.453349 kubelet[1908]: E0208 23:29:57.453311 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\": not found" containerID="1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd" Feb 8 23:29:57.453406 kubelet[1908]: I0208 23:29:57.453382 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd"} err="failed to get container status \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"1acbb9ac588e3bae441af82129979d85769b7ef53a3cdbed77821a54d68ba8dd\": not found" Feb 8 23:29:57.453406 kubelet[1908]: I0208 23:29:57.453397 1908 scope.go:117] "RemoveContainer" containerID="24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a" Feb 8 23:29:57.453610 env[1068]: time="2024-02-08T23:29:57.453569323Z" level=error msg="ContainerStatus for \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\": not found" Feb 8 23:29:57.453809 kubelet[1908]: E0208 23:29:57.453797 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\": not found" 
containerID="24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a" Feb 8 23:29:57.453902 kubelet[1908]: I0208 23:29:57.453894 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a"} err="failed to get container status \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\": rpc error: code = NotFound desc = an error occurred when try to find container \"24eed611d2bdf94ce27dcb93093e77de8cd1f12394bcb1eebda6f6fb5175530a\": not found" Feb 8 23:29:57.453975 kubelet[1908]: I0208 23:29:57.453966 1908 scope.go:117] "RemoveContainer" containerID="873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13" Feb 8 23:29:57.454194 env[1068]: time="2024-02-08T23:29:57.454153199Z" level=error msg="ContainerStatus for \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\": not found" Feb 8 23:29:57.454384 kubelet[1908]: E0208 23:29:57.454373 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\": not found" containerID="873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13" Feb 8 23:29:57.454473 kubelet[1908]: I0208 23:29:57.454463 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13"} err="failed to get container status \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\": rpc error: code = NotFound desc = an error occurred when try to find container \"873629895b0da542d7d35670b1b1e8f6338badaad19acf838865b8f84039fa13\": not found" Feb 8 
23:29:57.454537 kubelet[1908]: I0208 23:29:57.454528 1908 scope.go:117] "RemoveContainer" containerID="f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032" Feb 8 23:29:57.454809 env[1068]: time="2024-02-08T23:29:57.454718981Z" level=error msg="ContainerStatus for \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\": not found" Feb 8 23:29:57.455016 kubelet[1908]: E0208 23:29:57.454990 1908 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\": not found" containerID="f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032" Feb 8 23:29:57.455066 kubelet[1908]: I0208 23:29:57.455038 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032"} err="failed to get container status \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2762af2206e2eb6e8088197cf6104d1e1c472157181f53e2ec298b31f033032\": not found" Feb 8 23:29:58.034361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66-rootfs.mount: Deactivated successfully. Feb 8 23:29:58.034588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0d899cbe38f48100ac885ba5784e493c8c5d254d2f22034f2b879e92e1a4d66-shm.mount: Deactivated successfully. Feb 8 23:29:58.034755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2-rootfs.mount: Deactivated successfully. 
Feb 8 23:29:58.034943 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4a279e6c5dc25e0056f1f83198288a89a5293007aa91b2fbbd07182165bd2a2-shm.mount: Deactivated successfully. Feb 8 23:29:58.035133 systemd[1]: var-lib-kubelet-pods-a8ed573f\x2de1dc\x2d4b88\x2da3ea\x2deae00bf7edcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg62sp.mount: Deactivated successfully. Feb 8 23:29:58.035347 systemd[1]: var-lib-kubelet-pods-34dd5b45\x2d0ef4\x2d46da\x2d8faf\x2d4118a561c9c4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9sbn.mount: Deactivated successfully. Feb 8 23:29:58.035501 systemd[1]: var-lib-kubelet-pods-34dd5b45\x2d0ef4\x2d46da\x2d8faf\x2d4118a561c9c4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:29:58.035648 systemd[1]: var-lib-kubelet-pods-34dd5b45\x2d0ef4\x2d46da\x2d8faf\x2d4118a561c9c4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:29:58.790348 kubelet[1908]: I0208 23:29:58.790247 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" path="/var/lib/kubelet/pods/34dd5b45-0ef4-46da-8faf-4118a561c9c4/volumes" Feb 8 23:29:58.791901 kubelet[1908]: I0208 23:29:58.791846 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" path="/var/lib/kubelet/pods/a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc/volumes" Feb 8 23:29:58.933849 kubelet[1908]: E0208 23:29:58.933769 1908 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:29:58.938658 sshd[3407]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:58.953865 systemd[1]: Started sshd@20-172.24.4.40:22-172.24.4.1:44430.service. Feb 8 23:29:58.960141 systemd[1]: sshd@19-172.24.4.40:22-172.24.4.1:52840.service: Deactivated successfully. 
Feb 8 23:29:58.961920 systemd[1]: session-20.scope: Deactivated successfully.
Feb 8 23:29:58.962499 systemd[1]: session-20.scope: Consumed 1.486s CPU time.
Feb 8 23:29:58.966717 systemd-logind[1052]: Session 20 logged out. Waiting for processes to exit.
Feb 8 23:29:58.971808 systemd-logind[1052]: Removed session 20.
Feb 8 23:30:00.017524 sshd[3569]: Accepted publickey for core from 172.24.4.1 port 44430 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb 8 23:30:00.020498 sshd[3569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:30:00.031878 systemd-logind[1052]: New session 21 of user core.
Feb 8 23:30:00.032777 systemd[1]: Started session-21.scope.
Feb 8 23:30:01.292439 kubelet[1908]: I0208 23:30:01.292407 1908 topology_manager.go:215] "Topology Admit Handler" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" podNamespace="kube-system" podName="cilium-zlksq"
Feb 8 23:30:01.297172 kubelet[1908]: E0208 23:30:01.297139 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="mount-cgroup"
Feb 8 23:30:01.297391 kubelet[1908]: E0208 23:30:01.297377 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="clean-cilium-state"
Feb 8 23:30:01.297475 kubelet[1908]: E0208 23:30:01.297464 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" containerName="cilium-operator"
Feb 8 23:30:01.297548 kubelet[1908]: E0208 23:30:01.297538 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="cilium-agent"
Feb 8 23:30:01.297618 kubelet[1908]: E0208 23:30:01.297608 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="apply-sysctl-overwrites"
Feb 8 23:30:01.297684 kubelet[1908]: E0208 23:30:01.297674 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="mount-bpf-fs"
Feb 8 23:30:01.297777 kubelet[1908]: I0208 23:30:01.297766 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="a8ed573f-e1dc-4b88-a3ea-eae00bf7edcc" containerName="cilium-operator"
Feb 8 23:30:01.297848 kubelet[1908]: I0208 23:30:01.297836 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="34dd5b45-0ef4-46da-8faf-4118a561c9c4" containerName="cilium-agent"
Feb 8 23:30:01.314175 systemd[1]: Created slice kubepods-burstable-pode336882b_ca8b_4c3d_b45b_2c654f45888a.slice.
Feb 8 23:30:01.338786 kubelet[1908]: I0208 23:30:01.338759 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-config-path\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339091 kubelet[1908]: I0208 23:30:01.339063 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-ipsec-secrets\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339244 kubelet[1908]: I0208 23:30:01.339229 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-net\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339377 kubelet[1908]: I0208 23:30:01.339363 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-run\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339483 kubelet[1908]: I0208 23:30:01.339471 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-hostproc\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339624 kubelet[1908]: I0208 23:30:01.339594 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgpz\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-kube-api-access-txgpz\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339669 kubelet[1908]: I0208 23:30:01.339646 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-cgroup\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339708 kubelet[1908]: I0208 23:30:01.339692 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-etc-cni-netd\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339771 kubelet[1908]: I0208 23:30:01.339740 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-xtables-lock\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339815 kubelet[1908]: I0208 23:30:01.339782 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-bpf-maps\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339815 kubelet[1908]: I0208 23:30:01.339810 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cni-path\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339874 kubelet[1908]: I0208 23:30:01.339837 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-kernel\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339874 kubelet[1908]: I0208 23:30:01.339865 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-lib-modules\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339936 kubelet[1908]: I0208 23:30:01.339892 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-clustermesh-secrets\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.339936 kubelet[1908]: I0208 23:30:01.339918 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-hubble-tls\") pod \"cilium-zlksq\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") " pod="kube-system/cilium-zlksq"
Feb 8 23:30:01.348732 kubelet[1908]: I0208 23:30:01.348702 1908 setters.go:552] "Node became not ready" node="ci-3510-3-2-4-bfb6381473.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-08T23:30:01Z","lastTransitionTime":"2024-02-08T23:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 8 23:30:01.447845 sshd[3569]: pam_unix(sshd:session): session closed for user core
Feb 8 23:30:01.451323 systemd[1]: sshd@20-172.24.4.40:22-172.24.4.1:44430.service: Deactivated successfully.
Feb 8 23:30:01.451927 systemd[1]: session-21.scope: Deactivated successfully.
Feb 8 23:30:01.454455 systemd-logind[1052]: Session 21 logged out. Waiting for processes to exit.
Feb 8 23:30:01.457443 systemd[1]: Started sshd@21-172.24.4.40:22-172.24.4.1:44442.service.
Feb 8 23:30:01.458648 systemd-logind[1052]: Removed session 21.
Feb 8 23:30:01.622241 env[1068]: time="2024-02-08T23:30:01.621826399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlksq,Uid:e336882b-ca8b-4c3d-b45b-2c654f45888a,Namespace:kube-system,Attempt:0,}"
Feb 8 23:30:01.642258 env[1068]: time="2024-02-08T23:30:01.640809546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:30:01.642258 env[1068]: time="2024-02-08T23:30:01.640847948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:30:01.642258 env[1068]: time="2024-02-08T23:30:01.640860852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:30:01.642258 env[1068]: time="2024-02-08T23:30:01.641102196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d pid=3594 runtime=io.containerd.runc.v2
Feb 8 23:30:01.666816 systemd[1]: Started cri-containerd-e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d.scope.
Feb 8 23:30:01.713659 env[1068]: time="2024-02-08T23:30:01.713605437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zlksq,Uid:e336882b-ca8b-4c3d-b45b-2c654f45888a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\""
Feb 8 23:30:01.718339 env[1068]: time="2024-02-08T23:30:01.718294379Z" level=info msg="CreateContainer within sandbox \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:30:01.735478 env[1068]: time="2024-02-08T23:30:01.735439756Z" level=info msg="CreateContainer within sandbox \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\""
Feb 8 23:30:01.736226 env[1068]: time="2024-02-08T23:30:01.736085448Z" level=info msg="StartContainer for \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\""
Feb 8 23:30:01.760725 systemd[1]: Started cri-containerd-80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd.scope.
Feb 8 23:30:01.774317 systemd[1]: cri-containerd-80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd.scope: Deactivated successfully.
Feb 8 23:30:01.800860 env[1068]: time="2024-02-08T23:30:01.800796821Z" level=info msg="shim disconnected" id=80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd
Feb 8 23:30:01.801217 env[1068]: time="2024-02-08T23:30:01.801186643Z" level=warning msg="cleaning up after shim disconnected" id=80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd namespace=k8s.io
Feb 8 23:30:01.801389 env[1068]: time="2024-02-08T23:30:01.801371359Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:01.818752 env[1068]: time="2024-02-08T23:30:01.818473155Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3657 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:30:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 8 23:30:01.819436 env[1068]: time="2024-02-08T23:30:01.819172637Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Feb 8 23:30:01.819947 env[1068]: time="2024-02-08T23:30:01.819871681Z" level=error msg="Failed to pipe stderr of container \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\"" error="reading from a closed fifo"
Feb 8 23:30:01.820483 env[1068]: time="2024-02-08T23:30:01.820398339Z" level=error msg="Failed to pipe stdout of container \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\"" error="reading from a closed fifo"
Feb 8 23:30:01.824611 env[1068]: time="2024-02-08T23:30:01.824504758Z" level=error msg="StartContainer for \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 8 23:30:01.825075 kubelet[1908]: E0208 23:30:01.824930 1908 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd"
Feb 8 23:30:01.831146 kubelet[1908]: E0208 23:30:01.831046 1908 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 8 23:30:01.831146 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 8 23:30:01.831146 kubelet[1908]: rm /hostbin/cilium-mount
Feb 8 23:30:01.831314 kubelet[1908]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-txgpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zlksq_kube-system(e336882b-ca8b-4c3d-b45b-2c654f45888a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 8 23:30:01.831314 kubelet[1908]: E0208 23:30:01.831111 1908 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlksq" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a"
Feb 8 23:30:02.395334 env[1068]: time="2024-02-08T23:30:02.395096302Z" level=info msg="CreateContainer within sandbox \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 8 23:30:02.425180 env[1068]: time="2024-02-08T23:30:02.425078147Z" level=info msg="CreateContainer within sandbox \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\""
Feb 8 23:30:02.426068 env[1068]: time="2024-02-08T23:30:02.426006569Z" level=info msg="StartContainer for \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\""
Feb 8 23:30:02.453810 sshd[3583]: Accepted publickey for core from 172.24.4.1 port 44442 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI
Feb 8 23:30:02.460873 sshd[3583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:30:02.478783 systemd[1]: Started session-22.scope.
Feb 8 23:30:02.480156 systemd-logind[1052]: New session 22 of user core.
Feb 8 23:30:02.491027 systemd[1]: run-containerd-runc-k8s.io-91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d-runc.ahN575.mount: Deactivated successfully.
Feb 8 23:30:02.506423 systemd[1]: Started cri-containerd-91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d.scope.
Feb 8 23:30:02.517283 systemd[1]: cri-containerd-91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d.scope: Deactivated successfully.
Feb 8 23:30:02.517553 systemd[1]: Stopped cri-containerd-91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d.scope.
Feb 8 23:30:02.528287 env[1068]: time="2024-02-08T23:30:02.528215785Z" level=info msg="shim disconnected" id=91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d
Feb 8 23:30:02.528464 env[1068]: time="2024-02-08T23:30:02.528292419Z" level=warning msg="cleaning up after shim disconnected" id=91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d namespace=k8s.io
Feb 8 23:30:02.528464 env[1068]: time="2024-02-08T23:30:02.528305303Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:02.536460 env[1068]: time="2024-02-08T23:30:02.536409767Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3697 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:30:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 8 23:30:02.537091 env[1068]: time="2024-02-08T23:30:02.537043427Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Feb 8 23:30:02.537423 env[1068]: time="2024-02-08T23:30:02.537372795Z" level=error msg="Failed to pipe stderr of container \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\"" error="reading from a closed fifo"
Feb 8 23:30:02.537480 env[1068]: time="2024-02-08T23:30:02.537444530Z" level=error msg="Failed to pipe stdout of container \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\"" error="reading from a closed fifo"
Feb 8 23:30:02.540663 env[1068]: time="2024-02-08T23:30:02.540627225Z" level=error msg="StartContainer for \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 8 23:30:02.541322 kubelet[1908]: E0208 23:30:02.541017 1908 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d"
Feb 8 23:30:02.542365 kubelet[1908]: E0208 23:30:02.542217 1908 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 8 23:30:02.542365 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 8 23:30:02.542365 kubelet[1908]: rm /hostbin/cilium-mount
Feb 8 23:30:02.542365 kubelet[1908]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-txgpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-zlksq_kube-system(e336882b-ca8b-4c3d-b45b-2c654f45888a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 8 23:30:02.542365 kubelet[1908]: E0208 23:30:02.542305 1908 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-zlksq" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a"
Feb 8 23:30:03.065686 sshd[3583]: pam_unix(sshd:session): session closed for user core
Feb 8 23:30:03.072003 systemd[1]: sshd@21-172.24.4.40:22-172.24.4.1:44442.service: Deactivated successfully.
Feb 8 23:30:03.074479 systemd[1]: session-22.scope: Deactivated successfully.
Feb 8 23:30:03.076756 systemd-logind[1052]: Session 22 logged out. Waiting for processes to exit.
Feb 8 23:30:03.080598 systemd[1]: Started sshd@22-172.24.4.40:22-172.24.4.1:44458.service.
Feb 8 23:30:03.085742 systemd-logind[1052]: Removed session 22.
Feb 8 23:30:03.396550 kubelet[1908]: I0208 23:30:03.396500 1908 scope.go:117] "RemoveContainer" containerID="80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd"
Feb 8 23:30:03.399346 env[1068]: time="2024-02-08T23:30:03.399038317Z" level=info msg="RemoveContainer for \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\""
Feb 8 23:30:03.402723 env[1068]: time="2024-02-08T23:30:03.402651750Z" level=info msg="StopPodSandbox for \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\""
Feb 8 23:30:03.402879 env[1068]: time="2024-02-08T23:30:03.402785792Z" level=info msg="Container to stop \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:30:03.402879 env[1068]: time="2024-02-08T23:30:03.402825577Z" level=info msg="Container to stop \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:30:03.406444 env[1068]: time="2024-02-08T23:30:03.406383826Z" level=info msg="RemoveContainer for \"80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd\" returns successfully"
Feb 8 23:30:03.426488 systemd[1]: cri-containerd-e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d.scope: Deactivated successfully.
Feb 8 23:30:03.452749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d-rootfs.mount: Deactivated successfully.
Feb 8 23:30:03.453017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d-shm.mount: Deactivated successfully.
Feb 8 23:30:03.486762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d-rootfs.mount: Deactivated successfully.
Feb 8 23:30:03.568158 env[1068]: time="2024-02-08T23:30:03.568053122Z" level=info msg="shim disconnected" id=e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d
Feb 8 23:30:03.568445 env[1068]: time="2024-02-08T23:30:03.568162438Z" level=warning msg="cleaning up after shim disconnected" id=e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d namespace=k8s.io
Feb 8 23:30:03.568445 env[1068]: time="2024-02-08T23:30:03.568189578Z" level=info msg="cleaning up dead shim"
Feb 8 23:30:03.578698 env[1068]: time="2024-02-08T23:30:03.578657028Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3741 runtime=io.containerd.runc.v2\n"
Feb 8 23:30:03.579241 env[1068]: time="2024-02-08T23:30:03.579212661Z" level=info msg="TearDown network for sandbox \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" successfully"
Feb 8 23:30:03.579377 env[1068]: time="2024-02-08T23:30:03.579357663Z" level=info msg="StopPodSandbox for \"e0bb385d979b30c80b2c1ef7ea356c94af0fb35ea1a6c9c70d85e40ae155305d\" returns successfully"
Feb 8 23:30:03.660349 kubelet[1908]: I0208 23:30:03.660210 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-config-path\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.660903 kubelet[1908]: I0208 23:30:03.660349 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-net\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.660903 kubelet[1908]: I0208 23:30:03.660589 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.661171 kubelet[1908]: I0208 23:30:03.660702 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-hubble-tls\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661325 kubelet[1908]: I0208 23:30:03.661249 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-kernel\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661423 kubelet[1908]: I0208 23:30:03.661407 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-cgroup\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661550 kubelet[1908]: I0208 23:30:03.661513 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-clustermesh-secrets\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661628 kubelet[1908]: I0208 23:30:03.661617 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-lib-modules\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661761 kubelet[1908]: I0208 23:30:03.661719 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgpz\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-kube-api-access-txgpz\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.661859 kubelet[1908]: I0208 23:30:03.661831 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-ipsec-secrets\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662167 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-hostproc\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662332 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-xtables-lock\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662426 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-bpf-maps\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662517 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-etc-cni-netd\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662613 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-run\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.662809 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cni-path\") pod \"e336882b-ca8b-4c3d-b45b-2c654f45888a\" (UID: \"e336882b-ca8b-4c3d-b45b-2c654f45888a\") "
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.663105 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-net\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\""
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.663193 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cni-path" (OuterVolumeSpecName: "cni-path") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.663386 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.663487 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.664821 kubelet[1908]: I0208 23:30:03.664342 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.665756 kubelet[1908]: I0208 23:30:03.665319 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-hostproc" (OuterVolumeSpecName: "hostproc") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:30:03.665756 kubelet[1908]: I0208 23:30:03.665383 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:03.665756 kubelet[1908]: I0208 23:30:03.665424 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:03.665756 kubelet[1908]: I0208 23:30:03.665460 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:03.665756 kubelet[1908]: I0208 23:30:03.665501 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:30:03.674214 systemd[1]: var-lib-kubelet-pods-e336882b\x2dca8b\x2d4c3d\x2db45b\x2d2c654f45888a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:30:03.677715 kubelet[1908]: I0208 23:30:03.677656 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:30:03.683549 systemd[1]: var-lib-kubelet-pods-e336882b\x2dca8b\x2d4c3d\x2db45b\x2d2c654f45888a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:30:03.685747 kubelet[1908]: I0208 23:30:03.685453 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:30:03.686521 kubelet[1908]: I0208 23:30:03.686458 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:03.693850 systemd[1]: var-lib-kubelet-pods-e336882b\x2dca8b\x2d4c3d\x2db45b\x2d2c654f45888a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtxgpz.mount: Deactivated successfully. Feb 8 23:30:03.697071 kubelet[1908]: I0208 23:30:03.697011 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-kube-api-access-txgpz" (OuterVolumeSpecName: "kube-api-access-txgpz") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "kube-api-access-txgpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:30:03.697981 kubelet[1908]: I0208 23:30:03.697905 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e336882b-ca8b-4c3d-b45b-2c654f45888a" (UID: "e336882b-ca8b-4c3d-b45b-2c654f45888a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:30:03.764452 kubelet[1908]: I0208 23:30:03.764379 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-run\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764452 kubelet[1908]: I0208 23:30:03.764456 1908 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cni-path\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764493 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-config-path\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764529 1908 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-hubble-tls\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764563 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-host-proc-sys-kernel\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 
kubelet[1908]: I0208 23:30:03.764597 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-txgpz\" (UniqueName: \"kubernetes.io/projected/e336882b-ca8b-4c3d-b45b-2c654f45888a-kube-api-access-txgpz\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764629 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-cgroup\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764660 1908 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-clustermesh-secrets\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764689 1908 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-lib-modules\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764721 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e336882b-ca8b-4c3d-b45b-2c654f45888a-cilium-ipsec-secrets\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.764743 kubelet[1908]: I0208 23:30:03.764750 1908 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-hostproc\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.765358 kubelet[1908]: I0208 23:30:03.764783 1908 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-etc-cni-netd\") on node 
\"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.765358 kubelet[1908]: I0208 23:30:03.764813 1908 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-xtables-lock\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.765358 kubelet[1908]: I0208 23:30:03.764844 1908 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e336882b-ca8b-4c3d-b45b-2c654f45888a-bpf-maps\") on node \"ci-3510-3-2-4-bfb6381473.novalocal\" DevicePath \"\"" Feb 8 23:30:03.936240 kubelet[1908]: E0208 23:30:03.936023 1908 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:30:04.402757 kubelet[1908]: I0208 23:30:04.402718 1908 scope.go:117] "RemoveContainer" containerID="91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d" Feb 8 23:30:04.414942 env[1068]: time="2024-02-08T23:30:04.414811713Z" level=info msg="RemoveContainer for \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\"" Feb 8 23:30:04.418663 systemd[1]: Removed slice kubepods-burstable-pode336882b_ca8b_4c3d_b45b_2c654f45888a.slice. Feb 8 23:30:04.422222 env[1068]: time="2024-02-08T23:30:04.422108490Z" level=info msg="RemoveContainer for \"91e209d3b1f16e6848bc407fb8a1b4ef890bd8b284583353e0c007fde3cc886d\" returns successfully" Feb 8 23:30:04.446784 systemd[1]: var-lib-kubelet-pods-e336882b\x2dca8b\x2d4c3d\x2db45b\x2d2c654f45888a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:30:04.501446 kubelet[1908]: I0208 23:30:04.501402 1908 topology_manager.go:215] "Topology Admit Handler" podUID="758ec3ed-c048-4c82-9c26-b301332b1759" podNamespace="kube-system" podName="cilium-5zxqv" Feb 8 23:30:04.501807 kubelet[1908]: E0208 23:30:04.501784 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" containerName="mount-cgroup" Feb 8 23:30:04.501984 kubelet[1908]: I0208 23:30:04.501963 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" containerName="mount-cgroup" Feb 8 23:30:04.502107 kubelet[1908]: I0208 23:30:04.502089 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" containerName="mount-cgroup" Feb 8 23:30:04.502313 kubelet[1908]: E0208 23:30:04.502293 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" containerName="mount-cgroup" Feb 8 23:30:04.508032 systemd[1]: Created slice kubepods-burstable-pod758ec3ed_c048_4c82_9c26_b301332b1759.slice. 
Feb 8 23:30:04.571025 kubelet[1908]: I0208 23:30:04.570987 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-etc-cni-netd\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571025 kubelet[1908]: I0208 23:30:04.571031 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-cilium-cgroup\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571058 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/758ec3ed-c048-4c82-9c26-b301332b1759-clustermesh-secrets\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571088 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/758ec3ed-c048-4c82-9c26-b301332b1759-hubble-tls\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571111 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-bpf-maps\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571134 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/758ec3ed-c048-4c82-9c26-b301332b1759-cilium-config-path\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571156 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-cilium-run\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.571182 kubelet[1908]: I0208 23:30:04.571180 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-hostproc\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571203 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-host-proc-sys-net\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571236 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-lib-modules\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571276 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-xtables-lock\") pod \"cilium-5zxqv\" 
(UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571308 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/758ec3ed-c048-4c82-9c26-b301332b1759-cilium-ipsec-secrets\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571332 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ck9x\" (UniqueName: \"kubernetes.io/projected/758ec3ed-c048-4c82-9c26-b301332b1759-kube-api-access-2ck9x\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571367 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-host-proc-sys-kernel\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.572315 kubelet[1908]: I0208 23:30:04.571393 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/758ec3ed-c048-4c82-9c26-b301332b1759-cni-path\") pod \"cilium-5zxqv\" (UID: \"758ec3ed-c048-4c82-9c26-b301332b1759\") " pod="kube-system/cilium-5zxqv" Feb 8 23:30:04.601060 sshd[3720]: Accepted publickey for core from 172.24.4.1 port 44458 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:04.603427 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:04.611360 systemd-logind[1052]: New session 23 of user core. Feb 8 23:30:04.611830 systemd[1]: Started session-23.scope. 
Feb 8 23:30:04.786742 kubelet[1908]: I0208 23:30:04.786669 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e336882b-ca8b-4c3d-b45b-2c654f45888a" path="/var/lib/kubelet/pods/e336882b-ca8b-4c3d-b45b-2c654f45888a/volumes" Feb 8 23:30:04.813752 env[1068]: time="2024-02-08T23:30:04.813628178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zxqv,Uid:758ec3ed-c048-4c82-9c26-b301332b1759,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:04.839106 env[1068]: time="2024-02-08T23:30:04.839002673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:04.839106 env[1068]: time="2024-02-08T23:30:04.839086420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:04.839508 env[1068]: time="2024-02-08T23:30:04.839120935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:04.839508 env[1068]: time="2024-02-08T23:30:04.839299029Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8 pid=3771 runtime=io.containerd.runc.v2 Feb 8 23:30:04.872983 systemd[1]: Started cri-containerd-21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8.scope. 
Feb 8 23:30:04.906320 env[1068]: time="2024-02-08T23:30:04.906222030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5zxqv,Uid:758ec3ed-c048-4c82-9c26-b301332b1759,Namespace:kube-system,Attempt:0,} returns sandbox id \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\"" Feb 8 23:30:04.911390 env[1068]: time="2024-02-08T23:30:04.911339316Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:30:04.921198 kubelet[1908]: W0208 23:30:04.920996 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode336882b_ca8b_4c3d_b45b_2c654f45888a.slice/cri-containerd-80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd.scope WatchSource:0}: container "80a12e6d1aca64c7a348caea130345d5717615b06b50902a6054fd0a4e95d2dd" in namespace "k8s.io": not found Feb 8 23:30:05.161057 env[1068]: time="2024-02-08T23:30:05.161006058Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc\"" Feb 8 23:30:05.162023 env[1068]: time="2024-02-08T23:30:05.161994434Z" level=info msg="StartContainer for \"d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc\"" Feb 8 23:30:05.204416 systemd[1]: Started cri-containerd-d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc.scope. Feb 8 23:30:05.367696 env[1068]: time="2024-02-08T23:30:05.367626041Z" level=info msg="StartContainer for \"d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc\" returns successfully" Feb 8 23:30:05.473111 systemd[1]: cri-containerd-d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc.scope: Deactivated successfully. 
Feb 8 23:30:05.505792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc-rootfs.mount: Deactivated successfully. Feb 8 23:30:05.522321 env[1068]: time="2024-02-08T23:30:05.522272568Z" level=info msg="shim disconnected" id=d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc Feb 8 23:30:05.522729 env[1068]: time="2024-02-08T23:30:05.522708005Z" level=warning msg="cleaning up after shim disconnected" id=d6462aa3f53e9eea86dea50d0847c22ee8a2bc4362dab29fb1b62c8132e42bfc namespace=k8s.io Feb 8 23:30:05.522798 env[1068]: time="2024-02-08T23:30:05.522783667Z" level=info msg="cleaning up dead shim" Feb 8 23:30:05.530978 env[1068]: time="2024-02-08T23:30:05.530934909Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3862 runtime=io.containerd.runc.v2\n" Feb 8 23:30:06.430685 env[1068]: time="2024-02-08T23:30:06.429774668Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:30:06.464493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668244054.mount: Deactivated successfully. Feb 8 23:30:06.484038 env[1068]: time="2024-02-08T23:30:06.483951269Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10\"" Feb 8 23:30:06.490173 env[1068]: time="2024-02-08T23:30:06.490092146Z" level=info msg="StartContainer for \"3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10\"" Feb 8 23:30:06.528299 systemd[1]: Started cri-containerd-3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10.scope. 
Feb 8 23:30:06.565624 env[1068]: time="2024-02-08T23:30:06.565347747Z" level=info msg="StartContainer for \"3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10\" returns successfully" Feb 8 23:30:06.573948 systemd[1]: cri-containerd-3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10.scope: Deactivated successfully. Feb 8 23:30:06.608275 env[1068]: time="2024-02-08T23:30:06.608192846Z" level=info msg="shim disconnected" id=3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10 Feb 8 23:30:06.608275 env[1068]: time="2024-02-08T23:30:06.608241929Z" level=warning msg="cleaning up after shim disconnected" id=3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10 namespace=k8s.io Feb 8 23:30:06.608275 env[1068]: time="2024-02-08T23:30:06.608268308Z" level=info msg="cleaning up dead shim" Feb 8 23:30:06.615967 env[1068]: time="2024-02-08T23:30:06.615930832Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3929 runtime=io.containerd.runc.v2\n" Feb 8 23:30:07.434072 env[1068]: time="2024-02-08T23:30:07.433977359Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:30:07.454869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b94e8e7e447a4aa7e56e3ba1e15b17c92d7d664f29ab952cb08e1c6759fbb10-rootfs.mount: Deactivated successfully. 
Feb 8 23:30:07.483456 env[1068]: time="2024-02-08T23:30:07.483373495Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591\"" Feb 8 23:30:07.484941 env[1068]: time="2024-02-08T23:30:07.484888389Z" level=info msg="StartContainer for \"a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591\"" Feb 8 23:30:07.524952 systemd[1]: Started cri-containerd-a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591.scope. Feb 8 23:30:07.571008 env[1068]: time="2024-02-08T23:30:07.570865644Z" level=info msg="StartContainer for \"a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591\" returns successfully" Feb 8 23:30:07.573435 systemd[1]: cri-containerd-a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591.scope: Deactivated successfully. Feb 8 23:30:07.603729 env[1068]: time="2024-02-08T23:30:07.603682037Z" level=info msg="shim disconnected" id=a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591 Feb 8 23:30:07.604080 env[1068]: time="2024-02-08T23:30:07.604059667Z" level=warning msg="cleaning up after shim disconnected" id=a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591 namespace=k8s.io Feb 8 23:30:07.604166 env[1068]: time="2024-02-08T23:30:07.604150778Z" level=info msg="cleaning up dead shim" Feb 8 23:30:07.612333 env[1068]: time="2024-02-08T23:30:07.612287151Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3989 runtime=io.containerd.runc.v2\n" Feb 8 23:30:08.442783 env[1068]: time="2024-02-08T23:30:08.441778187Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:30:08.455062 
systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a43c84e54dc5f84417f1574914995dcfd6cb5854c62fdafbbe62698c7e3e7591-rootfs.mount: Deactivated successfully. Feb 8 23:30:08.482766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3528662163.mount: Deactivated successfully. Feb 8 23:30:08.483848 env[1068]: time="2024-02-08T23:30:08.483701925Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f\"" Feb 8 23:30:08.485435 env[1068]: time="2024-02-08T23:30:08.485201991Z" level=info msg="StartContainer for \"564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f\"" Feb 8 23:30:08.527174 systemd[1]: Started cri-containerd-564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f.scope. Feb 8 23:30:08.556141 systemd[1]: cri-containerd-564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f.scope: Deactivated successfully. 
Feb 8 23:30:08.558853 env[1068]: time="2024-02-08T23:30:08.558719736Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod758ec3ed_c048_4c82_9c26_b301332b1759.slice/cri-containerd-564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f.scope/memory.events\": no such file or directory" Feb 8 23:30:08.563681 env[1068]: time="2024-02-08T23:30:08.563648199Z" level=info msg="StartContainer for \"564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f\" returns successfully" Feb 8 23:30:08.594225 env[1068]: time="2024-02-08T23:30:08.594162489Z" level=info msg="shim disconnected" id=564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f Feb 8 23:30:08.594225 env[1068]: time="2024-02-08T23:30:08.594213134Z" level=warning msg="cleaning up after shim disconnected" id=564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f namespace=k8s.io Feb 8 23:30:08.594225 env[1068]: time="2024-02-08T23:30:08.594224175Z" level=info msg="cleaning up dead shim" Feb 8 23:30:08.601899 env[1068]: time="2024-02-08T23:30:08.601818501Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n" Feb 8 23:30:08.938109 kubelet[1908]: E0208 23:30:08.938052 1908 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:30:09.455693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-564bdc75eb682d95a0809583413611379543869d6f7bc7a8b1b850acdefc6c1f-rootfs.mount: Deactivated successfully. 
Feb 8 23:30:09.472183 env[1068]: time="2024-02-08T23:30:09.472059197Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:30:09.525214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250252428.mount: Deactivated successfully. Feb 8 23:30:09.533476 env[1068]: time="2024-02-08T23:30:09.533411666Z" level=info msg="CreateContainer within sandbox \"21e2c78d5a78ea61c80de42f26b2d119d93bbf68e8616ed6e44fb5b019cd7be8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d\"" Feb 8 23:30:09.534861 env[1068]: time="2024-02-08T23:30:09.534117993Z" level=info msg="StartContainer for \"3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d\"" Feb 8 23:30:09.560518 systemd[1]: Started cri-containerd-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d.scope. Feb 8 23:30:09.603116 env[1068]: time="2024-02-08T23:30:09.603028087Z" level=info msg="StartContainer for \"3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d\" returns successfully" Feb 8 23:30:10.829396 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:30:10.876301 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 8 23:30:11.550426 systemd[1]: run-containerd-runc-k8s.io-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d-runc.9VdP36.mount: Deactivated successfully. Feb 8 23:30:13.751810 systemd[1]: run-containerd-runc-k8s.io-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d-runc.nucSrB.mount: Deactivated successfully. 
Feb 8 23:30:13.888220 systemd-networkd[973]: lxc_health: Link UP Feb 8 23:30:13.895575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:30:13.895386 systemd-networkd[973]: lxc_health: Gained carrier Feb 8 23:30:14.863589 kubelet[1908]: I0208 23:30:14.863478 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5zxqv" podStartSLOduration=10.860675393 podCreationTimestamp="2024-02-08 23:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:10.519193109 +0000 UTC m=+141.938397418" watchObservedRunningTime="2024-02-08 23:30:14.860675393 +0000 UTC m=+146.279879732" Feb 8 23:30:15.849515 systemd-networkd[973]: lxc_health: Gained IPv6LL Feb 8 23:30:15.973692 systemd[1]: run-containerd-runc-k8s.io-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d-runc.ymjy77.mount: Deactivated successfully. Feb 8 23:30:18.223159 systemd[1]: run-containerd-runc-k8s.io-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d-runc.hkw9fd.mount: Deactivated successfully. Feb 8 23:30:20.477342 systemd[1]: run-containerd-runc-k8s.io-3928d31cf2c387654eb14093b0963120d1eeb27a60b48d1e8b61c78d8e14f07d-runc.zZnCpv.mount: Deactivated successfully. Feb 8 23:30:21.000862 sshd[3720]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:21.068490 systemd-logind[1052]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:30:21.068795 systemd[1]: sshd@22-172.24.4.40:22-172.24.4.1:44458.service: Deactivated successfully. Feb 8 23:30:21.070376 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:30:21.072478 systemd-logind[1052]: Removed session 23.