Feb 12 20:47:36.062317 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:47:36.062357 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:47:36.062380 kernel: BIOS-provided physical RAM map:
Feb 12 20:47:36.062393 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:47:36.062406 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:47:36.062418 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:47:36.062433 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 12 20:47:36.062446 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 12 20:47:36.062462 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:47:36.062474 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:47:36.062486 kernel: NX (Execute Disable) protection: active
Feb 12 20:47:36.062498 kernel: SMBIOS 2.8 present.
Feb 12 20:47:36.062511 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 12 20:47:36.062524 kernel: Hypervisor detected: KVM
Feb 12 20:47:36.062539 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:47:36.062556 kernel: kvm-clock: cpu 0, msr 60faa001, primary cpu clock
Feb 12 20:47:36.062569 kernel: kvm-clock: using sched offset of 5007824982 cycles
Feb 12 20:47:36.062583 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:47:36.062597 kernel: tsc: Detected 1996.249 MHz processor
Feb 12 20:47:36.062611 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:47:36.062626 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:47:36.062640 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 12 20:47:36.062653 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:47:36.062671 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:47:36.062684 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 12 20:47:36.062698 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:47:36.062712 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:47:36.062726 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:47:36.062739 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 12 20:47:36.062753 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:47:36.062767 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:47:36.062780 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 12 20:47:36.062797 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 12 20:47:36.062810 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 12 20:47:36.062824 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 12 20:47:36.062838 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 12 20:47:36.062880 kernel: No NUMA configuration found
Feb 12 20:47:36.062895 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 12 20:47:36.062908 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 12 20:47:36.062922 kernel: Zone ranges:
Feb 12 20:47:36.062945 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:47:36.062959 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 12 20:47:36.062973 kernel: Normal empty
Feb 12 20:47:36.062987 kernel: Movable zone start for each node
Feb 12 20:47:36.063002 kernel: Early memory node ranges
Feb 12 20:47:36.063016 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:47:36.063032 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 12 20:47:36.063047 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 12 20:47:36.063061 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:47:36.063075 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:47:36.063089 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 12 20:47:36.063103 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:47:36.063117 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:47:36.063131 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:47:36.063145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:47:36.063162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:47:36.063176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:47:36.063190 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:47:36.063205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:47:36.063219 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:47:36.063233 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:47:36.063247 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 12 20:47:36.063277 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:47:36.063292 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:47:36.063307 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:47:36.063325 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:47:36.063339 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:47:36.063353 kernel: pcpu-alloc: [0] 0 1
Feb 12 20:47:36.063367 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 12 20:47:36.063381 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 12 20:47:36.063395 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 12 20:47:36.063409 kernel: Policy zone: DMA32
Feb 12 20:47:36.063426 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:47:36.063446 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:47:36.063460 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:47:36.063475 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:47:36.063489 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:47:36.063504 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 12 20:47:36.063519 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:47:36.063533 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:47:36.063547 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:47:36.063564 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:47:36.063579 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:47:36.063594 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:47:36.063609 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:47:36.063623 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:47:36.063637 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:47:36.063652 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:47:36.063666 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:47:36.063680 kernel: Console: colour VGA+ 80x25
Feb 12 20:47:36.063694 kernel: printk: console [tty0] enabled
Feb 12 20:47:36.063711 kernel: printk: console [ttyS0] enabled
Feb 12 20:47:36.063725 kernel: ACPI: Core revision 20210730
Feb 12 20:47:36.063739 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:47:36.063753 kernel: x2apic enabled
Feb 12 20:47:36.063767 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:47:36.063781 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:47:36.063796 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:47:36.063810 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 12 20:47:36.063824 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 12 20:47:36.063841 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 12 20:47:36.065890 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:47:36.065901 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:47:36.065909 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:47:36.065917 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:47:36.065924 kernel: Speculative Store Bypass: Vulnerable
Feb 12 20:47:36.065932 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 12 20:47:36.065940 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:47:36.065947 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:47:36.065959 kernel: LSM: Security Framework initializing
Feb 12 20:47:36.065966 kernel: SELinux: Initializing.
Feb 12 20:47:36.065973 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:47:36.065981 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 20:47:36.065989 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 12 20:47:36.065997 kernel: Performance Events: AMD PMU driver.
Feb 12 20:47:36.066004 kernel: ... version: 0
Feb 12 20:47:36.066011 kernel: ... bit width: 48
Feb 12 20:47:36.066019 kernel: ... generic registers: 4
Feb 12 20:47:36.066035 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:47:36.066043 kernel: ... max period: 00007fffffffffff
Feb 12 20:47:36.066051 kernel: ... fixed-purpose events: 0
Feb 12 20:47:36.066060 kernel: ... event mask: 000000000000000f
Feb 12 20:47:36.066068 kernel: signal: max sigframe size: 1440
Feb 12 20:47:36.066075 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:47:36.066083 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:47:36.066091 kernel: x86: Booting SMP configuration:
Feb 12 20:47:36.066101 kernel: .... node #0, CPUs: #1
Feb 12 20:47:36.066108 kernel: kvm-clock: cpu 1, msr 60faa041, secondary cpu clock
Feb 12 20:47:36.066116 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 12 20:47:36.066125 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:47:36.066132 kernel: smpboot: Max logical packages: 2
Feb 12 20:47:36.066140 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 12 20:47:36.066148 kernel: devtmpfs: initialized
Feb 12 20:47:36.066156 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:47:36.066164 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:47:36.066173 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:47:36.066181 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:47:36.066189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:47:36.066197 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:47:36.066205 kernel: audit: type=2000 audit(1707770855.434:1): state=initialized audit_enabled=0 res=1
Feb 12 20:47:36.066213 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:47:36.066220 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:47:36.066228 kernel: cpuidle: using governor menu
Feb 12 20:47:36.066236 kernel: ACPI: bus type PCI registered
Feb 12 20:47:36.066246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:47:36.066254 kernel: dca service started, version 1.12.1
Feb 12 20:47:36.066262 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:47:36.066270 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:47:36.066278 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:47:36.066286 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:47:36.066294 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:47:36.066302 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:47:36.066310 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:47:36.066320 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:47:36.066328 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:47:36.066336 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:47:36.066344 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:47:36.066351 kernel: ACPI: Interpreter enabled
Feb 12 20:47:36.066359 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:47:36.066367 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:47:36.066375 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:47:36.066383 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:47:36.066392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:47:36.066575 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:47:36.066660 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:47:36.066672 kernel: acpiphp: Slot [3] registered
Feb 12 20:47:36.066680 kernel: acpiphp: Slot [4] registered
Feb 12 20:47:36.066688 kernel: acpiphp: Slot [5] registered
Feb 12 20:47:36.066696 kernel: acpiphp: Slot [6] registered
Feb 12 20:47:36.066707 kernel: acpiphp: Slot [7] registered
Feb 12 20:47:36.066715 kernel: acpiphp: Slot [8] registered
Feb 12 20:47:36.066723 kernel: acpiphp: Slot [9] registered
Feb 12 20:47:36.066731 kernel: acpiphp: Slot [10] registered
Feb 12 20:47:36.066739 kernel: acpiphp: Slot [11] registered
Feb 12 20:47:36.066747 kernel: acpiphp: Slot [12] registered
Feb 12 20:47:36.066754 kernel: acpiphp: Slot [13] registered
Feb 12 20:47:36.066762 kernel: acpiphp: Slot [14] registered
Feb 12 20:47:36.066770 kernel: acpiphp: Slot [15] registered
Feb 12 20:47:36.066778 kernel: acpiphp: Slot [16] registered
Feb 12 20:47:36.066787 kernel: acpiphp: Slot [17] registered
Feb 12 20:47:36.066795 kernel: acpiphp: Slot [18] registered
Feb 12 20:47:36.066803 kernel: acpiphp: Slot [19] registered
Feb 12 20:47:36.066811 kernel: acpiphp: Slot [20] registered
Feb 12 20:47:36.066819 kernel: acpiphp: Slot [21] registered
Feb 12 20:47:36.066827 kernel: acpiphp: Slot [22] registered
Feb 12 20:47:36.066835 kernel: acpiphp: Slot [23] registered
Feb 12 20:47:36.066844 kernel: acpiphp: Slot [24] registered
Feb 12 20:47:36.066869 kernel: acpiphp: Slot [25] registered
Feb 12 20:47:36.066879 kernel: acpiphp: Slot [26] registered
Feb 12 20:47:36.066887 kernel: acpiphp: Slot [27] registered
Feb 12 20:47:36.066895 kernel: acpiphp: Slot [28] registered
Feb 12 20:47:36.066903 kernel: acpiphp: Slot [29] registered
Feb 12 20:47:36.066911 kernel: acpiphp: Slot [30] registered
Feb 12 20:47:36.066918 kernel: acpiphp: Slot [31] registered
Feb 12 20:47:36.066926 kernel: PCI host bridge to bus 0000:00
Feb 12 20:47:36.067023 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:47:36.067097 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:47:36.067172 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:47:36.067251 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 20:47:36.067334 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:47:36.067405 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:47:36.067499 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:47:36.067592 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:47:36.067688 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:47:36.067770 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 12 20:47:36.067867 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:47:36.067955 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:47:36.068038 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:47:36.068134 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:47:36.068246 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:47:36.068334 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:47:36.068424 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:47:36.068513 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 12 20:47:36.068597 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 12 20:47:36.068690 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 12 20:47:36.068774 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 12 20:47:36.068877 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 12 20:47:36.068963 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:47:36.069110 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:47:36.069202 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 12 20:47:36.069309 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 12 20:47:36.069398 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 12 20:47:36.069481 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 12 20:47:36.069579 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:47:36.069667 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:47:36.069755 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 12 20:47:36.069841 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 12 20:47:36.077072 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 12 20:47:36.077167 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 12 20:47:36.077252 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 12 20:47:36.077361 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:47:36.077445 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 12 20:47:36.077527 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 12 20:47:36.077546 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:47:36.077555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:47:36.077564 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:47:36.077572 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:47:36.077580 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:47:36.077595 kernel: iommu: Default domain type: Translated
Feb 12 20:47:36.077604 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:47:36.077693 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:47:36.077775 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:47:36.077872 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:47:36.077885 kernel: vgaarb: loaded
Feb 12 20:47:36.077893 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:47:36.077902 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:47:36.077910 kernel: PTP clock support registered
Feb 12 20:47:36.077922 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:47:36.077930 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:47:36.077939 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:47:36.077947 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 12 20:47:36.077955 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:47:36.077963 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:47:36.077971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:47:36.077979 kernel: pnp: PnP ACPI init
Feb 12 20:47:36.078093 kernel: pnp 00:03: [dma 2]
Feb 12 20:47:36.078112 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 20:47:36.078121 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:47:36.078129 kernel: NET: Registered PF_INET protocol family
Feb 12 20:47:36.078138 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:47:36.078146 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 20:47:36.078154 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:47:36.078162 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:47:36.078171 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 20:47:36.078182 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 20:47:36.078190 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:47:36.078199 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 20:47:36.078207 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:47:36.078214 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:47:36.078302 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:47:36.078377 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:47:36.078447 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:47:36.078519 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 20:47:36.078595 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:47:36.078690 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:47:36.078774 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:47:36.078886 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:47:36.078900 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:47:36.078908 kernel: Initialise system trusted keyrings
Feb 12 20:47:36.078917 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 20:47:36.078925 kernel: Key type asymmetric registered
Feb 12 20:47:36.078936 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:47:36.078944 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:47:36.078952 kernel: io scheduler mq-deadline registered
Feb 12 20:47:36.078961 kernel: io scheduler kyber registered
Feb 12 20:47:36.078969 kernel: io scheduler bfq registered
Feb 12 20:47:36.078977 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:47:36.078985 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 12 20:47:36.078993 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:47:36.079002 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 20:47:36.079012 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:47:36.079020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:47:36.079028 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:47:36.079036 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:47:36.079044 kernel: random: crng init done
Feb 12 20:47:36.079052 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:47:36.079060 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:47:36.079068 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:47:36.079163 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 12 20:47:36.079273 kernel: rtc_cmos 00:04: registered as rtc0
Feb 12 20:47:36.079356 kernel: rtc_cmos 00:04: setting system clock to 2024-02-12T20:47:35 UTC (1707770855)
Feb 12 20:47:36.079432 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 12 20:47:36.079444 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:47:36.079452 kernel: Segment Routing with IPv6
Feb 12 20:47:36.079460 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:47:36.079468 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:47:36.079476 kernel: Key type dns_resolver registered
Feb 12 20:47:36.079487 kernel: IPI shorthand broadcast: enabled
Feb 12 20:47:36.079495 kernel: sched_clock: Marking stable (717848636, 121403510)->(861369147, -22117001)
Feb 12 20:47:36.079504 kernel: registered taskstats version 1
Feb 12 20:47:36.079511 kernel: Loading compiled-in X.509 certificates
Feb 12 20:47:36.079520 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:47:36.079528 kernel: Key type .fscrypt registered
Feb 12 20:47:36.079536 kernel: Key type fscrypt-provisioning registered
Feb 12 20:47:36.079544 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:47:36.079555 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:47:36.079563 kernel: ima: No architecture policies found
Feb 12 20:47:36.079571 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:47:36.079579 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:47:36.079587 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:47:36.079595 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:47:36.079603 kernel: Run /init as init process
Feb 12 20:47:36.079611 kernel: with arguments:
Feb 12 20:47:36.079619 kernel: /init
Feb 12 20:47:36.079627 kernel: with environment:
Feb 12 20:47:36.079636 kernel: HOME=/
Feb 12 20:47:36.079644 kernel: TERM=linux
Feb 12 20:47:36.079652 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:47:36.079662 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:47:36.079673 systemd[1]: Detected virtualization kvm.
Feb 12 20:47:36.079682 systemd[1]: Detected architecture x86-64.
Feb 12 20:47:36.079691 systemd[1]: Running in initrd.
Feb 12 20:47:36.079702 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:47:36.079710 systemd[1]: Hostname set to .
Feb 12 20:47:36.079719 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:47:36.079728 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:47:36.079736 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:47:36.079745 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:47:36.079754 systemd[1]: Reached target paths.target.
Feb 12 20:47:36.079762 systemd[1]: Reached target slices.target.
Feb 12 20:47:36.079772 systemd[1]: Reached target swap.target.
Feb 12 20:47:36.079781 systemd[1]: Reached target timers.target.
Feb 12 20:47:36.079790 systemd[1]: Listening on iscsid.socket.
Feb 12 20:47:36.079798 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:47:36.079807 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:47:36.079815 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:47:36.079824 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:47:36.079832 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:47:36.079843 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:47:36.079949 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:47:36.079959 systemd[1]: Reached target sockets.target.
Feb 12 20:47:36.079968 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:47:36.079989 systemd[1]: Finished network-cleanup.service.
Feb 12 20:47:36.080000 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:47:36.080012 systemd[1]: Starting systemd-journald.service...
Feb 12 20:47:36.080021 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:47:36.080030 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:47:36.080039 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:47:36.080048 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:47:36.080056 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:47:36.080081 systemd-journald[184]: Journal started
Feb 12 20:47:36.080133 systemd-journald[184]: Runtime Journal (/run/log/journal/0feb2bc9a7be460d9f7f9c4b5db49401) is 4.9M, max 39.5M, 34.5M free.
Feb 12 20:47:36.036930 systemd-modules-load[185]: Inserted module 'overlay'
Feb 12 20:47:36.102097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:47:36.102124 kernel: Bridge firewalling registered
Feb 12 20:47:36.102141 systemd[1]: Started systemd-journald.service.
Feb 12 20:47:36.102155 kernel: audit: type=1130 audit(1707770856.096:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.089769 systemd-resolved[186]: Positive Trust Anchors:
Feb 12 20:47:36.109933 kernel: audit: type=1130 audit(1707770856.101:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.109952 kernel: audit: type=1130 audit(1707770856.105:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.089778 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:47:36.115097 kernel: audit: type=1130 audit(1707770856.109:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.115114 kernel: SCSI subsystem initialized
Feb 12 20:47:36.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.089814 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:47:36.090121 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 12 20:47:36.092643 systemd-resolved[186]: Defaulting to hostname 'linux'.
Feb 12 20:47:36.102548 systemd[1]: Started systemd-resolved.service.
Feb 12 20:47:36.134556 kernel: audit: type=1130 audit(1707770856.127:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:36.106407 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:47:36.110500 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:47:36.116331 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:47:36.141624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:47:36.141643 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:47:36.141658 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:47:36.119357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:47:36.128193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:47:36.141817 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 20:47:36.147916 kernel: audit: type=1130 audit(1707770856.141:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.143725 systemd[1]: Starting dracut-cmdline.service... Feb 12 20:47:36.147051 systemd-modules-load[185]: Inserted module 'dm_multipath' Feb 12 20:47:36.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.150166 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:47:36.151303 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:47:36.156581 kernel: audit: type=1130 audit(1707770856.149:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.161126 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:47:36.162395 dracut-cmdline[201]: dracut-dracut-053 Feb 12 20:47:36.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:36.166358 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 20:47:36.168521 kernel: audit: type=1130 audit(1707770856.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.231908 kernel: Loading iSCSI transport class v2.0-870. Feb 12 20:47:36.244872 kernel: iscsi: registered transport (tcp) Feb 12 20:47:36.269896 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:47:36.269948 kernel: QLogic iSCSI HBA Driver Feb 12 20:47:36.322490 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:47:36.324021 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:47:36.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.328873 kernel: audit: type=1130 audit(1707770856.322:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:36.413071 kernel: raid6: sse2x4 gen() 10854 MB/s Feb 12 20:47:36.429950 kernel: raid6: sse2x4 xor() 5157 MB/s Feb 12 20:47:36.446952 kernel: raid6: sse2x2 gen() 14437 MB/s Feb 12 20:47:36.463933 kernel: raid6: sse2x2 xor() 8843 MB/s Feb 12 20:47:36.480950 kernel: raid6: sse2x1 gen() 11153 MB/s Feb 12 20:47:36.498986 kernel: raid6: sse2x1 xor() 7021 MB/s Feb 12 20:47:36.499058 kernel: raid6: using algorithm sse2x2 gen() 14437 MB/s Feb 12 20:47:36.499086 kernel: raid6: .... xor() 8843 MB/s, rmw enabled Feb 12 20:47:36.499996 kernel: raid6: using ssse3x2 recovery algorithm Feb 12 20:47:36.514901 kernel: xor: measuring software checksum speed Feb 12 20:47:36.514963 kernel: prefetch64-sse : 18300 MB/sec Feb 12 20:47:36.517471 kernel: generic_sse : 16682 MB/sec Feb 12 20:47:36.517517 kernel: xor: using function: prefetch64-sse (18300 MB/sec) Feb 12 20:47:36.631919 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:47:36.648635 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:47:36.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.650000 audit: BPF prog-id=7 op=LOAD Feb 12 20:47:36.650000 audit: BPF prog-id=8 op=LOAD Feb 12 20:47:36.653266 systemd[1]: Starting systemd-udevd.service... Feb 12 20:47:36.676157 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 12 20:47:36.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.680966 systemd[1]: Started systemd-udevd.service. Feb 12 20:47:36.688190 systemd[1]: Starting dracut-pre-trigger.service... 
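As an editorial aside: the dracut-cmdline records above echo the kernel command line (`BOOT_IMAGE=...`, `root=LABEL=ROOT`, `verity.usrhash=...`), which is a whitespace-separated list of bare flags and `key=value` pairs. A minimal sketch of splitting such a line with the Python standard library (quoting of values is not handled here; the sample tokens are taken from the log):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value} / {flag: True} entries."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        # partition() splits at the FIRST '=', so "root=LABEL=ROOT" keeps
        # "LABEL=ROOT" intact; repeated keys (e.g. two console= entries)
        # keep the last value in this simple sketch.
        params[key] = value if sep else True
    return params

args = parse_cmdline(
    "root=LABEL=ROOT console=ttyS0,115200n8 rootflags=rw "
    "flatcar.first_boot=detected consoleblank=0"
)
```

Note the caveat in the comments: a real parser (the kernel's, or dracut's) also honors quoting and duplicate keys differently; this is only enough to inspect a log line like the one above.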
Feb 12 20:47:36.705774 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 12 20:47:36.755802 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:47:36.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.758687 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:47:36.830664 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:47:36.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:36.915906 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 12 20:47:36.923881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:47:36.923939 kernel: GPT:17805311 != 41943039 Feb 12 20:47:36.923952 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:47:36.923963 kernel: GPT:17805311 != 41943039 Feb 12 20:47:36.923974 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:47:36.923985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:47:36.949887 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (437) Feb 12 20:47:36.951956 kernel: libata version 3.00 loaded. Feb 12 20:47:36.960979 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 12 20:47:36.962876 kernel: scsi host0: ata_piix Feb 12 20:47:36.963123 kernel: scsi host1: ata_piix Feb 12 20:47:36.963236 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 12 20:47:36.963251 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 12 20:47:36.964476 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
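The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", `GPT:17805311 != 41943039`) are the usual signature of a disk image that was grown after partitioning: the backup GPT header still sits at LBA 17805311 while the virtual disk now ends at LBA 41943039. As an editorial check, the arithmetic with the 512-byte logical blocks reported by virtio_blk:

```python
SECTOR = 512                      # logical block size from the virtio_blk line
disk_sectors = 41943040           # total sectors reported for /dev/vda
alt_header_lba = 17805311         # where the backup GPT header actually is

disk_bytes = disk_sectors * SECTOR
image_bytes = (alt_header_lba + 1) * SECTOR   # original image size (header at last LBA)

print(disk_bytes / 2**30)         # 20.0  -> matches "21.5 GB/20.0 GiB" in the log
print(round(image_bytes / 2**30, 1))  # 8.5 -> the pre-resize image size
```

This is consistent with the log's suggestion to "Use GNU Parted to correct GPT errors": moving the backup header to the new end of the disk is all that is needed.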
Feb 12 20:47:37.012861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:47:37.016534 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:47:37.017873 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:47:37.022414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:47:37.024207 systemd[1]: Starting disk-uuid.service... Feb 12 20:47:37.034297 disk-uuid[461]: Primary Header is updated. Feb 12 20:47:37.034297 disk-uuid[461]: Secondary Entries is updated. Feb 12 20:47:37.034297 disk-uuid[461]: Secondary Header is updated. Feb 12 20:47:37.041887 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:47:37.049893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:47:38.062909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 12 20:47:38.063597 disk-uuid[462]: The operation has completed successfully. Feb 12 20:47:38.132154 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:47:38.134202 systemd[1]: Finished disk-uuid.service. Feb 12 20:47:38.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.150956 systemd[1]: Starting verity-setup.service... Feb 12 20:47:38.194930 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 12 20:47:38.300380 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:47:38.306344 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:47:38.312200 systemd[1]: Finished verity-setup.service. 
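The `audit[1]: SERVICE_START` / `SERVICE_STOP` records interleaved through this log are flat `key=value` lines with a single-quoted `msg` payload. A sketch, purely as an editorial aside, of pulling fields out of one such record with a regular expression (this is not the official audit userspace parser, just enough for ad-hoc log inspection):

```python
import re

# key=value pairs; a value is either a single-quoted string (may contain
# spaces, as msg='...' does) or a bare token.
FIELD = re.compile(r"(\w+)=('[^']*'|\S+)")

def parse_audit(record: str) -> dict:
    """Return the key=value fields of one audit record, quotes stripped."""
    return {k: v.strip("'") for k, v in FIELD.findall(record)}

rec = parse_audit(
    "SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
    "subj=kernel msg='unit=disk-uuid comm=\"systemd\" res=success'"
)
```

Because the quoted alternative is tried first, the whole `msg='...'` payload is consumed as one value rather than being split at its internal `=` signs.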
Feb 12 20:47:38.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.456035 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:47:38.456000 systemd[1]: Mounted sysusr-usr.mount. Feb 12 20:47:38.456584 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:47:38.457335 systemd[1]: Starting ignition-setup.service... Feb 12 20:47:38.462472 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:47:38.485788 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:47:38.485843 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:47:38.485869 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:47:38.512953 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:47:38.528898 systemd[1]: Finished ignition-setup.service. Feb 12 20:47:38.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.530417 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:47:38.621050 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:47:38.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.621000 audit: BPF prog-id=9 op=LOAD Feb 12 20:47:38.623300 systemd[1]: Starting systemd-networkd.service... 
Feb 12 20:47:38.647113 systemd-networkd[632]: lo: Link UP Feb 12 20:47:38.647124 systemd-networkd[632]: lo: Gained carrier Feb 12 20:47:38.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.647902 systemd-networkd[632]: Enumeration completed Feb 12 20:47:38.648318 systemd-networkd[632]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:47:38.649877 systemd-networkd[632]: eth0: Link UP Feb 12 20:47:38.649881 systemd-networkd[632]: eth0: Gained carrier Feb 12 20:47:38.650213 systemd[1]: Started systemd-networkd.service. Feb 12 20:47:38.652233 systemd[1]: Reached target network.target. Feb 12 20:47:38.654830 systemd[1]: Starting iscsiuio.service... Feb 12 20:47:38.665984 systemd-networkd[632]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:47:38.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.670330 systemd[1]: Started iscsiuio.service. Feb 12 20:47:38.671762 systemd[1]: Starting iscsid.service... Feb 12 20:47:38.678044 iscsid[641]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:47:38.678044 iscsid[641]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:47:38.678044 iscsid[641]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
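systemd-networkd reports a DHCPv4 lease of 172.24.4.188/24 with gateway 172.24.4.1 on eth0. As a quick editorial sanity check, Python's stdlib `ipaddress` module confirms that the leased address and the gateway sit in the same /24, i.e. the gateway is on-link:

```python
import ipaddress

iface = ipaddress.ip_interface("172.24.4.188/24")   # address/prefix from the log
gateway = ipaddress.ip_address("172.24.4.1")        # gateway from the log

# Membership test: the gateway must be inside the interface's network
# for the default route to be directly reachable.
assert gateway in iface.network
print(iface.network)   # 172.24.4.0/24
```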
Feb 12 20:47:38.678044 iscsid[641]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:47:38.678044 iscsid[641]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:47:38.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.685035 iscsid[641]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:47:38.680694 systemd[1]: Started iscsid.service. Feb 12 20:47:38.682650 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:47:38.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.695564 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:47:38.696305 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:47:38.697324 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:47:38.697979 systemd[1]: Reached target remote-fs.target. Feb 12 20:47:38.699536 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:47:38.712513 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:47:38.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:38.882807 ignition[566]: Ignition 2.14.0 Feb 12 20:47:38.884034 ignition[566]: Stage: fetch-offline Feb 12 20:47:38.884226 ignition[566]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:38.884274 ignition[566]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:38.886667 ignition[566]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:38.886925 ignition[566]: parsed url from cmdline: "" Feb 12 20:47:38.886934 ignition[566]: no config URL provided Feb 12 20:47:38.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:38.890605 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:47:38.886947 ignition[566]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:47:38.893502 systemd[1]: Starting ignition-fetch.service... 
Feb 12 20:47:38.886967 ignition[566]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:47:38.886978 ignition[566]: failed to fetch config: resource requires networking Feb 12 20:47:38.887957 ignition[566]: Ignition finished successfully Feb 12 20:47:38.911936 ignition[655]: Ignition 2.14.0 Feb 12 20:47:38.911963 ignition[655]: Stage: fetch Feb 12 20:47:38.912719 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:38.912763 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:38.915077 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:38.915312 ignition[655]: parsed url from cmdline: "" Feb 12 20:47:38.915322 ignition[655]: no config URL provided Feb 12 20:47:38.915342 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:47:38.915399 ignition[655]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:47:38.921222 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 12 20:47:38.921306 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 12 20:47:38.922141 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 12 20:47:39.267435 ignition[655]: GET result: OK Feb 12 20:47:39.267733 ignition[655]: parsing config with SHA512: a6c6c4d5a9eac365a79b6b5a0fa7b6b58a85c8a7aa04bbda868b20f6810095ad36b27bb33842dd41f5f5dbec0621316ec63efedf89024120fd20d4ac91519c3f Feb 12 20:47:39.399232 unknown[655]: fetched base config from "system" Feb 12 20:47:39.399287 unknown[655]: fetched base config from "system" Feb 12 20:47:39.399304 unknown[655]: fetched user config from "openstack" Feb 12 20:47:39.401135 ignition[655]: fetch: fetch complete Feb 12 20:47:39.401161 ignition[655]: fetch: fetch passed Feb 12 20:47:39.401251 ignition[655]: Ignition finished successfully Feb 12 20:47:39.404423 systemd[1]: Finished ignition-fetch.service. Feb 12 20:47:39.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.407526 systemd[1]: Starting ignition-kargs.service... Feb 12 20:47:39.417316 ignition[661]: Ignition 2.14.0 Feb 12 20:47:39.417333 ignition[661]: Stage: kargs Feb 12 20:47:39.417441 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:39.417461 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:39.422019 systemd[1]: Finished ignition-kargs.service. Feb 12 20:47:39.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:39.418358 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:39.419903 ignition[661]: kargs: kargs passed Feb 12 20:47:39.419944 ignition[661]: Ignition finished successfully Feb 12 20:47:39.433670 systemd[1]: Starting ignition-disks.service... Feb 12 20:47:39.444353 ignition[667]: Ignition 2.14.0 Feb 12 20:47:39.445155 ignition[667]: Stage: disks Feb 12 20:47:39.445739 ignition[667]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:39.446451 ignition[667]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:39.447450 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:39.449690 ignition[667]: disks: disks passed Feb 12 20:47:39.450228 ignition[667]: Ignition finished successfully Feb 12 20:47:39.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.452199 systemd[1]: Finished ignition-disks.service. Feb 12 20:47:39.452762 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:47:39.453244 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:47:39.454796 systemd[1]: Reached target local-fs.target. Feb 12 20:47:39.456508 systemd[1]: Reached target sysinit.target. Feb 12 20:47:39.458074 systemd[1]: Reached target basic.target. Feb 12 20:47:39.460415 systemd[1]: Starting systemd-fsck-root.service... Feb 12 20:47:39.486153 systemd-fsck[675]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:47:39.501709 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:47:39.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:47:39.504553 systemd[1]: Mounting sysroot.mount... Feb 12 20:47:39.528919 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:47:39.530122 systemd[1]: Mounted sysroot.mount. Feb 12 20:47:39.532563 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:47:39.537339 systemd[1]: Mounting sysroot-usr.mount... Feb 12 20:47:39.540659 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:47:39.544259 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 12 20:47:39.546990 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:47:39.547941 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:47:39.555066 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:47:39.564774 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:47:39.567596 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:47:39.588625 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:47:39.606887 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (682) Feb 12 20:47:39.611094 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:47:39.611129 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:47:39.611142 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:47:39.611210 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:47:39.619020 initrd-setup-root[719]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:47:39.627226 initrd-setup-root[729]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:47:39.630681 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:47:39.709596 systemd[1]: Finished initrd-setup-root.service. 
Feb 12 20:47:39.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.710891 systemd[1]: Starting ignition-mount.service... Feb 12 20:47:39.714387 systemd[1]: Starting sysroot-boot.service... Feb 12 20:47:39.724411 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:47:39.724528 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:47:39.749682 ignition[750]: INFO : Ignition 2.14.0 Feb 12 20:47:39.749682 ignition[750]: INFO : Stage: mount Feb 12 20:47:39.752226 ignition[750]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:39.752226 ignition[750]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:39.752226 ignition[750]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:39.760433 coreos-metadata[681]: Feb 12 20:47:39.758 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 12 20:47:39.761660 ignition[750]: INFO : mount: mount passed Feb 12 20:47:39.761660 ignition[750]: INFO : Ignition finished successfully Feb 12 20:47:39.762732 systemd[1]: Finished ignition-mount.service. Feb 12 20:47:39.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.772306 systemd[1]: Finished sysroot-boot.service. Feb 12 20:47:39.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:39.774361 coreos-metadata[681]: Feb 12 20:47:39.774 INFO Fetch successful Feb 12 20:47:39.775006 coreos-metadata[681]: Feb 12 20:47:39.774 INFO wrote hostname ci-3510-3-2-f-bcfc1a2c45.novalocal to /sysroot/etc/hostname Feb 12 20:47:39.778733 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 12 20:47:39.778825 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 12 20:47:39.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:39.781243 systemd[1]: Starting ignition-files.service... Feb 12 20:47:39.788815 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:47:39.797912 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (759) Feb 12 20:47:39.801353 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:47:39.801377 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:47:39.801388 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:47:39.809470 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 12 20:47:39.819468 ignition[778]: INFO : Ignition 2.14.0 Feb 12 20:47:39.819468 ignition[778]: INFO : Stage: files Feb 12 20:47:39.820668 ignition[778]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:47:39.820668 ignition[778]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 12 20:47:39.820668 ignition[778]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 12 20:47:39.823144 ignition[778]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:47:39.824018 ignition[778]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:47:39.824018 ignition[778]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:47:39.828381 ignition[778]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:47:39.829693 ignition[778]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:47:39.831529 unknown[778]: wrote ssh authorized keys file for user: core Feb 12 20:47:39.832685 ignition[778]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:47:39.832685 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:47:39.832685 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 20:47:39.919146 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:47:40.541134 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:47:40.543562 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: 
op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:47:40.543562 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 12 20:47:40.561024 systemd-networkd[632]: eth0: Gained IPv6LL Feb 12 20:47:41.123996 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:47:41.847237 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 12 20:47:41.847237 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 12 20:47:41.847237 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:47:41.855211 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:47:42.333611 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:47:42.815085 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 12 20:47:42.815085 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 12 20:47:42.815085 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 12 20:47:42.832361 
ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:47:42.832361 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:47:42.832361 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 20:47:42.994981 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 20:47:43.922290 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 20:47:43.923907 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:47:43.924777 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:47:43.925679 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 20:47:44.039643 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 20:47:45.029812 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 20:47:45.029812 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:47:45.029812 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:47:45.037576 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 20:47:45.142162 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 12 20:47:47.449422 ignition[778]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 20:47:47.450983 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:47:47.450983 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:47:47.450983 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:47:47.450983 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:47:47.450983 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 20:47:47.699059 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 12 20:47:48.144834 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 20:47:48.144834 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(11): op(12): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(13): [started] processing unit "coreos-metadata.service"
Feb 12 20:47:48.149108 ignition[778]: INFO : files: op(13): op(14): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 12 20:47:48.212738 kernel: kauditd_printk_skb: 27 callbacks suppressed
Feb 12 20:47:48.212788 kernel: audit: type=1130 audit(1707770868.155:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.212820 kernel: audit: type=1130 audit(1707770868.181:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.212848 kernel: audit: type=1131 audit(1707770868.181:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.212918 kernel: audit: type=1130 audit(1707770868.200:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(13): [finished] processing unit "coreos-metadata.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(15): [started] processing unit "containerd.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(15): [finished] processing unit "containerd.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(19): [started] processing unit "prepare-critools.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(19): [finished] processing unit "prepare-critools.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(1b): [started] processing unit "prepare-helm.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 20:47:48.213242 ignition[778]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 20:47:48.247246 kernel: audit: type=1130 audit(1707770868.231:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.247280 kernel: audit: type=1131 audit(1707770868.231:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.154324 systemd[1]: Finished ignition-files.service.
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1b): [finished] processing unit "prepare-helm.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 20:47:48.248529 ignition[778]: INFO : files: files passed
Feb 12 20:47:48.248529 ignition[778]: INFO : Ignition finished successfully
Feb 12 20:47:48.273124 kernel: audit: type=1130 audit(1707770868.256:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.157534 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 20:47:48.273892 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 20:47:48.173396 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 20:47:48.175007 systemd[1]: Starting ignition-quench.service...
Feb 12 20:47:48.181530 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 20:47:48.281248 kernel: audit: type=1131 audit(1707770868.276:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.181687 systemd[1]: Finished ignition-quench.service.
Feb 12 20:47:48.182738 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 20:47:48.201095 systemd[1]: Reached target ignition-complete.target.
Feb 12 20:47:48.213830 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 20:47:48.231041 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 20:47:48.231134 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 20:47:48.232357 systemd[1]: Reached target initrd-fs.target.
Feb 12 20:47:48.240156 systemd[1]: Reached target initrd.target.
Feb 12 20:47:48.241340 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 20:47:48.241998 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 20:47:48.252637 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 20:47:48.261107 systemd[1]: Starting initrd-cleanup.service...
Feb 12 20:47:48.295371 kernel: audit: type=1131 audit(1707770868.290:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.271629 systemd[1]: Stopped target nss-lookup.target.
Feb 12 20:47:48.274110 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 20:47:48.300405 kernel: audit: type=1131 audit(1707770868.295:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.275273 systemd[1]: Stopped target timers.target.
Feb 12 20:47:48.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.276588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 20:47:48.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.276712 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 20:47:48.277603 systemd[1]: Stopped target initrd.target.
Feb 12 20:47:48.281744 systemd[1]: Stopped target basic.target.
Feb 12 20:47:48.307550 iscsid[641]: iscsid shutting down.
Feb 12 20:47:48.282625 systemd[1]: Stopped target ignition-complete.target.
Feb 12 20:47:48.283609 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 20:47:48.284537 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 20:47:48.285454 systemd[1]: Stopped target remote-fs.target.
Feb 12 20:47:48.286311 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 20:47:48.287192 systemd[1]: Stopped target sysinit.target.
Feb 12 20:47:48.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.288148 systemd[1]: Stopped target local-fs.target.
Feb 12 20:47:48.288966 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 20:47:48.290133 systemd[1]: Stopped target swap.target.
Feb 12 20:47:48.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.317203 ignition[816]: INFO : Ignition 2.14.0
Feb 12 20:47:48.317203 ignition[816]: INFO : Stage: umount
Feb 12 20:47:48.317203 ignition[816]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:47:48.317203 ignition[816]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 12 20:47:48.317203 ignition[816]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 12 20:47:48.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.290920 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 20:47:48.291064 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 20:47:48.291876 systemd[1]: Stopped target cryptsetup.target.
Feb 12 20:47:48.295846 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 20:47:48.296014 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 20:47:48.296809 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 20:47:48.296971 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 20:47:48.301049 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 20:47:48.301198 systemd[1]: Stopped ignition-files.service.
Feb 12 20:47:48.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.302669 systemd[1]: Stopping ignition-mount.service...
Feb 12 20:47:48.303363 systemd[1]: Stopping iscsid.service...
Feb 12 20:47:48.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.331790 ignition[816]: INFO : umount: umount passed
Feb 12 20:47:48.331790 ignition[816]: INFO : Ignition finished successfully
Feb 12 20:47:48.306812 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 20:47:48.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.310949 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 20:47:48.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.312522 systemd[1]: Stopping sysroot-boot.service...
Feb 12 20:47:48.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.313482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 20:47:48.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.313644 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 20:47:48.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.314671 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 20:47:48.319004 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 20:47:48.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.325884 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 20:47:48.326005 systemd[1]: Stopped iscsid.service.
Feb 12 20:47:48.328688 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 20:47:48.328770 systemd[1]: Finished initrd-cleanup.service.
Feb 12 20:47:48.333297 systemd[1]: Stopping iscsiuio.service...
Feb 12 20:47:48.333891 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 20:47:48.333980 systemd[1]: Stopped ignition-mount.service.
Feb 12 20:47:48.334540 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 20:47:48.334618 systemd[1]: Stopped iscsiuio.service.
Feb 12 20:47:48.335735 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 20:47:48.335773 systemd[1]: Stopped ignition-disks.service.
Feb 12 20:47:48.336616 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 20:47:48.336652 systemd[1]: Stopped ignition-kargs.service.
Feb 12 20:47:48.337474 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 20:47:48.337509 systemd[1]: Stopped ignition-fetch.service.
Feb 12 20:47:48.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.338621 systemd[1]: Stopped target network.target.
Feb 12 20:47:48.339462 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 20:47:48.339500 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 20:47:48.340428 systemd[1]: Stopped target paths.target.
Feb 12 20:47:48.341336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 20:47:48.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.345894 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 20:47:48.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.346758 systemd[1]: Stopped target slices.target.
Feb 12 20:47:48.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.347877 systemd[1]: Stopped target sockets.target.
Feb 12 20:47:48.348775 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 20:47:48.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.348807 systemd[1]: Closed iscsid.socket.
Feb 12 20:47:48.349693 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 20:47:48.349721 systemd[1]: Closed iscsiuio.socket.
Feb 12 20:47:48.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.350513 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 20:47:48.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.350552 systemd[1]: Stopped ignition-setup.service.
Feb 12 20:47:48.351552 systemd[1]: Stopping systemd-networkd.service...
Feb 12 20:47:48.352826 systemd[1]: Stopping systemd-resolved.service...
Feb 12 20:47:48.354680 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:47:48.355134 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 20:47:48.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.355209 systemd[1]: Stopped sysroot-boot.service.
Feb 12 20:47:48.355845 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 20:47:48.355898 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 20:47:48.356145 systemd-networkd[632]: eth0: DHCPv6 lease lost
Feb 12 20:47:48.374000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 20:47:48.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.357910 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 20:47:48.358002 systemd[1]: Stopped systemd-networkd.service.
Feb 12 20:47:48.376000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 20:47:48.358737 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 20:47:48.358765 systemd[1]: Closed systemd-networkd.socket.
Feb 12 20:47:48.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.361422 systemd[1]: Stopping network-cleanup.service...
Feb 12 20:47:48.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.361910 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 20:47:48.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.361988 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 20:47:48.362562 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:47:48.362613 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:47:48.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.364765 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 20:47:48.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:47:48.364811 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 20:47:48.369424 systemd[1]: Stopping systemd-udevd.service...
Feb 12 20:47:48.371267 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:47:48.371707 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 20:47:48.371804 systemd[1]: Stopped systemd-resolved.service.
Feb 12 20:47:48.375068 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 20:47:48.375215 systemd[1]: Stopped systemd-udevd.service.
Feb 12 20:47:48.377425 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 20:47:48.377468 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 20:47:48.378495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:47:48.378533 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:47:48.379383 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:47:48.379462 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:47:48.380708 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:47:48.380750 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:47:48.381543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:47:48.381589 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:47:48.383149 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:47:48.384077 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:47:48.402000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:47:48.403000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:47:48.403000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:47:48.384124 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:47:48.390601 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 20:47:48.390692 systemd[1]: Stopped network-cleanup.service.
Feb 12 20:47:48.391473 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:47:48.391560 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:47:48.392130 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:47:48.406000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:47:48.406000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:47:48.393747 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:47:48.402026 systemd[1]: Switching root.
Feb 12 20:47:48.423505 systemd-journald[184]: Journal stopped
Feb 12 20:47:52.662199 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 12 20:47:52.662254 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 20:47:52.662269 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 20:47:52.662281 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:47:52.662292 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 20:47:52.662303 kernel: SELinux: policy capability open_perms=1
Feb 12 20:47:52.662315 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 20:47:52.662328 kernel: SELinux: policy capability always_check_network=0
Feb 12 20:47:52.662339 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 20:47:52.662350 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 20:47:52.662361 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 20:47:52.662371 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 20:47:52.662384 systemd[1]: Successfully loaded SELinux policy in 95.045ms.
Feb 12 20:47:52.662405 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.233ms.
Feb 12 20:47:52.662420 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:47:52.662434 systemd[1]: Detected virtualization kvm.
Feb 12 20:47:52.662446 systemd[1]: Detected architecture x86-64.
Feb 12 20:47:52.662470 systemd[1]: Detected first boot.
Feb 12 20:47:52.662485 systemd[1]: Hostname set to .
Feb 12 20:47:52.662502 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:47:52.662514 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:47:52.662526 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:47:52.662541 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:47:52.662557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:47:52.662573 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:47:52.662586 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:47:52.662598 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 12 20:47:52.662611 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:47:52.662623 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:47:52.662638 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 20:47:52.663753 systemd[1]: Created slice system-getty.slice.
Feb 12 20:47:52.663767 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:47:52.663780 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:47:52.663792 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:47:52.663805 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:47:52.663817 systemd[1]: Created slice user.slice.
Feb 12 20:47:52.663830 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:47:52.663841 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:47:52.665366 systemd[1]: Set up automount boot.automount.
Feb 12 20:47:52.665386 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:47:52.665399 systemd[1]: Reached target integritysetup.target.
Feb 12 20:47:52.665411 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:47:52.665424 systemd[1]: Reached target remote-fs.target.
Feb 12 20:47:52.665892 systemd[1]: Reached target slices.target.
Feb 12 20:47:52.665908 systemd[1]: Reached target swap.target.
Feb 12 20:47:52.665923 systemd[1]: Reached target torcx.target.
Feb 12 20:47:52.665936 systemd[1]: Reached target veritysetup.target. Feb 12 20:47:52.665949 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:47:52.665961 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:47:52.665972 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 20:47:52.665984 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 20:47:52.665996 systemd[1]: Listening on systemd-journald.socket. Feb 12 20:47:52.666009 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:47:52.666020 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:47:52.666034 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:47:52.666046 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:47:52.666059 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:47:52.666070 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:47:52.666082 systemd[1]: Mounting media.mount... Feb 12 20:47:52.666095 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:47:52.666107 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:47:52.666119 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:47:52.666131 systemd[1]: Mounting tmp.mount... Feb 12 20:47:52.666145 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:47:52.666157 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:47:52.666172 systemd[1]: Starting kmod-static-nodes.service... Feb 12 20:47:52.666184 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:47:52.666196 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:47:52.666209 systemd[1]: Starting modprobe@drm.service... Feb 12 20:47:52.666220 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:47:52.666232 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:47:52.666245 systemd[1]: Starting modprobe@loop.service... 
Feb 12 20:47:52.666259 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:47:52.666272 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 20:47:52.666284 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 20:47:52.666296 systemd[1]: Starting systemd-journald.service... Feb 12 20:47:52.666308 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:47:52.666320 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:47:52.666331 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:47:52.666343 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:47:52.666356 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:47:52.666370 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:47:52.666382 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:47:52.666394 systemd[1]: Mounted media.mount. Feb 12 20:47:52.666405 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:47:52.666417 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:47:52.666429 systemd[1]: Mounted tmp.mount. Feb 12 20:47:52.666441 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:47:52.666454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 20:47:52.666466 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:47:52.666480 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:47:52.666492 systemd[1]: Finished modprobe@drm.service. Feb 12 20:47:52.666504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:47:52.666520 systemd-journald[948]: Journal started Feb 12 20:47:52.666568 systemd-journald[948]: Runtime Journal (/run/log/journal/0feb2bc9a7be460d9f7f9c4b5db49401) is 4.9M, max 39.5M, 34.5M free. 
Feb 12 20:47:52.513000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:47:52.513000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 20:47:52.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.658000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:47:52.658000 audit[948]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd8bf03b40 a2=4000 a3=7ffd8bf03bdc items=0 ppid=1 pid=948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:47:52.658000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:47:52.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:52.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.683744 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:47:52.683797 systemd[1]: Started systemd-journald.service. Feb 12 20:47:52.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.675568 systemd[1]: Finished systemd-modules-load.service. 
Feb 12 20:47:52.679415 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:47:52.680197 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:47:52.680944 systemd[1]: Reached target network-pre.target. Feb 12 20:47:52.681432 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:47:52.686914 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:47:52.688912 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:47:52.689461 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:47:52.690655 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:47:52.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.716270 systemd-journald[948]: Time spent on flushing to /var/log/journal/0feb2bc9a7be460d9f7f9c4b5db49401 is 41.356ms for 1056 entries. Feb 12 20:47:52.716270 systemd-journald[948]: System Journal (/var/log/journal/0feb2bc9a7be460d9f7f9c4b5db49401) is 8.0M, max 584.8M, 576.8M free. Feb 12 20:47:52.796019 systemd-journald[948]: Received client request to flush runtime journal. Feb 12 20:47:52.796068 kernel: loop: module loaded Feb 12 20:47:52.796099 kernel: fuse: init (API version 7.34) Feb 12 20:47:52.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:52.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.701535 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:47:52.705014 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:47:52.705271 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:47:52.707569 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:47:52.712177 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:47:52.730207 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 20:47:52.730477 systemd[1]: Finished modprobe@loop.service. 
Feb 12 20:47:52.731091 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:47:52.735008 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:47:52.742815 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:47:52.744059 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:47:52.746076 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:47:52.747894 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:47:52.748530 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:47:52.750365 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:47:52.797157 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:47:52.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.817622 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 20:47:52.819554 systemd[1]: Starting systemd-sysusers.service... Feb 12 20:47:52.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.841153 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:47:52.843506 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:47:52.853254 udevadm[1009]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:47:52.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.881714 systemd[1]: Finished systemd-sysusers.service. 
Feb 12 20:47:52.883447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:47:52.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:52.930714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:47:53.783537 kernel: kauditd_printk_skb: 77 callbacks suppressed Feb 12 20:47:53.783670 kernel: audit: type=1130 audit(1707770873.778:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.777116 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:47:53.780739 systemd[1]: Starting systemd-udevd.service... Feb 12 20:47:53.832563 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Feb 12 20:47:53.876123 systemd[1]: Started systemd-udevd.service. Feb 12 20:47:53.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.891893 kernel: audit: type=1130 audit(1707770873.876:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.892980 systemd[1]: Starting systemd-networkd.service... Feb 12 20:47:53.906099 systemd[1]: Starting systemd-userdbd.service... 
Feb 12 20:47:53.973396 systemd[1]: Started systemd-userdbd.service. Feb 12 20:47:53.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.977908 kernel: audit: type=1130 audit(1707770873.973:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:53.978876 systemd[1]: Found device dev-ttyS0.device. Feb 12 20:47:54.032903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:47:54.048973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:47:54.049000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:47:54.059881 kernel: audit: type=1400 audit(1707770874.049:119): avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:47:54.072986 systemd-networkd[1032]: lo: Link UP Feb 12 20:47:54.072993 systemd-networkd[1032]: lo: Gained carrier Feb 12 20:47:54.074070 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:47:54.073784 systemd-networkd[1032]: Enumeration completed Feb 12 20:47:54.073927 systemd[1]: Started systemd-networkd.service. Feb 12 20:47:54.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:54.078652 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:47:54.078918 kernel: audit: type=1130 audit(1707770874.073:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.080235 systemd-networkd[1032]: eth0: Link UP Feb 12 20:47:54.080308 systemd-networkd[1032]: eth0: Gained carrier Feb 12 20:47:54.093039 systemd-networkd[1032]: eth0: DHCPv4 address 172.24.4.188/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 12 20:47:54.049000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559222b9e950 a1=32194 a2=7f142d324bc5 a3=5 items=108 ppid=1016 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:47:54.109907 kernel: audit: type=1300 audit(1707770874.049:119): arch=c000003e syscall=175 success=yes exit=0 a0=559222b9e950 a1=32194 a2=7f142d324bc5 a3=5 items=108 ppid=1016 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:47:54.109944 kernel: audit: type=1307 audit(1707770874.049:119): cwd="/" Feb 12 20:47:54.109963 kernel: audit: type=1302 audit(1707770874.049:119): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: CWD cwd="/" Feb 12 20:47:54.049000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:47:54.049000 audit: PATH item=1 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.117204 kernel: audit: type=1302 audit(1707770874.049:119): item=1 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=2 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.120807 kernel: audit: type=1302 audit(1707770874.049:119): item=2 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=3 name=(null) inode=13286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=4 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=5 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=6 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=7 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=8 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=9 name=(null) inode=13289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=10 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=11 name=(null) inode=13290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=12 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=13 name=(null) inode=13291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=14 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=15 name=(null) inode=13292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=16 name=(null) inode=13288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:47:54.049000 audit: PATH item=17 name=(null) inode=13293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=18 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=19 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=20 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=21 name=(null) inode=13295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=22 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=23 name=(null) inode=13296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=24 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=25 name=(null) inode=13297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=26 
name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=27 name=(null) inode=13298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=28 name=(null) inode=13294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=29 name=(null) inode=13299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=30 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=31 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=32 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=33 name=(null) inode=13301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=34 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=35 name=(null) inode=13302 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=36 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=37 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=38 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=39 name=(null) inode=13304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=40 name=(null) inode=13300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=41 name=(null) inode=13305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=42 name=(null) inode=13285 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=43 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=44 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=45 name=(null) inode=13307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=46 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=47 name=(null) inode=13308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=48 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=49 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=50 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=51 name=(null) inode=13310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=52 name=(null) inode=13306 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=53 name=(null) inode=13311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=55 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=56 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=57 name=(null) inode=14337 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=58 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=59 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=60 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=61 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=62 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=63 name=(null) inode=14340 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=64 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=65 name=(null) inode=14341 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=66 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=67 name=(null) inode=14342 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=68 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=69 name=(null) inode=14343 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=70 name=(null) inode=14339 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=71 name=(null) inode=14344 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:47:54.049000 audit: PATH item=72 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=73 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=74 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=75 name=(null) inode=14346 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=76 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=77 name=(null) inode=14347 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=78 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=79 name=(null) inode=14348 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=80 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=81 
name=(null) inode=14349 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=82 name=(null) inode=14345 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=83 name=(null) inode=14350 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=84 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=85 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=86 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=87 name=(null) inode=14352 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=88 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=89 name=(null) inode=14353 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=90 name=(null) inode=14351 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=91 name=(null) inode=14354 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=92 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=93 name=(null) inode=14355 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=94 name=(null) inode=14351 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=95 name=(null) inode=14356 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=96 name=(null) inode=13312 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=97 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=98 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=99 name=(null) inode=14358 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=100 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=101 name=(null) inode=14359 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=102 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=103 name=(null) inode=14360 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=104 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=105 name=(null) inode=14361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=106 name=(null) inode=14357 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PATH item=107 name=(null) inode=14362 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:47:54.049000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:47:54.125872 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 
0x700, revision 0 Feb 12 20:47:54.129900 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:47:54.135891 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:47:54.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.177991 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:47:54.181831 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:47:54.217664 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:47:54.259660 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:47:54.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.261070 systemd[1]: Reached target cryptsetup.target. Feb 12 20:47:54.264530 systemd[1]: Starting lvm2-activation.service... Feb 12 20:47:54.275071 lvm[1048]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:47:54.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.313997 systemd[1]: Finished lvm2-activation.service. Feb 12 20:47:54.315354 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:47:54.316465 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:47:54.316508 systemd[1]: Reached target local-fs.target. Feb 12 20:47:54.317608 systemd[1]: Reached target machines.target. 
Feb 12 20:47:54.323159 systemd[1]: Starting ldconfig.service... Feb 12 20:47:54.325722 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:47:54.325809 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:47:54.329355 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:47:54.333206 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:47:54.339257 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:47:54.342122 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:47:54.342229 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:47:54.346630 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:47:54.358415 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1051 (bootctl) Feb 12 20:47:54.363291 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:47:54.387762 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:47:54.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.389548 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:47:54.418336 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:47:54.423123 systemd-tmpfiles[1054]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Feb 12 20:47:54.772222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:47:54.773684 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:47:54.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.930924 systemd-fsck[1060]: fsck.fat 4.2 (2021-01-31) Feb 12 20:47:54.930924 systemd-fsck[1060]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:47:54.934598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:47:54.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:54.938689 systemd[1]: Mounting boot.mount... Feb 12 20:47:54.963581 systemd[1]: Mounted boot.mount. Feb 12 20:47:54.995998 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:47:54.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:55.073087 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:47:55.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:55.074979 systemd[1]: Starting audit-rules.service... Feb 12 20:47:55.076557 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:47:55.078227 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:47:55.086966 systemd[1]: Starting systemd-resolved.service... 
Feb 12 20:47:55.089005 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:47:55.090414 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:47:55.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:55.096741 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:47:55.100144 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:47:55.110000 audit[1073]: SYSTEM_BOOT pid=1073 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:47:55.113504 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:47:55.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:47:55.153354 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:47:55.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:47:55.174000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:47:55.174000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd26d99650 a2=420 a3=0 items=0 ppid=1068 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:47:55.174000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:47:55.176311 augenrules[1090]: No rules Feb 12 20:47:55.176327 systemd[1]: Finished audit-rules.service. Feb 12 20:47:55.194133 systemd-resolved[1071]: Positive Trust Anchors: Feb 12 20:47:55.194146 systemd-resolved[1071]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:47:55.194182 systemd-resolved[1071]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:47:55.211838 systemd-resolved[1071]: Using system hostname 'ci-3510-3-2-f-bcfc1a2c45.novalocal'. Feb 12 20:47:55.214785 systemd[1]: Started systemd-resolved.service. Feb 12 20:47:55.215490 systemd[1]: Reached target network.target. Feb 12 20:47:55.215986 systemd[1]: Reached target nss-lookup.target. Feb 12 20:47:55.231252 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:47:55.231938 systemd[1]: Reached target time-set.target. 
Feb 12 20:47:55.924719 systemd-timesyncd[1072]: Contacted time server 45.128.41.10:123 (0.flatcar.pool.ntp.org). Feb 12 20:47:55.924834 systemd-resolved[1071]: Clock change detected. Flushing caches. Feb 12 20:47:55.925783 systemd-timesyncd[1072]: Initial clock synchronization to Mon 2024-02-12 20:47:55.924515 UTC. Feb 12 20:47:56.063608 ldconfig[1050]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:47:56.079857 systemd[1]: Finished ldconfig.service. Feb 12 20:47:56.083874 systemd[1]: Starting systemd-update-done.service... Feb 12 20:47:56.099565 systemd[1]: Finished systemd-update-done.service. Feb 12 20:47:56.100981 systemd[1]: Reached target sysinit.target. Feb 12 20:47:56.102228 systemd[1]: Started motdgen.path. Feb 12 20:47:56.103334 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:47:56.104921 systemd[1]: Started logrotate.timer. Feb 12 20:47:56.106358 systemd[1]: Started mdadm.timer. Feb 12 20:47:56.107410 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:47:56.108519 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:47:56.108584 systemd[1]: Reached target paths.target. Feb 12 20:47:56.109707 systemd[1]: Reached target timers.target. Feb 12 20:47:56.111681 systemd[1]: Listening on dbus.socket. Feb 12 20:47:56.115185 systemd[1]: Starting docker.socket... Feb 12 20:47:56.119209 systemd[1]: Listening on sshd.socket. Feb 12 20:47:56.120502 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:47:56.121435 systemd[1]: Listening on docker.socket. Feb 12 20:47:56.122536 systemd[1]: Reached target sockets.target. Feb 12 20:47:56.123680 systemd[1]: Reached target basic.target. 
Feb 12 20:47:56.125027 systemd[1]: System is tainted: cgroupsv1 Feb 12 20:47:56.125155 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:47:56.125208 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:47:56.127552 systemd[1]: Starting containerd.service... Feb 12 20:47:56.130940 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:47:56.134180 systemd[1]: Starting dbus.service... Feb 12 20:47:56.139557 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:47:56.141384 systemd[1]: Starting extend-filesystems.service... Feb 12 20:47:56.141894 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:47:56.143679 systemd[1]: Starting motdgen.service... Feb 12 20:47:56.147817 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:47:56.149406 systemd[1]: Starting prepare-critools.service... Feb 12 20:47:56.154077 systemd[1]: Starting prepare-helm.service... Feb 12 20:47:56.155844 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:47:56.160257 systemd[1]: Starting sshd-keygen.service... Feb 12 20:47:56.162655 systemd[1]: Starting systemd-logind.service... Feb 12 20:47:56.164833 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:47:56.164900 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:47:56.172159 systemd[1]: Starting update-engine.service... Feb 12 20:47:56.174626 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:47:56.186352 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 12 20:47:56.186590 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:47:56.225106 tar[1126]: linux-amd64/helm Feb 12 20:47:56.225606 tar[1124]: ./ Feb 12 20:47:56.225606 tar[1124]: ./macvlan Feb 12 20:47:56.225829 tar[1125]: crictl Feb 12 20:47:56.207363 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:47:56.226047 jq[1109]: false Feb 12 20:47:56.230212 jq[1122]: true Feb 12 20:47:56.207630 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:47:56.244277 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:47:56.246013 jq[1138]: true Feb 12 20:47:56.261936 systemd[1]: Finished motdgen.service. Feb 12 20:47:56.276249 systemd[1]: Started dbus.service. Feb 12 20:47:56.276088 dbus-daemon[1106]: [system] SELinux support is enabled Feb 12 20:47:56.278800 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:47:56.278825 systemd[1]: Reached target system-config.target. Feb 12 20:47:56.279316 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:47:56.279332 systemd[1]: Reached target user-config.target. 
Feb 12 20:47:56.306239 extend-filesystems[1110]: Found vda Feb 12 20:47:56.307714 extend-filesystems[1110]: Found vda1 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda2 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda3 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found usr Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda4 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda6 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda7 Feb 12 20:47:56.309672 extend-filesystems[1110]: Found vda9 Feb 12 20:47:56.309672 extend-filesystems[1110]: Checking size of /dev/vda9 Feb 12 20:47:56.345370 env[1135]: time="2024-02-12T20:47:56.315874157Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:47:56.345601 coreos-metadata[1105]: Feb 12 20:47:56.324 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 12 20:47:56.358922 update_engine[1121]: I0212 20:47:56.354708 1121 main.cc:92] Flatcar Update Engine starting Feb 12 20:47:56.377342 update_engine[1121]: I0212 20:47:56.363163 1121 update_check_scheduler.cc:74] Next update check in 9m31s Feb 12 20:47:56.377380 extend-filesystems[1110]: Resized partition /dev/vda9 Feb 12 20:47:56.361313 systemd[1]: Started update-engine.service. Feb 12 20:47:56.378382 systemd[1]: Started locksmithd.service. Feb 12 20:47:56.392561 extend-filesystems[1178]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:47:56.417002 systemd-networkd[1032]: eth0: Gained IPv6LL Feb 12 20:47:56.422825 bash[1174]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:47:56.423215 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 12 20:47:56.435123 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 12 20:47:56.451654 systemd-logind[1120]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:47:56.451677 systemd-logind[1120]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:47:56.458016 systemd-logind[1120]: New seat seat0. Feb 12 20:47:56.460496 systemd[1]: Started systemd-logind.service. Feb 12 20:47:56.460683 tar[1124]: ./static Feb 12 20:47:56.478937 env[1135]: time="2024-02-12T20:47:56.475007392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:47:56.478937 env[1135]: time="2024-02-12T20:47:56.475162693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480528517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480567229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480908018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480928437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480947823Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.480960136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.481037812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.481261341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.481404379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:47:56.481611 env[1135]: time="2024-02-12T20:47:56.481422083Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:47:56.481883 env[1135]: time="2024-02-12T20:47:56.481470974Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:47:56.481883 env[1135]: time="2024-02-12T20:47:56.481486032Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:47:56.590847 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590490380Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590552867Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590569749Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590619061Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590638177Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590654638Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590676669Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590692569Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590707197Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590761268Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590779041Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.696842 env[1135]: time="2024-02-12T20:47:56.590793258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 12 20:47:56.697561 coreos-metadata[1105]: Feb 12 20:47:56.685 INFO Fetch successful Feb 12 20:47:56.697561 coreos-metadata[1105]: Feb 12 20:47:56.685 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.700343610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.702043929Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.702910304Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.702981708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703021662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703148160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703203554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703239932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703300706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703333127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703365918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703397087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703427574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.704158 env[1135]: time="2024-02-12T20:47:56.703462880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:47:56.701758 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:47:56.705487 extend-filesystems[1178]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:47:56.705487 extend-filesystems[1178]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 12 20:47:56.705487 extend-filesystems[1178]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 12 20:47:56.735302 coreos-metadata[1105]: Feb 12 20:47:56.700 INFO Fetch successful Feb 12 20:47:56.702016 systemd[1]: Finished extend-filesystems.service. Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.711955929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712057189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712137489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712173727Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
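(The `extend-filesystems` messages above show an on-line grow of the root ext4 filesystem on `/dev/vda9`. A rough sketch of the equivalent manual steps — device and partition names are taken from this log and are only illustrative; these commands touch real block devices and must run as root:)

```shell
# Grow partition 9 of /dev/vda to fill the disk, then resize the
# mounted ext4 filesystem on-line. growpart ships with cloud-utils;
# ext4 supports growing while mounted at /.
growpart /dev/vda 9
resize2fs /dev/vda9
```
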
type=io.containerd.tracing.processor.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712253978Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712454283Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712542890Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:47:56.735593 env[1135]: time="2024-02-12T20:47:56.712671901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:47:56.736806 extend-filesystems[1110]: Resized filesystem in /dev/vda9 Feb 12 20:47:56.717816 unknown[1105]: wrote ssh authorized keys file for user: core Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.713487752Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.713711151Z" level=info msg="Connect containerd service" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.719524604Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.733465129Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.733792343Z" level=info msg="Start subscribing containerd event" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.733914562Z" level=info msg="Start recovering state" Feb 12 20:47:56.742832 env[1135]: 
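(The `failed to load cni during init ... no network config found in /etc/cni/net.d` error above is expected at this point: containerd's CRI plugin starts before any network addon has written a CNI config. A minimal, illustrative loopback-only config that would satisfy the check — in a real cluster the network addon, e.g. flannel or calico, writes the actual config instead:)

```shell
# Illustrative only: give the CRI plugin *some* CNI config so the
# "no network config found in /etc/cni/net.d" error clears. A real
# cluster's network addon replaces this with its own conflist.
mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/99-loopback.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "lo",
  "type": "loopback"
}
EOF
```
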
time="2024-02-12T20:47:56.734119957Z" level=info msg="Start event monitor" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.734215146Z" level=info msg="Start snapshots syncer" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.734242126Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.734494009Z" level=info msg="Start streaming server" Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.736256124Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:47:56.742832 env[1135]: time="2024-02-12T20:47:56.737707826Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:47:56.755020 tar[1124]: ./vlan Feb 12 20:47:56.752856 systemd[1]: Started containerd.service. Feb 12 20:47:56.759752 env[1135]: time="2024-02-12T20:47:56.759706054Z" level=info msg="containerd successfully booted in 0.444504s" Feb 12 20:47:56.766935 update-ssh-keys[1190]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:47:56.767186 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 12 20:47:56.841376 tar[1124]: ./portmap Feb 12 20:47:56.882883 tar[1124]: ./host-local Feb 12 20:47:56.924021 tar[1124]: ./vrf Feb 12 20:47:56.989058 tar[1124]: ./bridge Feb 12 20:47:57.066073 tar[1124]: ./tuning Feb 12 20:47:57.130920 tar[1124]: ./firewall Feb 12 20:47:57.208638 tar[1124]: ./host-device Feb 12 20:47:57.279272 tar[1124]: ./sbr Feb 12 20:47:57.351100 tar[1124]: ./loopback Feb 12 20:47:57.411480 tar[1124]: ./dhcp Feb 12 20:47:57.469537 systemd[1]: Created slice system-sshd.slice. Feb 12 20:47:57.471296 tar[1126]: linux-amd64/LICENSE Feb 12 20:47:57.477823 tar[1126]: linux-amd64/README.md Feb 12 20:47:57.482186 systemd[1]: Finished prepare-helm.service. Feb 12 20:47:57.484456 locksmithd[1179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:47:57.553292 systemd[1]: Finished prepare-critools.service. 
Feb 12 20:47:57.557834 tar[1124]: ./ptp Feb 12 20:47:57.591568 tar[1124]: ./ipvlan Feb 12 20:47:57.624669 tar[1124]: ./bandwidth Feb 12 20:47:57.725020 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:47:59.106496 sshd_keygen[1145]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:47:59.132263 systemd[1]: Finished sshd-keygen.service. Feb 12 20:47:59.134304 systemd[1]: Starting issuegen.service... Feb 12 20:47:59.135815 systemd[1]: Started sshd@0-172.24.4.188:22-172.24.4.1:53942.service. Feb 12 20:47:59.146076 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:47:59.146309 systemd[1]: Finished issuegen.service. Feb 12 20:47:59.148222 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:47:59.158197 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:47:59.160109 systemd[1]: Started getty@tty1.service. Feb 12 20:47:59.161716 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:47:59.162453 systemd[1]: Reached target getty.target. Feb 12 20:47:59.163064 systemd[1]: Reached target multi-user.target. Feb 12 20:47:59.164999 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:47:59.174898 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:47:59.175122 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:47:59.189730 systemd[1]: Startup finished in 13.917s (kernel) + 9.917s (userspace) = 23.834s. Feb 12 20:48:00.592945 sshd[1220]: Accepted publickey for core from 172.24.4.1 port 53942 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:48:00.597070 sshd[1220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:00.621465 systemd[1]: Created slice user-500.slice. Feb 12 20:48:00.623550 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:48:00.633220 systemd-logind[1120]: New session 1 of user core. Feb 12 20:48:00.648923 systemd[1]: Finished user-runtime-dir@500.service. 
Feb 12 20:48:00.654009 systemd[1]: Starting user@500.service... Feb 12 20:48:00.664136 (systemd)[1234]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:00.787885 systemd[1234]: Queued start job for default target default.target. Feb 12 20:48:00.788506 systemd[1234]: Reached target paths.target. Feb 12 20:48:00.788526 systemd[1234]: Reached target sockets.target. Feb 12 20:48:00.788542 systemd[1234]: Reached target timers.target. Feb 12 20:48:00.788557 systemd[1234]: Reached target basic.target. Feb 12 20:48:00.788599 systemd[1234]: Reached target default.target. Feb 12 20:48:00.788622 systemd[1234]: Startup finished in 111ms. Feb 12 20:48:00.788848 systemd[1]: Started user@500.service. Feb 12 20:48:00.791221 systemd[1]: Started session-1.scope. Feb 12 20:48:01.326325 systemd[1]: Started sshd@1-172.24.4.188:22-172.24.4.1:53946.service. Feb 12 20:48:04.204382 sshd[1243]: Accepted publickey for core from 172.24.4.1 port 53946 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:48:04.207695 sshd[1243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:04.218012 systemd-logind[1120]: New session 2 of user core. Feb 12 20:48:04.218825 systemd[1]: Started session-2.scope. Feb 12 20:48:04.987549 sshd[1243]: pam_unix(sshd:session): session closed for user core Feb 12 20:48:04.990357 systemd[1]: Started sshd@2-172.24.4.188:22-172.24.4.1:47590.service. Feb 12 20:48:04.999335 systemd[1]: sshd@1-172.24.4.188:22-172.24.4.1:53946.service: Deactivated successfully. Feb 12 20:48:05.005318 systemd-logind[1120]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:48:05.005528 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:48:05.011214 systemd-logind[1120]: Removed session 2. 
Feb 12 20:48:06.278619 sshd[1248]: Accepted publickey for core from 172.24.4.1 port 47590 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:48:06.282965 sshd[1248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:06.293227 systemd-logind[1120]: New session 3 of user core. Feb 12 20:48:06.294099 systemd[1]: Started session-3.scope. Feb 12 20:48:06.927859 sshd[1248]: pam_unix(sshd:session): session closed for user core Feb 12 20:48:06.936598 systemd[1]: Started sshd@3-172.24.4.188:22-172.24.4.1:47600.service. Feb 12 20:48:06.946861 systemd[1]: sshd@2-172.24.4.188:22-172.24.4.1:47590.service: Deactivated successfully. Feb 12 20:48:06.952471 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:48:06.953229 systemd-logind[1120]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:48:06.955793 systemd-logind[1120]: Removed session 3. Feb 12 20:48:08.161041 sshd[1255]: Accepted publickey for core from 172.24.4.1 port 47600 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:48:08.166056 sshd[1255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:08.180460 systemd-logind[1120]: New session 4 of user core. Feb 12 20:48:08.181390 systemd[1]: Started session-4.scope. Feb 12 20:48:08.837013 systemd[1]: Started sshd@4-172.24.4.188:22-172.24.4.1:47616.service. Feb 12 20:48:08.839634 sshd[1255]: pam_unix(sshd:session): session closed for user core Feb 12 20:48:08.844993 systemd-logind[1120]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:48:08.845981 systemd[1]: sshd@3-172.24.4.188:22-172.24.4.1:47600.service: Deactivated successfully. Feb 12 20:48:08.847626 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:48:08.851221 systemd-logind[1120]: Removed session 4. 
Feb 12 20:48:10.212439 sshd[1262]: Accepted publickey for core from 172.24.4.1 port 47616 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:48:10.215145 sshd[1262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:48:10.225868 systemd-logind[1120]: New session 5 of user core. Feb 12 20:48:10.226628 systemd[1]: Started session-5.scope. Feb 12 20:48:10.858460 sudo[1268]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:48:10.859683 sudo[1268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:48:11.549229 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:48:11.563273 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:48:11.563588 systemd[1]: Reached target network-online.target. Feb 12 20:48:11.564982 systemd[1]: Starting docker.service... Feb 12 20:48:11.621648 env[1285]: time="2024-02-12T20:48:11.621582419Z" level=info msg="Starting up" Feb 12 20:48:11.623685 env[1285]: time="2024-02-12T20:48:11.623646571Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:48:11.623882 env[1285]: time="2024-02-12T20:48:11.623848129Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:48:11.624040 env[1285]: time="2024-02-12T20:48:11.624002859Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:48:11.624162 env[1285]: time="2024-02-12T20:48:11.624134075Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:48:11.627261 env[1285]: time="2024-02-12T20:48:11.627219011Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:48:11.627447 env[1285]: time="2024-02-12T20:48:11.627414698Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:48:11.627605 env[1285]: 
time="2024-02-12T20:48:11.627565120Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:48:11.627777 env[1285]: time="2024-02-12T20:48:11.627705984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:48:11.751250 env[1285]: time="2024-02-12T20:48:11.751165062Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 20:48:11.751250 env[1285]: time="2024-02-12T20:48:11.751216999Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 20:48:11.751616 env[1285]: time="2024-02-12T20:48:11.751558209Z" level=info msg="Loading containers: start." Feb 12 20:48:11.979856 kernel: Initializing XFRM netlink socket Feb 12 20:48:12.028211 env[1285]: time="2024-02-12T20:48:12.028161518Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:48:12.132081 systemd-networkd[1032]: docker0: Link UP Feb 12 20:48:12.144799 env[1285]: time="2024-02-12T20:48:12.144703363Z" level=info msg="Loading containers: done." Feb 12 20:48:12.169997 env[1285]: time="2024-02-12T20:48:12.169926789Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:48:12.170725 env[1285]: time="2024-02-12T20:48:12.170691473Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:48:12.171163 env[1285]: time="2024-02-12T20:48:12.171132160Z" level=info msg="Daemon has completed initialization" Feb 12 20:48:12.200383 systemd[1]: Started docker.service. Feb 12 20:48:12.218116 env[1285]: time="2024-02-12T20:48:12.217961869Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:48:12.250046 systemd[1]: Reloading. 
Feb 12 20:48:12.356628 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2024-02-12T20:48:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:48:12.356659 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2024-02-12T20:48:12Z" level=info msg="torcx already run" Feb 12 20:48:12.431766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:48:12.431935 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:48:12.455222 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:48:12.543468 systemd[1]: Started kubelet.service. Feb 12 20:48:12.627481 kubelet[1475]: E0212 20:48:12.627407 1475 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:48:12.633457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:48:12.633622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:48:13.613198 env[1135]: time="2024-02-12T20:48:13.612839201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:48:14.398833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776763828.mount: Deactivated successfully. 
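(The kubelet exits here because `--container-runtime-endpoint` was never set, exactly as the error says. A sketch of one common fix, as a systemd drop-in; the socket path matches the `serving... address=/run/containerd/containerd.sock` line earlier in this log, and the `KUBELET_EXTRA_ARGS` variable name is the kubeadm convention — assumed, not taken from this log:)

```shell
# Point the kubelet at containerd's CRI socket via a drop-in, leaving
# the stock unit file untouched. KUBELET_EXTRA_ARGS is an assumption
# (kubeadm-style units read it); adapt to your unit's Exec line.
mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/10-cri.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
systemctl daemon-reload && systemctl restart kubelet
```
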
Feb 12 20:48:17.271288 env[1135]: time="2024-02-12T20:48:17.271209283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:17.274408 env[1135]: time="2024-02-12T20:48:17.274356716Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:17.277449 env[1135]: time="2024-02-12T20:48:17.277390536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:17.280829 env[1135]: time="2024-02-12T20:48:17.280785443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:17.283208 env[1135]: time="2024-02-12T20:48:17.283150549Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 20:48:17.303440 env[1135]: time="2024-02-12T20:48:17.303375371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:48:20.358801 env[1135]: time="2024-02-12T20:48:20.358668302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:20.361347 env[1135]: time="2024-02-12T20:48:20.361319965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:48:20.364993 env[1135]: time="2024-02-12T20:48:20.364940595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:20.371221 env[1135]: time="2024-02-12T20:48:20.371186760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:20.372230 env[1135]: time="2024-02-12T20:48:20.372191685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 20:48:20.385535 env[1135]: time="2024-02-12T20:48:20.385495196Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:48:22.455092 env[1135]: time="2024-02-12T20:48:22.454970145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:22.459715 env[1135]: time="2024-02-12T20:48:22.459642969Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:22.465850 env[1135]: time="2024-02-12T20:48:22.465798213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:22.470345 env[1135]: time="2024-02-12T20:48:22.470272966Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:22.471893 env[1135]: time="2024-02-12T20:48:22.471824796Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 20:48:22.484327 env[1135]: time="2024-02-12T20:48:22.484294794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:48:22.700445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:48:22.701105 systemd[1]: Stopped kubelet.service. Feb 12 20:48:22.704862 systemd[1]: Started kubelet.service. Feb 12 20:48:22.812838 kubelet[1513]: E0212 20:48:22.812793 1513 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:48:22.820217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:48:22.820560 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:48:24.239193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65646173.mount: Deactivated successfully. 
Feb 12 20:48:24.904960 env[1135]: time="2024-02-12T20:48:24.904920597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:24.908985 env[1135]: time="2024-02-12T20:48:24.908964591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:24.911619 env[1135]: time="2024-02-12T20:48:24.911550701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:24.913601 env[1135]: time="2024-02-12T20:48:24.913551474Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:24.914655 env[1135]: time="2024-02-12T20:48:24.914587457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 20:48:24.925740 env[1135]: time="2024-02-12T20:48:24.925676064Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:48:25.581711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280870283.mount: Deactivated successfully. 
Feb 12 20:48:25.593775 env[1135]: time="2024-02-12T20:48:25.593633878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:25.597451 env[1135]: time="2024-02-12T20:48:25.597369064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:25.601256 env[1135]: time="2024-02-12T20:48:25.601188928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:25.604983 env[1135]: time="2024-02-12T20:48:25.604892203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:25.607750 env[1135]: time="2024-02-12T20:48:25.607667238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:48:25.632021 env[1135]: time="2024-02-12T20:48:25.631945632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:48:26.721044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563120696.mount: Deactivated successfully. Feb 12 20:48:32.949596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 20:48:32.949877 systemd[1]: Stopped kubelet.service. Feb 12 20:48:32.951667 systemd[1]: Started kubelet.service. 
Feb 12 20:48:33.091675 kubelet[1534]: E0212 20:48:33.091628 1534 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:48:33.093681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:48:33.093865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:48:33.280464 env[1135]: time="2024-02-12T20:48:33.280074219Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:33.286418 env[1135]: time="2024-02-12T20:48:33.286343871Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:33.292805 env[1135]: time="2024-02-12T20:48:33.291600087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:33.295660 env[1135]: time="2024-02-12T20:48:33.295595065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:48:33.297569 env[1135]: time="2024-02-12T20:48:33.297507311Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 20:48:33.319392 env[1135]: time="2024-02-12T20:48:33.319328518Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 20:48:34.055810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3517798236.mount: Deactivated successfully. 
Feb 12 20:48:35.061370 env[1135]: time="2024-02-12T20:48:35.061254266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:35.066423 env[1135]: time="2024-02-12T20:48:35.064542864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:35.070782 env[1135]: time="2024-02-12T20:48:35.069518743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:35.072912 env[1135]: time="2024-02-12T20:48:35.072874192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:35.074096 env[1135]: time="2024-02-12T20:48:35.074072650Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 12 20:48:38.322263 systemd[1]: Stopped kubelet.service.
Feb 12 20:48:38.347610 systemd[1]: Reloading.
Feb 12 20:48:38.428876 /usr/lib/systemd/system-generators/torcx-generator[1627]: time="2024-02-12T20:48:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:48:38.428906 /usr/lib/systemd/system-generators/torcx-generator[1627]: time="2024-02-12T20:48:38Z" level=info msg="torcx already run"
Feb 12 20:48:38.583936 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:48:38.583956 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:48:38.606934 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:48:38.703454 systemd[1]: Started kubelet.service.
Feb 12 20:48:38.801871 kubelet[1680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:48:38.802237 kubelet[1680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:48:38.802361 kubelet[1680]: I0212 20:48:38.802331 1680 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 20:48:38.803784 kubelet[1680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 20:48:38.803843 kubelet[1680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 20:48:39.497638 kubelet[1680]: I0212 20:48:39.497589 1680 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 20:48:39.497964 kubelet[1680]: I0212 20:48:39.497935 1680 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 20:48:39.498582 kubelet[1680]: I0212 20:48:39.498551 1680 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 20:48:39.508040 kubelet[1680]: E0212 20:48:39.507994 1680 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.508193 kubelet[1680]: I0212 20:48:39.508122 1680 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 20:48:39.515343 kubelet[1680]: I0212 20:48:39.515308 1680 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 20:48:39.516305 kubelet[1680]: I0212 20:48:39.516274 1680 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 20:48:39.516602 kubelet[1680]: I0212 20:48:39.516573 1680 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 20:48:39.516949 kubelet[1680]: I0212 20:48:39.516920 1680 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 20:48:39.517123 kubelet[1680]: I0212 20:48:39.517100 1680 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 20:48:39.517433 kubelet[1680]: I0212 20:48:39.517405 1680 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:48:39.521580 kubelet[1680]: I0212 20:48:39.521475 1680 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 20:48:39.521580 kubelet[1680]: I0212 20:48:39.521497 1680 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 20:48:39.521580 kubelet[1680]: I0212 20:48:39.521520 1680 kubelet.go:297] "Adding apiserver pod source"
Feb 12 20:48:39.521580 kubelet[1680]: I0212 20:48:39.521538 1680 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 20:48:39.524578 kubelet[1680]: I0212 20:48:39.524548 1680 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 20:48:39.525257 kubelet[1680]: W0212 20:48:39.525228 1680 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 12 20:48:39.526291 kubelet[1680]: I0212 20:48:39.526260 1680 server.go:1186] "Started kubelet"
Feb 12 20:48:39.526990 kubelet[1680]: W0212 20:48:39.526671 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f-bcfc1a2c45.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.527202 kubelet[1680]: E0212 20:48:39.527177 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f-bcfc1a2c45.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.527413 kubelet[1680]: I0212 20:48:39.527387 1680 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:48:39.529331 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 12 20:48:39.529455 kubelet[1680]: I0212 20:48:39.529411 1680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 20:48:39.530251 kubelet[1680]: I0212 20:48:39.530219 1680 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 20:48:39.532984 kubelet[1680]: W0212 20:48:39.532918 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.533177 kubelet[1680]: E0212 20:48:39.533154 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.534243 kubelet[1680]: I0212 20:48:39.534201 1680 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 20:48:39.535333 kubelet[1680]: I0212 20:48:39.535299 1680 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:48:39.539685 kubelet[1680]: E0212 20:48:39.539495 1680 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a681a2e96", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 526215318, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 526215318, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.188:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.188:6443: connect: connection refused'(may retry after sleeping)
Feb 12 20:48:39.540519 kubelet[1680]: W0212 20:48:39.540451 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.540833 kubelet[1680]: E0212 20:48:39.540803 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.541338 kubelet[1680]: E0212 20:48:39.541298 1680 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f-bcfc1a2c45.novalocal?timeout=10s": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.545023 kubelet[1680]: E0212 20:48:39.544988 1680 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 20:48:39.545241 kubelet[1680]: E0212 20:48:39.545217 1680 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:48:39.599039 kubelet[1680]: I0212 20:48:39.599011 1680 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:48:39.616464 kubelet[1680]: I0212 20:48:39.616447 1680 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:48:39.616602 kubelet[1680]: I0212 20:48:39.616592 1680 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:48:39.616672 kubelet[1680]: I0212 20:48:39.616662 1680 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:48:39.626455 kubelet[1680]: I0212 20:48:39.626440 1680 policy_none.go:49] "None policy: Start"
Feb 12 20:48:39.627000 kubelet[1680]: I0212 20:48:39.626989 1680 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:48:39.627079 kubelet[1680]: I0212 20:48:39.627070 1680 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:48:39.628760 kubelet[1680]: I0212 20:48:39.628707 1680 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:48:39.628760 kubelet[1680]: I0212 20:48:39.628759 1680 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 20:48:39.628853 kubelet[1680]: I0212 20:48:39.628778 1680 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 20:48:39.628853 kubelet[1680]: E0212 20:48:39.628846 1680 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 20:48:39.631813 kubelet[1680]: I0212 20:48:39.631797 1680 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:48:39.632651 kubelet[1680]: I0212 20:48:39.632637 1680 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:48:39.636318 kubelet[1680]: E0212 20:48:39.636294 1680 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" not found"
Feb 12 20:48:39.637391 kubelet[1680]: W0212 20:48:39.637357 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.637436 kubelet[1680]: E0212 20:48:39.637395 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.638957 kubelet[1680]: I0212 20:48:39.638932 1680 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.639480 kubelet[1680]: E0212 20:48:39.639457 1680 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.729885 kubelet[1680]: I0212 20:48:39.729861 1680 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:39.731632 kubelet[1680]: I0212 20:48:39.731603 1680 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:39.733316 kubelet[1680]: I0212 20:48:39.733291 1680 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:39.734873 kubelet[1680]: I0212 20:48:39.734848 1680 status_manager.go:698] "Failed to get status for pod" podUID=4c1b8497ad18a5bd8d855b6b4e14d78e pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp 172.24.4.188:6443: connect: connection refused"
Feb 12 20:48:39.740097 kubelet[1680]: I0212 20:48:39.740070 1680 status_manager.go:698] "Failed to get status for pod" podUID=b633ab8ba4ffba251612527810f06ef0 pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp 172.24.4.188:6443: connect: connection refused"
Feb 12 20:48:39.740952 kubelet[1680]: I0212 20:48:39.740929 1680 status_manager.go:698] "Failed to get status for pod" podUID=0641849d2231f70e4b6521321e35584d pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp 172.24.4.188:6443: connect: connection refused"
Feb 12 20:48:39.753211 kubelet[1680]: E0212 20:48:39.749594 1680 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f-bcfc1a2c45.novalocal?timeout=10s": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:39.837316 kubelet[1680]: I0212 20:48:39.837237 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838038 kubelet[1680]: I0212 20:48:39.837453 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838038 kubelet[1680]: I0212 20:48:39.837590 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838038 kubelet[1680]: I0212 20:48:39.837871 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838038 kubelet[1680]: I0212 20:48:39.837967 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0641849d2231f70e4b6521321e35584d-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"0641849d2231f70e4b6521321e35584d\") " pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838317 kubelet[1680]: I0212 20:48:39.838114 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838317 kubelet[1680]: I0212 20:48:39.838303 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838493 kubelet[1680]: I0212 20:48:39.838401 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.838649 kubelet[1680]: I0212 20:48:39.838548 1680 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.843191 kubelet[1680]: I0212 20:48:39.843122 1680 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:39.843797 kubelet[1680]: E0212 20:48:39.843717 1680 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:40.051808 env[1135]: time="2024-02-12T20:48:40.050095996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:0641849d2231f70e4b6521321e35584d,Namespace:kube-system,Attempt:0,}"
Feb 12 20:48:40.051808 env[1135]: time="2024-02-12T20:48:40.050266006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:4c1b8497ad18a5bd8d855b6b4e14d78e,Namespace:kube-system,Attempt:0,}"
Feb 12 20:48:40.052632 env[1135]: time="2024-02-12T20:48:40.052413165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:b633ab8ba4ffba251612527810f06ef0,Namespace:kube-system,Attempt:0,}"
Feb 12 20:48:40.150673 kubelet[1680]: E0212 20:48:40.150594 1680 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f-bcfc1a2c45.novalocal?timeout=10s": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.247712 kubelet[1680]: I0212 20:48:40.247570 1680 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:40.248369 kubelet[1680]: E0212 20:48:40.248328 1680 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:40.379807 kubelet[1680]: W0212 20:48:40.379483 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f-bcfc1a2c45.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.379807 kubelet[1680]: E0212 20:48:40.379596 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f-bcfc1a2c45.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.702603 kubelet[1680]: W0212 20:48:40.702484 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.702874 kubelet[1680]: E0212 20:48:40.702620 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.711398 kubelet[1680]: W0212 20:48:40.711316 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.711398 kubelet[1680]: E0212 20:48:40.711387 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.731858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040102555.mount: Deactivated successfully.
Feb 12 20:48:40.744105 env[1135]: time="2024-02-12T20:48:40.744033397Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.746676 env[1135]: time="2024-02-12T20:48:40.746627113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.753608 env[1135]: time="2024-02-12T20:48:40.753499368Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.758235 env[1135]: time="2024-02-12T20:48:40.758158174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.764438 env[1135]: time="2024-02-12T20:48:40.764382370Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.767169 env[1135]: time="2024-02-12T20:48:40.767081712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.773293 env[1135]: time="2024-02-12T20:48:40.773241674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.775793 env[1135]: time="2024-02-12T20:48:40.775691851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.781729 env[1135]: time="2024-02-12T20:48:40.781627175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.783571 env[1135]: time="2024-02-12T20:48:40.783498199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.793385 env[1135]: time="2024-02-12T20:48:40.793284431Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.796178 env[1135]: time="2024-02-12T20:48:40.796117233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:48:40.822319 kubelet[1680]: W0212 20:48:40.821227 1680 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.822319 kubelet[1680]: E0212 20:48:40.821343 1680 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.188:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.864892 env[1135]: time="2024-02-12T20:48:40.864468996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:48:40.864892 env[1135]: time="2024-02-12T20:48:40.864509625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:48:40.864892 env[1135]: time="2024-02-12T20:48:40.864533472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:48:40.865111 env[1135]: time="2024-02-12T20:48:40.864930132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05278724d95697abafc06084fc936234164365ddff5a6bc14b0ae1ffe6e6d3be pid=1756 runtime=io.containerd.runc.v2
Feb 12 20:48:40.871095 env[1135]: time="2024-02-12T20:48:40.871021399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:48:40.871262 env[1135]: time="2024-02-12T20:48:40.871216969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:48:40.871262 env[1135]: time="2024-02-12T20:48:40.871239463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:48:40.871567 env[1135]: time="2024-02-12T20:48:40.871517212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbe5d3095a6fcfdc6275542a4bd3d426beef1e3108f773629877260c811a24c7 pid=1775 runtime=io.containerd.runc.v2
Feb 12 20:48:40.874084 env[1135]: time="2024-02-12T20:48:40.874011967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:48:40.874165 env[1135]: time="2024-02-12T20:48:40.874090529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:48:40.874165 env[1135]: time="2024-02-12T20:48:40.874120427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:48:40.874636 env[1135]: time="2024-02-12T20:48:40.874592012Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/67659881fce609f720c8e8a2b0a0cbe7848e6046abc1f172e0d432b92a8a4c31 pid=1764 runtime=io.containerd.runc.v2
Feb 12 20:48:40.951333 kubelet[1680]: E0212 20:48:40.951270 1680 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f-bcfc1a2c45.novalocal?timeout=10s": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:40.982317 env[1135]: time="2024-02-12T20:48:40.982146900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:b633ab8ba4ffba251612527810f06ef0,Namespace:kube-system,Attempt:0,} returns sandbox id \"05278724d95697abafc06084fc936234164365ddff5a6bc14b0ae1ffe6e6d3be\""
Feb 12 20:48:40.994987 env[1135]: time="2024-02-12T20:48:40.994950663Z" level=info msg="CreateContainer within sandbox \"05278724d95697abafc06084fc936234164365ddff5a6bc14b0ae1ffe6e6d3be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 12 20:48:40.995346 env[1135]: time="2024-02-12T20:48:40.995309589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:0641849d2231f70e4b6521321e35584d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbe5d3095a6fcfdc6275542a4bd3d426beef1e3108f773629877260c811a24c7\""
Feb 12 20:48:40.997942 env[1135]: time="2024-02-12T20:48:40.997903427Z" level=info msg="CreateContainer within sandbox \"fbe5d3095a6fcfdc6275542a4bd3d426beef1e3108f773629877260c811a24c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 12 20:48:41.000964 env[1135]: time="2024-02-12T20:48:41.000830089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal,Uid:4c1b8497ad18a5bd8d855b6b4e14d78e,Namespace:kube-system,Attempt:0,} returns sandbox id \"67659881fce609f720c8e8a2b0a0cbe7848e6046abc1f172e0d432b92a8a4c31\""
Feb 12 20:48:41.003565 env[1135]: time="2024-02-12T20:48:41.003540665Z" level=info msg="CreateContainer within sandbox \"67659881fce609f720c8e8a2b0a0cbe7848e6046abc1f172e0d432b92a8a4c31\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 12 20:48:41.051887 kubelet[1680]: I0212 20:48:41.051411 1680 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:41.051887 kubelet[1680]: E0212 20:48:41.051870 1680 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.188:6443/api/v1/nodes\": dial tcp 172.24.4.188:6443: connect: connection refused" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:41.224586 env[1135]: time="2024-02-12T20:48:41.224464786Z" level=info msg="CreateContainer within sandbox \"67659881fce609f720c8e8a2b0a0cbe7848e6046abc1f172e0d432b92a8a4c31\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0c6865f254c5fde220522e4eb8e9c551813afb6b99b848afa200c1f4876adbdc\""
Feb 12 20:48:41.226224 env[1135]: time="2024-02-12T20:48:41.225978549Z" level=info msg="StartContainer for \"0c6865f254c5fde220522e4eb8e9c551813afb6b99b848afa200c1f4876adbdc\""
Feb 12 20:48:41.249854 env[1135]: time="2024-02-12T20:48:41.249602413Z" level=info msg="CreateContainer within sandbox \"fbe5d3095a6fcfdc6275542a4bd3d426beef1e3108f773629877260c811a24c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a8bdb759395d9725d8cfb55bef62d98b782cfcd95cd33226341b8716cac088c8\""
Feb 12 20:48:41.250540 env[1135]: time="2024-02-12T20:48:41.250344171Z" level=info msg="CreateContainer within sandbox \"05278724d95697abafc06084fc936234164365ddff5a6bc14b0ae1ffe6e6d3be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"74b76c7975268c1be79bec971758e4195c164856daa6e73537399a310e36c04e\""
Feb 12 20:48:41.251939 env[1135]: time="2024-02-12T20:48:41.251642947Z" level=info msg="StartContainer for \"a8bdb759395d9725d8cfb55bef62d98b782cfcd95cd33226341b8716cac088c8\""
Feb 12 20:48:41.252515 env[1135]: time="2024-02-12T20:48:41.252460320Z" level=info msg="StartContainer for \"74b76c7975268c1be79bec971758e4195c164856daa6e73537399a310e36c04e\""
Feb 12 20:48:41.382238 env[1135]: time="2024-02-12T20:48:41.382172886Z" level=info msg="StartContainer for \"0c6865f254c5fde220522e4eb8e9c551813afb6b99b848afa200c1f4876adbdc\" returns successfully"
Feb 12 20:48:41.385712 update_engine[1121]: I0212 20:48:41.384841 1121 update_attempter.cc:509] Updating boot flags...
Feb 12 20:48:41.479437 env[1135]: time="2024-02-12T20:48:41.479393936Z" level=info msg="StartContainer for \"74b76c7975268c1be79bec971758e4195c164856daa6e73537399a310e36c04e\" returns successfully"
Feb 12 20:48:41.480866 env[1135]: time="2024-02-12T20:48:41.480840869Z" level=info msg="StartContainer for \"a8bdb759395d9725d8cfb55bef62d98b782cfcd95cd33226341b8716cac088c8\" returns successfully"
Feb 12 20:48:41.530445 kubelet[1680]: E0212 20:48:41.530359 1680 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.188:6443: connect: connection refused
Feb 12 20:48:41.646081 kubelet[1680]: I0212 20:48:41.646041 1680 status_manager.go:698] "Failed to get status for pod" podUID=b633ab8ba4ffba251612527810f06ef0 pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp 172.24.4.188:6443: connect: connection refused"
Feb 12 20:48:41.648184 kubelet[1680]: I0212 20:48:41.648166 1680 status_manager.go:698] "Failed to get status for pod" podUID=0641849d2231f70e4b6521321e35584d pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp 172.24.4.188:6443: connect: connection refused"
Feb 12 20:48:41.722760 kubelet[1680]: I0212 20:48:41.722742 1680 status_manager.go:698] "Failed to get status for pod" podUID=4c1b8497ad18a5bd8d855b6b4e14d78e pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal" err="Get \"https://172.24.4.188:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\": dial tcp
172.24.4.188:6443: connect: connection refused" Feb 12 20:48:42.551556 kubelet[1680]: E0212 20:48:42.551524 1680 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.24.4.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f-bcfc1a2c45.novalocal?timeout=10s": dial tcp 172.24.4.188:6443: connect: connection refused Feb 12 20:48:42.654641 kubelet[1680]: I0212 20:48:42.654594 1680 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal" Feb 12 20:48:45.500754 kubelet[1680]: I0212 20:48:45.500686 1680 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal" Feb 12 20:48:45.531547 kubelet[1680]: I0212 20:48:45.531507 1680 apiserver.go:52] "Watching apiserver" Feb 12 20:48:45.580015 kubelet[1680]: E0212 20:48:45.579908 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a681a2e96", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 526215318, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 526215318, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:48:45.636111 kubelet[1680]: I0212 20:48:45.636086 1680 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:48:45.642047 kubelet[1680]: E0212 20:48:45.641944 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a693bafe4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 545188324, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 545188324, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:45.682829 kubelet[1680]: I0212 20:48:45.682783 1680 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:48:45.696093 kubelet[1680]: E0212 20:48:45.695991 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d74201a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 615995930, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 615995930, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:45.752305 kubelet[1680]: E0212 20:48:45.752034 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d7432c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616000709, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616000709, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:45.811354 kubelet[1680]: E0212 20:48:45.811060 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d744353", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616004947, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616004947, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:45.871304 kubelet[1680]: E0212 20:48:45.871066 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6e825103", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 633703171, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 633703171, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:45.935873 kubelet[1680]: E0212 20:48:45.935667 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d74201a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 615995930, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 638900406, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:46.002650 kubelet[1680]: E0212 20:48:46.002354 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d7432c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616000709, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 638905766, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:46.064408 kubelet[1680]: E0212 20:48:46.064247 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d744353", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616004947, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 638908401, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:46.238020 kubelet[1680]: E0212 20:48:46.237302 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d74201a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 615995930, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 731494384, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 12 20:48:46.634567 kubelet[1680]: E0212 20:48:46.634415 1680 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal.17b3389a6d7432c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f-bcfc1a2c45.novalocal", UID:"ci-3510-3-2-f-bcfc1a2c45.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-f-bcfc1a2c45.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f-bcfc1a2c45.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 616000709, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 48, 39, 731504814, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 12 20:48:48.478421 systemd[1]: Reloading. 
Feb 12 20:48:48.604997 /usr/lib/systemd/system-generators/torcx-generator[2027]: time="2024-02-12T20:48:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:48:48.605360 /usr/lib/systemd/system-generators/torcx-generator[2027]: time="2024-02-12T20:48:48Z" level=info msg="torcx already run" Feb 12 20:48:48.696027 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:48:48.696184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:48:48.719071 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:48:48.825600 systemd[1]: Stopping kubelet.service... Feb 12 20:48:48.845403 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:48:48.845755 systemd[1]: Stopped kubelet.service. Feb 12 20:48:48.847873 systemd[1]: Started kubelet.service. Feb 12 20:48:48.933699 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:48:48.934159 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:48:48.934373 kubelet[2075]: I0212 20:48:48.934322 2075 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:48:48.936208 kubelet[2075]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:48:48.936502 kubelet[2075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:48:48.942456 kubelet[2075]: I0212 20:48:48.942434 2075 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:48:48.942771 kubelet[2075]: I0212 20:48:48.942756 2075 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:48:48.943488 kubelet[2075]: I0212 20:48:48.943475 2075 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:48:48.948021 kubelet[2075]: I0212 20:48:48.947457 2075 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 20:48:48.949809 sudo[2086]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 20:48:48.950014 sudo[2086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 20:48:48.951258 kubelet[2075]: I0212 20:48:48.951237 2075 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:48:48.960307 kubelet[2075]: I0212 20:48:48.960285 2075 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:48:48.960979 kubelet[2075]: I0212 20:48:48.960967 2075 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:48:48.961138 kubelet[2075]: I0212 20:48:48.961126 2075 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:48:48.961280 kubelet[2075]: I0212 20:48:48.961268 2075 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:48:48.961350 kubelet[2075]: I0212 20:48:48.961340 2075 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:48:48.961453 kubelet[2075]: I0212 20:48:48.961442 2075 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 20:48:48.967298 kubelet[2075]: I0212 20:48:48.967279 2075 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:48:48.967433 kubelet[2075]: I0212 20:48:48.967423 2075 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:48:48.967509 kubelet[2075]: I0212 20:48:48.967499 2075 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:48:48.967579 kubelet[2075]: I0212 20:48:48.967570 2075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:48:48.981050 kubelet[2075]: I0212 20:48:48.981020 2075 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:48:48.986185 kubelet[2075]: I0212 20:48:48.986170 2075 server.go:1186] "Started kubelet" Feb 12 20:48:48.990916 kubelet[2075]: I0212 20:48:48.990896 2075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:48:48.992297 kubelet[2075]: E0212 20:48:48.992282 2075 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:48:48.992401 kubelet[2075]: E0212 20:48:48.992390 2075 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 20:48:48.993957 kubelet[2075]: I0212 20:48:48.993943 2075 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 20:48:48.994591 kubelet[2075]: I0212 20:48:48.994578 2075 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 20:48:48.999400 kubelet[2075]: I0212 20:48:48.999383 2075 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 20:48:49.002275 kubelet[2075]: I0212 20:48:49.002255 2075 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 20:48:49.029078 kubelet[2075]: I0212 20:48:49.029059 2075 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 20:48:49.078906 kubelet[2075]: I0212 20:48:49.077651 2075 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 20:48:49.079049 kubelet[2075]: I0212 20:48:49.079037 2075 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 20:48:49.079157 kubelet[2075]: I0212 20:48:49.079137 2075 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 20:48:49.079290 kubelet[2075]: E0212 20:48:49.079279 2075 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 20:48:49.111969 kubelet[2075]: I0212 20:48:49.111952 2075 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.122120 kubelet[2075]: I0212 20:48:49.122087 2075 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.122464 kubelet[2075]: I0212 20:48:49.122428 2075 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.130104 kubelet[2075]: I0212 20:48:49.130086 2075 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 20:48:49.130234 kubelet[2075]: I0212 20:48:49.130223 2075 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 20:48:49.130298 kubelet[2075]: I0212 20:48:49.130289 2075 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 20:48:49.130466 kubelet[2075]: I0212 20:48:49.130454 2075 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 20:48:49.130545 kubelet[2075]: I0212 20:48:49.130536 2075 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 20:48:49.130606 kubelet[2075]: I0212 20:48:49.130596 2075 policy_none.go:49] "None policy: Start"
Feb 12 20:48:49.131247 kubelet[2075]: I0212 20:48:49.131235 2075 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 20:48:49.131331 kubelet[2075]: I0212 20:48:49.131321 2075 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 20:48:49.131498 kubelet[2075]: I0212 20:48:49.131486 2075 state_mem.go:75] "Updated machine memory state"
Feb 12 20:48:49.133688 kubelet[2075]: I0212 20:48:49.133676 2075 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 20:48:49.137313 kubelet[2075]: I0212 20:48:49.137274 2075 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 20:48:49.180390 kubelet[2075]: I0212 20:48:49.180370 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:49.180581 kubelet[2075]: I0212 20:48:49.180569 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:49.180681 kubelet[2075]: I0212 20:48:49.180670 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:48:49.205238 kubelet[2075]: I0212 20:48:49.205213 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.205435 kubelet[2075]: I0212 20:48:49.205419 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.205528 kubelet[2075]: I0212 20:48:49.205518 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.205621 kubelet[2075]: I0212 20:48:49.205611 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.205714 kubelet[2075]: I0212 20:48:49.205705 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.205961 kubelet[2075]: I0212 20:48:49.205950 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.206060 kubelet[2075]: I0212 20:48:49.206051 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0641849d2231f70e4b6521321e35584d-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"0641849d2231f70e4b6521321e35584d\") " pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.206150 kubelet[2075]: I0212 20:48:49.206141 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c1b8497ad18a5bd8d855b6b4e14d78e-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"4c1b8497ad18a5bd8d855b6b4e14d78e\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.206238 kubelet[2075]: I0212 20:48:49.206228 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b633ab8ba4ffba251612527810f06ef0-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" (UID: \"b633ab8ba4ffba251612527810f06ef0\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:49.615805 sudo[2086]: pam_unix(sudo:session): session closed for user root
Feb 12 20:48:49.997024 kubelet[2075]: I0212 20:48:49.996982 2075 apiserver.go:52] "Watching apiserver"
Feb 12 20:48:50.003235 kubelet[2075]: I0212 20:48:50.003200 2075 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 20:48:50.012963 kubelet[2075]: I0212 20:48:50.012935 2075 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 20:48:50.382954 kubelet[2075]: E0212 20:48:50.382299 2075 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:50.579201 kubelet[2075]: E0212 20:48:50.579147 2075 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:50.776830 kubelet[2075]: E0212 20:48:50.776693 2075 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal"
Feb 12 20:48:51.383897 kubelet[2075]: I0212 20:48:51.383821 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-f-bcfc1a2c45.novalocal" podStartSLOduration=2.38269239 pod.CreationTimestamp="2024-02-12 20:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:48:51.004971013 +0000 UTC m=+2.146558827" watchObservedRunningTime="2024-02-12 20:48:51.38269239 +0000 UTC m=+2.524280204"
Feb 12 20:48:51.776551 kubelet[2075]: I0212 20:48:51.776469 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-f-bcfc1a2c45.novalocal" podStartSLOduration=2.7764070910000003 pod.CreationTimestamp="2024-02-12 20:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:48:51.384922154 +0000 UTC m=+2.526509968" watchObservedRunningTime="2024-02-12 20:48:51.776407091 +0000 UTC m=+2.917994855"
Feb 12 20:48:51.821196 kubelet[2075]: I0212 20:48:51.776592 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-f-bcfc1a2c45.novalocal" podStartSLOduration=2.776570033 pod.CreationTimestamp="2024-02-12 20:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:48:51.775927857 +0000 UTC m=+2.917515632" watchObservedRunningTime="2024-02-12 20:48:51.776570033 +0000 UTC m=+2.918157797"
Feb 12 20:48:52.288047 sudo[1268]: pam_unix(sudo:session): session closed for user root
Feb 12 20:48:52.611904 sshd[1262]: pam_unix(sshd:session): session closed for user core
Feb 12 20:48:52.617807 systemd[1]: sshd@4-172.24.4.188:22-172.24.4.1:47616.service: Deactivated successfully.
Feb 12 20:48:52.620716 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:48:52.620971 systemd-logind[1120]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:48:52.623966 systemd-logind[1120]: Removed session 5.
Feb 12 20:49:00.743001 kubelet[2075]: I0212 20:49:00.742963 2075 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 20:49:00.743899 env[1135]: time="2024-02-12T20:49:00.743872519Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 20:49:00.744371 kubelet[2075]: I0212 20:49:00.744354 2075 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 20:49:01.432133 kubelet[2075]: I0212 20:49:01.432070 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:49:01.459176 kubelet[2075]: I0212 20:49:01.459134 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:49:01.477410 kubelet[2075]: W0212 20:49:01.477382 2075 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.477605 kubelet[2075]: E0212 20:49:01.477576 2075 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.477726 kubelet[2075]: W0212 20:49:01.477712 2075 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.477830 kubelet[2075]: E0212 20:49:01.477818 2075 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.477930 kubelet[2075]: W0212 20:49:01.477917 2075 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.478005 kubelet[2075]: E0212 20:49:01.477994 2075 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-f-bcfc1a2c45.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f-bcfc1a2c45.novalocal' and this object
Feb 12 20:49:01.498322 kubelet[2075]: I0212 20:49:01.498296 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cni-path\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.498562 kubelet[2075]: I0212 20:49:01.498551 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-xtables-lock\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.498701 kubelet[2075]: I0212 20:49:01.498690 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hr4\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-kube-api-access-74hr4\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.498831 kubelet[2075]: I0212 20:49:01.498820 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2c7218a-4f43-46fa-b8b0-c40c642f5952-kube-proxy\") pod \"kube-proxy-w8kpl\" (UID: \"f2c7218a-4f43-46fa-b8b0-c40c642f5952\") " pod="kube-system/kube-proxy-w8kpl"
Feb 12 20:49:01.498944 kubelet[2075]: I0212 20:49:01.498931 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-hostproc\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499046 kubelet[2075]: I0212 20:49:01.499032 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-bpf-maps\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499145 kubelet[2075]: I0212 20:49:01.499135 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-cgroup\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499228 kubelet[2075]: I0212 20:49:01.499219 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-net\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499315 kubelet[2075]: I0212 20:49:01.499306 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499425 kubelet[2075]: I0212 20:49:01.499411 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499557 kubelet[2075]: I0212 20:49:01.499544 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-etc-cni-netd\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499662 kubelet[2075]: I0212 20:49:01.499651 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clh65\" (UniqueName: \"kubernetes.io/projected/f2c7218a-4f43-46fa-b8b0-c40c642f5952-kube-api-access-clh65\") pod \"kube-proxy-w8kpl\" (UID: \"f2c7218a-4f43-46fa-b8b0-c40c642f5952\") " pod="kube-system/kube-proxy-w8kpl"
Feb 12 20:49:01.499791 kubelet[2075]: I0212 20:49:01.499780 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-lib-modules\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499899 kubelet[2075]: I0212 20:49:01.499888 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.499997 kubelet[2075]: I0212 20:49:01.499987 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-kernel\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.500096 kubelet[2075]: I0212 20:49:01.500086 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2c7218a-4f43-46fa-b8b0-c40c642f5952-lib-modules\") pod \"kube-proxy-w8kpl\" (UID: \"f2c7218a-4f43-46fa-b8b0-c40c642f5952\") " pod="kube-system/kube-proxy-w8kpl"
Feb 12 20:49:01.500256 kubelet[2075]: I0212 20:49:01.500226 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-run\") pod \"cilium-xdzp9\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") " pod="kube-system/cilium-xdzp9"
Feb 12 20:49:01.500316 kubelet[2075]: I0212 20:49:01.500280 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2c7218a-4f43-46fa-b8b0-c40c642f5952-xtables-lock\") pod \"kube-proxy-w8kpl\" (UID: \"f2c7218a-4f43-46fa-b8b0-c40c642f5952\") " pod="kube-system/kube-proxy-w8kpl"
Feb 12 20:49:01.615689 kubelet[2075]: I0212 20:49:01.615605 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:49:01.701628 kubelet[2075]: I0212 20:49:01.701594 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-6grqq\" (UID: \"db658cc7-262e-4298-a5ff-38d7be249a75\") " pod="kube-system/cilium-operator-f59cbd8c6-6grqq"
Feb 12 20:49:01.701804 kubelet[2075]: I0212 20:49:01.701648 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpd9h\" (UniqueName: \"kubernetes.io/projected/db658cc7-262e-4298-a5ff-38d7be249a75-kube-api-access-rpd9h\") pod \"cilium-operator-f59cbd8c6-6grqq\" (UID: \"db658cc7-262e-4298-a5ff-38d7be249a75\") " pod="kube-system/cilium-operator-f59cbd8c6-6grqq"
Feb 12 20:49:01.743768 env[1135]: time="2024-02-12T20:49:01.743215359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8kpl,Uid:f2c7218a-4f43-46fa-b8b0-c40c642f5952,Namespace:kube-system,Attempt:0,}"
Feb 12 20:49:01.769279 env[1135]: time="2024-02-12T20:49:01.769195880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:49:01.769655 env[1135]: time="2024-02-12T20:49:01.769278356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:49:01.769655 env[1135]: time="2024-02-12T20:49:01.769305287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:49:01.769655 env[1135]: time="2024-02-12T20:49:01.769536194Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02909b42574b61e7a8ed37addc4c97a99a4ec4b7802a5b18edef09ce3e2408bc pid=2178 runtime=io.containerd.runc.v2
Feb 12 20:49:01.829102 env[1135]: time="2024-02-12T20:49:01.829045571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w8kpl,Uid:f2c7218a-4f43-46fa-b8b0-c40c642f5952,Namespace:kube-system,Attempt:0,} returns sandbox id \"02909b42574b61e7a8ed37addc4c97a99a4ec4b7802a5b18edef09ce3e2408bc\""
Feb 12 20:49:01.835326 env[1135]: time="2024-02-12T20:49:01.835276503Z" level=info msg="CreateContainer within sandbox \"02909b42574b61e7a8ed37addc4c97a99a4ec4b7802a5b18edef09ce3e2408bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 20:49:01.856778 env[1135]: time="2024-02-12T20:49:01.856720348Z" level=info msg="CreateContainer within sandbox \"02909b42574b61e7a8ed37addc4c97a99a4ec4b7802a5b18edef09ce3e2408bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef42b4823cb0b7c30edea0e46a383b9b543974798050b346a772d947697d50b4\""
Feb 12 20:49:01.858418 env[1135]: time="2024-02-12T20:49:01.858033433Z" level=info msg="StartContainer for \"ef42b4823cb0b7c30edea0e46a383b9b543974798050b346a772d947697d50b4\""
Feb 12 20:49:01.920005 env[1135]: time="2024-02-12T20:49:01.919959114Z" level=info msg="StartContainer for \"ef42b4823cb0b7c30edea0e46a383b9b543974798050b346a772d947697d50b4\" returns successfully"
Feb 12 20:49:02.607890 kubelet[2075]: E0212 20:49:02.607807 2075 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:49:02.608400 kubelet[2075]: E0212 20:49:02.608027 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path podName:2520c3fc-ab18-42cc-8378-e8af564097f6 nodeName:}" failed. No retries permitted until 2024-02-12 20:49:03.107971651 +0000 UTC m=+14.249559455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path") pod "cilium-xdzp9" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:49:02.608850 kubelet[2075]: E0212 20:49:02.608817 2075 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 12 20:49:02.608902 kubelet[2075]: E0212 20:49:02.608858 2075 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xdzp9: failed to sync secret cache: timed out waiting for the condition
Feb 12 20:49:02.608992 kubelet[2075]: E0212 20:49:02.608975 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls podName:2520c3fc-ab18-42cc-8378-e8af564097f6 nodeName:}" failed. No retries permitted until 2024-02-12 20:49:03.108947347 +0000 UTC m=+14.250535151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls") pod "cilium-xdzp9" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6") : failed to sync secret cache: timed out waiting for the condition
Feb 12 20:49:02.609563 kubelet[2075]: E0212 20:49:02.609536 2075 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 12 20:49:02.609690 kubelet[2075]: E0212 20:49:02.609656 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets podName:2520c3fc-ab18-42cc-8378-e8af564097f6 nodeName:}" failed. No retries permitted until 2024-02-12 20:49:03.10962578 +0000 UTC m=+14.251213584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets") pod "cilium-xdzp9" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6") : failed to sync secret cache: timed out waiting for the condition
Feb 12 20:49:02.803042 kubelet[2075]: E0212 20:49:02.802999 2075 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:49:02.803258 kubelet[2075]: E0212 20:49:02.803245 2075 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path podName:db658cc7-262e-4298-a5ff-38d7be249a75 nodeName:}" failed. No retries permitted until 2024-02-12 20:49:03.303226971 +0000 UTC m=+14.444814735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path") pod "cilium-operator-f59cbd8c6-6grqq" (UID: "db658cc7-262e-4298-a5ff-38d7be249a75") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:49:02.847974 kubelet[2075]: I0212 20:49:02.847899 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w8kpl" podStartSLOduration=1.8477088 pod.CreationTimestamp="2024-02-12 20:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:49:02.846911753 +0000 UTC m=+13.988499567" watchObservedRunningTime="2024-02-12 20:49:02.8477088 +0000 UTC m=+13.989296614"
Feb 12 20:49:03.272245 env[1135]: time="2024-02-12T20:49:03.267796420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdzp9,Uid:2520c3fc-ab18-42cc-8378-e8af564097f6,Namespace:kube-system,Attempt:0,}"
Feb 12 20:49:03.318668 env[1135]: time="2024-02-12T20:49:03.318230086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:49:03.318668 env[1135]: time="2024-02-12T20:49:03.318325907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:49:03.318668 env[1135]: time="2024-02-12T20:49:03.318359400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:49:03.319076 env[1135]: time="2024-02-12T20:49:03.318729611Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741 pid=2364 runtime=io.containerd.runc.v2
Feb 12 20:49:03.371158 env[1135]: time="2024-02-12T20:49:03.371105298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xdzp9,Uid:2520c3fc-ab18-42cc-8378-e8af564097f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\""
Feb 12 20:49:03.373970 env[1135]: time="2024-02-12T20:49:03.373861648Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 20:49:03.424618 env[1135]: time="2024-02-12T20:49:03.424544796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6grqq,Uid:db658cc7-262e-4298-a5ff-38d7be249a75,Namespace:kube-system,Attempt:0,}"
Feb 12 20:49:03.460468 env[1135]: time="2024-02-12T20:49:03.460289512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:49:03.460468 env[1135]: time="2024-02-12T20:49:03.460381355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:49:03.460468 env[1135]: time="2024-02-12T20:49:03.460414489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:49:03.462813 env[1135]: time="2024-02-12T20:49:03.461235520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1 pid=2404 runtime=io.containerd.runc.v2
Feb 12 20:49:03.554462 env[1135]: time="2024-02-12T20:49:03.554057613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-6grqq,Uid:db658cc7-262e-4298-a5ff-38d7be249a75,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\""
Feb 12 20:49:10.020592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586373796.mount: Deactivated successfully.
Feb 12 20:49:14.130110 env[1135]: time="2024-02-12T20:49:14.129999593Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:14.136880 env[1135]: time="2024-02-12T20:49:14.135988790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:14.138915 env[1135]: time="2024-02-12T20:49:14.138886867Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 20:49:14.139110 env[1135]: time="2024-02-12T20:49:14.139090690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:14.140013 env[1135]: time="2024-02-12T20:49:14.139979645Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 20:49:14.142803 env[1135]: time="2024-02-12T20:49:14.142776341Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:49:14.157522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount166704571.mount: Deactivated successfully.
Feb 12 20:49:14.163936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25080524.mount: Deactivated successfully.
Feb 12 20:49:14.170826 env[1135]: time="2024-02-12T20:49:14.170790588Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\""
Feb 12 20:49:14.171596 env[1135]: time="2024-02-12T20:49:14.171574374Z" level=info msg="StartContainer for \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\""
Feb 12 20:49:14.419078 env[1135]: time="2024-02-12T20:49:14.418180716Z" level=info msg="StartContainer for \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\" returns successfully"
Feb 12 20:49:15.158957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5-rootfs.mount: Deactivated successfully.
Feb 12 20:49:15.255490 env[1135]: time="2024-02-12T20:49:15.255304818Z" level=info msg="shim disconnected" id=5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5
Feb 12 20:49:15.256366 env[1135]: time="2024-02-12T20:49:15.256316873Z" level=warning msg="cleaning up after shim disconnected" id=5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5 namespace=k8s.io
Feb 12 20:49:15.256527 env[1135]: time="2024-02-12T20:49:15.256491852Z" level=info msg="cleaning up dead shim"
Feb 12 20:49:15.293713 env[1135]: time="2024-02-12T20:49:15.293637065Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:49:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2489 runtime=io.containerd.runc.v2\n"
Feb 12 20:49:16.188651 env[1135]: time="2024-02-12T20:49:16.188547675Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:49:16.244576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765229879.mount: Deactivated successfully.
Feb 12 20:49:16.278396 env[1135]: time="2024-02-12T20:49:16.278354882Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\""
Feb 12 20:49:16.294896 env[1135]: time="2024-02-12T20:49:16.294844986Z" level=info msg="StartContainer for \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\""
Feb 12 20:49:16.416310 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:49:16.418132 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:49:16.418382 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 20:49:16.420070 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:49:16.426371 env[1135]: time="2024-02-12T20:49:16.426327334Z" level=info msg="StartContainer for \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\" returns successfully"
Feb 12 20:49:16.442463 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:49:16.581968 env[1135]: time="2024-02-12T20:49:16.581905040Z" level=info msg="shim disconnected" id=3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae
Feb 12 20:49:16.588095 env[1135]: time="2024-02-12T20:49:16.581951617Z" level=warning msg="cleaning up after shim disconnected" id=3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae namespace=k8s.io
Feb 12 20:49:16.588095 env[1135]: time="2024-02-12T20:49:16.582108783Z" level=info msg="cleaning up dead shim"
Feb 12 20:49:16.601633 env[1135]: time="2024-02-12T20:49:16.601577714Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:49:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2554 runtime=io.containerd.runc.v2\n"
Feb 12 20:49:17.238824 env[1135]: time="2024-02-12T20:49:17.228279750Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:49:17.233984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae-rootfs.mount: Deactivated successfully.
Feb 12 20:49:17.260572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693678503.mount: Deactivated successfully.
Feb 12 20:49:17.266077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872578635.mount: Deactivated successfully.
Feb 12 20:49:17.284330 env[1135]: time="2024-02-12T20:49:17.284293033Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\""
Feb 12 20:49:17.286471 env[1135]: time="2024-02-12T20:49:17.285339032Z" level=info msg="StartContainer for \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\""
Feb 12 20:49:17.359453 env[1135]: time="2024-02-12T20:49:17.359408209Z" level=info msg="StartContainer for \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\" returns successfully"
Feb 12 20:49:17.644219 env[1135]: time="2024-02-12T20:49:17.644075285Z" level=info msg="shim disconnected" id=ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a
Feb 12 20:49:17.644219 env[1135]: time="2024-02-12T20:49:17.644176755Z" level=warning msg="cleaning up after shim disconnected" id=ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a namespace=k8s.io
Feb 12 20:49:17.645249 env[1135]: time="2024-02-12T20:49:17.645178490Z" level=info msg="cleaning up dead shim"
Feb 12 20:49:17.663446 env[1135]: time="2024-02-12T20:49:17.663372063Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:49:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2613 runtime=io.containerd.runc.v2\n"
Feb 12 20:49:17.796868 env[1135]: time="2024-02-12T20:49:17.796777136Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:17.803934 env[1135]: time="2024-02-12T20:49:17.803713437Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:17.810446 env[1135]: time="2024-02-12T20:49:17.810371074Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:49:17.810799 env[1135]: time="2024-02-12T20:49:17.810767189Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 20:49:17.818077 env[1135]: time="2024-02-12T20:49:17.818043381Z" level=info msg="CreateContainer within sandbox \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 20:49:17.840716 env[1135]: time="2024-02-12T20:49:17.840673421Z" level=info msg="CreateContainer within sandbox \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\""
Feb 12 20:49:17.843149 env[1135]: time="2024-02-12T20:49:17.843118402Z" level=info msg="StartContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\""
Feb 12 20:49:17.913939 env[1135]: time="2024-02-12T20:49:17.913854477Z" level=info msg="StartContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" returns successfully"
Feb 12 20:49:18.213959 env[1135]: time="2024-02-12T20:49:18.213904085Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:49:18.264560 env[1135]: time="2024-02-12T20:49:18.264507901Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\""
Feb 12 20:49:18.265195 env[1135]: time="2024-02-12T20:49:18.265162413Z" level=info msg="StartContainer for \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\""
Feb 12 20:49:18.390698 env[1135]: time="2024-02-12T20:49:18.390647255Z" level=info msg="StartContainer for \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\" returns successfully"
Feb 12 20:49:18.431418 env[1135]: time="2024-02-12T20:49:18.431370063Z" level=info msg="shim disconnected" id=84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c
Feb 12 20:49:18.431674 env[1135]: time="2024-02-12T20:49:18.431652906Z" level=warning msg="cleaning up after shim disconnected" id=84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c namespace=k8s.io
Feb 12 20:49:18.431795 env[1135]: time="2024-02-12T20:49:18.431778511Z" level=info msg="cleaning up dead shim"
Feb 12 20:49:18.450526 env[1135]: time="2024-02-12T20:49:18.450478199Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:49:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2704 runtime=io.containerd.runc.v2\n"
Feb 12 20:49:19.210196 env[1135]: time="2024-02-12T20:49:19.210123323Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:49:19.228810 kubelet[2075]: I0212 20:49:19.227168 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-6grqq" podStartSLOduration=-9.22337201862767e+09 pod.CreationTimestamp="2024-02-12 20:49:01 +0000 UTC" firstStartedPulling="2024-02-12 20:49:03.555408988 +0000 UTC m=+14.696996762" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:49:18.322551573 +0000 UTC m=+29.464139337" watchObservedRunningTime="2024-02-12 20:49:19.227105825 +0000 UTC m=+30.368693619"
Feb 12 20:49:19.235333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c-rootfs.mount: Deactivated successfully.
Feb 12 20:49:19.247165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265127732.mount: Deactivated successfully.
Feb 12 20:49:19.263798 env[1135]: time="2024-02-12T20:49:19.263645601Z" level=info msg="CreateContainer within sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\""
Feb 12 20:49:19.269327 env[1135]: time="2024-02-12T20:49:19.268383863Z" level=info msg="StartContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\""
Feb 12 20:49:19.346726 env[1135]: time="2024-02-12T20:49:19.346667833Z" level=info msg="StartContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" returns successfully"
Feb 12 20:49:19.595275 kubelet[2075]: I0212 20:49:19.593596 2075 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 20:49:19.661946 kubelet[2075]: I0212 20:49:19.661893 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:49:19.670907 kubelet[2075]: I0212 20:49:19.670874 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:49:19.736612 kubelet[2075]: I0212 20:49:19.736589 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6txjd\" (UniqueName: \"kubernetes.io/projected/132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3-kube-api-access-6txjd\") pod \"coredns-787d4945fb-v2lxd\" (UID: \"132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3\") " pod="kube-system/coredns-787d4945fb-v2lxd"
Feb 12 20:49:19.736825 kubelet[2075]: I0212 20:49:19.736811 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3-config-volume\") pod \"coredns-787d4945fb-v2lxd\" (UID: \"132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3\") " pod="kube-system/coredns-787d4945fb-v2lxd"
Feb 12 20:49:19.736956 kubelet[2075]: I0212 20:49:19.736942 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfctj\" (UniqueName: \"kubernetes.io/projected/7ba354b3-ae61-4117-9275-997bdbfb60c0-kube-api-access-xfctj\") pod \"coredns-787d4945fb-snr4r\" (UID: \"7ba354b3-ae61-4117-9275-997bdbfb60c0\") " pod="kube-system/coredns-787d4945fb-snr4r"
Feb 12 20:49:19.737086 kubelet[2075]: I0212 20:49:19.737075 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ba354b3-ae61-4117-9275-997bdbfb60c0-config-volume\") pod \"coredns-787d4945fb-snr4r\" (UID: \"7ba354b3-ae61-4117-9275-997bdbfb60c0\") " pod="kube-system/coredns-787d4945fb-snr4r"
Feb 12 20:49:19.967043 env[1135]: time="2024-02-12T20:49:19.966570617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v2lxd,Uid:132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3,Namespace:kube-system,Attempt:0,}"
Feb 12 20:49:19.979289 env[1135]: time="2024-02-12T20:49:19.979208165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-snr4r,Uid:7ba354b3-ae61-4117-9275-997bdbfb60c0,Namespace:kube-system,Attempt:0,}"
Feb 12 20:49:20.249798 kubelet[2075]: I0212 20:49:20.249635 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xdzp9" podStartSLOduration=-9.223372017605223e+09 pod.CreationTimestamp="2024-02-12 20:49:01 +0000 UTC" firstStartedPulling="2024-02-12 20:49:03.372122561 +0000 UTC m=+14.513710325" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:49:20.243249187 +0000 UTC m=+31.384836961" watchObservedRunningTime="2024-02-12 20:49:20.249553892 +0000 UTC m=+31.391141656"
Feb 12 20:49:22.259008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 20:49:22.263051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 20:49:22.261044 systemd-networkd[1032]: cilium_host: Link UP
Feb 12 20:49:22.261429 systemd-networkd[1032]: cilium_net: Link UP
Feb 12 20:49:22.263500 systemd-networkd[1032]: cilium_net: Gained carrier
Feb 12 20:49:22.270015 systemd-networkd[1032]: cilium_host: Gained carrier
Feb 12 20:49:22.319223 systemd-networkd[1032]: cilium_net: Gained IPv6LL
Feb 12 20:49:22.381097 systemd-networkd[1032]: cilium_vxlan: Link UP
Feb 12 20:49:22.381107 systemd-networkd[1032]: cilium_vxlan: Gained carrier
Feb 12 20:49:22.432906 systemd-networkd[1032]: cilium_host: Gained IPv6LL
Feb 12 20:49:23.175816 kernel: NET: Registered PF_ALG protocol family
Feb 12 20:49:23.968984 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL
Feb 12 20:49:24.023915 systemd-networkd[1032]: lxc_health: Link UP
Feb 12 20:49:24.038516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:49:24.036854 systemd-networkd[1032]: lxc_health: Gained carrier
Feb 12 20:49:24.562302 systemd-networkd[1032]: lxc1f8b3a122c0c: Link UP
Feb 12 20:49:24.572855 kernel: eth0: renamed from tmpeaece
Feb 12 20:49:24.584550 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1f8b3a122c0c: link becomes ready
Feb 12 20:49:24.585021 systemd-networkd[1032]: lxc1f8b3a122c0c: Gained carrier
Feb 12 20:49:24.616674 systemd-networkd[1032]: lxc22a7c7201d85: Link UP
Feb 12 20:49:24.625816 kernel: eth0: renamed from tmp2d631
Feb 12 20:49:24.644786 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc22a7c7201d85: link becomes ready
Feb 12 20:49:24.636714 systemd-networkd[1032]: lxc22a7c7201d85: Gained carrier
Feb 12 20:49:25.633933 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Feb 12 20:49:26.400917 systemd-networkd[1032]: lxc1f8b3a122c0c: Gained IPv6LL
Feb 12 20:49:26.593152 systemd-networkd[1032]: lxc22a7c7201d85: Gained IPv6LL
Feb 12 20:49:28.346006 kubelet[2075]: I0212 20:49:28.345979 2075 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 20:49:29.206713 env[1135]: time="2024-02-12T20:49:29.206603139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:49:29.207180 env[1135]: time="2024-02-12T20:49:29.207153974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:49:29.207305 env[1135]: time="2024-02-12T20:49:29.207281784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:49:29.207592 env[1135]: time="2024-02-12T20:49:29.207566609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eaece761c86bad18acc622bff6e98921ec2e41791ea7a40d4d5b849a0bbefc3d pid=3238 runtime=io.containerd.runc.v2
Feb 12 20:49:29.217052 env[1135]: time="2024-02-12T20:49:29.216964111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:49:29.217273 env[1135]: time="2024-02-12T20:49:29.217216825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:49:29.217402 env[1135]: time="2024-02-12T20:49:29.217379321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:49:29.217663 env[1135]: time="2024-02-12T20:49:29.217637966Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6317abb3765a92e8067d9e7070da18576f848e1bd6e9c2a294832b03f952fc pid=3241 runtime=io.containerd.runc.v2
Feb 12 20:49:29.348022 env[1135]: time="2024-02-12T20:49:29.347967275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-v2lxd,Uid:132e9841-7c7f-4bc3-afba-ce3e7ee2d0f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaece761c86bad18acc622bff6e98921ec2e41791ea7a40d4d5b849a0bbefc3d\""
Feb 12 20:49:29.360220 env[1135]: time="2024-02-12T20:49:29.358142406Z" level=info msg="CreateContainer within sandbox \"eaece761c86bad18acc622bff6e98921ec2e41791ea7a40d4d5b849a0bbefc3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:49:29.387406 env[1135]: time="2024-02-12T20:49:29.387359054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-snr4r,Uid:7ba354b3-ae61-4117-9275-997bdbfb60c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d6317abb3765a92e8067d9e7070da18576f848e1bd6e9c2a294832b03f952fc\""
Feb 12 20:49:29.398594 env[1135]: time="2024-02-12T20:49:29.398502746Z" level=info msg="CreateContainer within sandbox \"2d6317abb3765a92e8067d9e7070da18576f848e1bd6e9c2a294832b03f952fc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:49:29.408096 env[1135]: time="2024-02-12T20:49:29.408059907Z" level=info msg="CreateContainer within sandbox \"eaece761c86bad18acc622bff6e98921ec2e41791ea7a40d4d5b849a0bbefc3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ecb619dadef1eaa3bfc60193d9dd0098de07069898fc7c366dbeb775ea95fa4\""
Feb 12 20:49:29.410677 env[1135]: time="2024-02-12T20:49:29.408875349Z" level=info msg="StartContainer for \"8ecb619dadef1eaa3bfc60193d9dd0098de07069898fc7c366dbeb775ea95fa4\""
Feb 12 20:49:29.425401 env[1135]: time="2024-02-12T20:49:29.425251508Z" level=info msg="CreateContainer within sandbox \"2d6317abb3765a92e8067d9e7070da18576f848e1bd6e9c2a294832b03f952fc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eca98e8f9599ae62df31686f96aa4ffb4ea8cb0beb451250f19567c1b4cf007b\""
Feb 12 20:49:29.429203 env[1135]: time="2024-02-12T20:49:29.429036448Z" level=info msg="StartContainer for \"eca98e8f9599ae62df31686f96aa4ffb4ea8cb0beb451250f19567c1b4cf007b\""
Feb 12 20:49:29.569334 env[1135]: time="2024-02-12T20:49:29.568142080Z" level=info msg="StartContainer for \"eca98e8f9599ae62df31686f96aa4ffb4ea8cb0beb451250f19567c1b4cf007b\" returns successfully"
Feb 12 20:49:29.570395 env[1135]: time="2024-02-12T20:49:29.569701379Z" level=info msg="StartContainer for \"8ecb619dadef1eaa3bfc60193d9dd0098de07069898fc7c366dbeb775ea95fa4\" returns successfully"
Feb 12 20:49:30.227229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880966906.mount: Deactivated successfully.
Feb 12 20:49:30.290089 kubelet[2075]: I0212 20:49:30.290015 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-v2lxd" podStartSLOduration=29.289899991 pod.CreationTimestamp="2024-02-12 20:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:49:30.281596566 +0000 UTC m=+41.423184400" watchObservedRunningTime="2024-02-12 20:49:30.289899991 +0000 UTC m=+41.431487805"
Feb 12 20:49:30.350105 kubelet[2075]: I0212 20:49:30.350046 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-snr4r" podStartSLOduration=29.350004135 pod.CreationTimestamp="2024-02-12 20:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:49:30.348976665 +0000 UTC m=+41.490564439" watchObservedRunningTime="2024-02-12 20:49:30.350004135 +0000 UTC m=+41.491591909"
Feb 12 20:49:54.108870 systemd[1]: Started sshd@5-172.24.4.188:22-172.24.4.1:35860.service.
Feb 12 20:49:55.705383 sshd[3438]: Accepted publickey for core from 172.24.4.1 port 35860 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:49:55.709478 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:49:55.724865 systemd-logind[1120]: New session 6 of user core.
Feb 12 20:49:55.726642 systemd[1]: Started session-6.scope.
Feb 12 20:49:56.630704 sshd[3438]: pam_unix(sshd:session): session closed for user core
Feb 12 20:49:56.636693 systemd[1]: sshd@5-172.24.4.188:22-172.24.4.1:35860.service: Deactivated successfully.
Feb 12 20:49:56.638458 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 20:49:56.640254 systemd-logind[1120]: Session 6 logged out. Waiting for processes to exit.
Feb 12 20:49:56.643801 systemd-logind[1120]: Removed session 6.
Feb 12 20:50:01.637158 systemd[1]: Started sshd@6-172.24.4.188:22-172.24.4.1:58162.service.
Feb 12 20:50:02.827919 sshd[3453]: Accepted publickey for core from 172.24.4.1 port 58162 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:02.830683 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:02.842644 systemd[1]: Started session-7.scope.
Feb 12 20:50:02.844718 systemd-logind[1120]: New session 7 of user core.
Feb 12 20:50:03.754416 sshd[3453]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:03.761449 systemd[1]: sshd@6-172.24.4.188:22-172.24.4.1:58162.service: Deactivated successfully.
Feb 12 20:50:03.763293 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 20:50:03.765026 systemd-logind[1120]: Session 7 logged out. Waiting for processes to exit.
Feb 12 20:50:03.769521 systemd-logind[1120]: Removed session 7.
Feb 12 20:50:08.762471 systemd[1]: Started sshd@7-172.24.4.188:22-172.24.4.1:53522.service.
Feb 12 20:50:10.379108 sshd[3469]: Accepted publickey for core from 172.24.4.1 port 53522 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:10.381868 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:10.394621 systemd[1]: Started session-8.scope.
Feb 12 20:50:10.395154 systemd-logind[1120]: New session 8 of user core.
Feb 12 20:50:11.170965 sshd[3469]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:11.176344 systemd[1]: sshd@7-172.24.4.188:22-172.24.4.1:53522.service: Deactivated successfully.
Feb 12 20:50:11.177843 systemd-logind[1120]: Session 8 logged out. Waiting for processes to exit.
Feb 12 20:50:11.179597 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 20:50:11.182695 systemd-logind[1120]: Removed session 8.
Feb 12 20:50:16.178607 systemd[1]: Started sshd@8-172.24.4.188:22-172.24.4.1:47418.service.
Feb 12 20:50:17.317394 sshd[3483]: Accepted publickey for core from 172.24.4.1 port 47418 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:17.323308 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:17.338175 systemd[1]: Started session-9.scope.
Feb 12 20:50:17.339277 systemd-logind[1120]: New session 9 of user core.
Feb 12 20:50:18.096069 sshd[3483]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:18.100838 systemd[1]: Started sshd@9-172.24.4.188:22-172.24.4.1:47420.service.
Feb 12 20:50:18.103883 systemd[1]: sshd@8-172.24.4.188:22-172.24.4.1:47418.service: Deactivated successfully.
Feb 12 20:50:18.114289 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 20:50:18.115611 systemd-logind[1120]: Session 9 logged out. Waiting for processes to exit.
Feb 12 20:50:18.119058 systemd-logind[1120]: Removed session 9.
Feb 12 20:50:19.444472 sshd[3495]: Accepted publickey for core from 172.24.4.1 port 47420 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:19.446951 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:19.457455 systemd-logind[1120]: New session 10 of user core.
Feb 12 20:50:19.457934 systemd[1]: Started session-10.scope.
Feb 12 20:50:21.246288 sshd[3495]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:21.250806 systemd[1]: Started sshd@10-172.24.4.188:22-172.24.4.1:47426.service.
Feb 12 20:50:21.270486 systemd[1]: sshd@9-172.24.4.188:22-172.24.4.1:47420.service: Deactivated successfully.
Feb 12 20:50:21.272885 systemd-logind[1120]: Session 10 logged out. Waiting for processes to exit.
Feb 12 20:50:21.275471 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 20:50:21.279863 systemd-logind[1120]: Removed session 10.
Feb 12 20:50:22.664303 sshd[3506]: Accepted publickey for core from 172.24.4.1 port 47426 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:22.669313 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:22.685501 systemd-logind[1120]: New session 11 of user core.
Feb 12 20:50:22.686926 systemd[1]: Started session-11.scope.
Feb 12 20:50:23.481417 sshd[3506]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:23.486391 systemd[1]: sshd@10-172.24.4.188:22-172.24.4.1:47426.service: Deactivated successfully.
Feb 12 20:50:23.488214 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 20:50:23.492216 systemd-logind[1120]: Session 11 logged out. Waiting for processes to exit.
Feb 12 20:50:23.495430 systemd-logind[1120]: Removed session 11.
Feb 12 20:50:28.486339 systemd[1]: Started sshd@11-172.24.4.188:22-172.24.4.1:45058.service.
Feb 12 20:50:29.691187 sshd[3520]: Accepted publickey for core from 172.24.4.1 port 45058 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:29.693937 sshd[3520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:29.704684 systemd-logind[1120]: New session 12 of user core.
Feb 12 20:50:29.705775 systemd[1]: Started session-12.scope.
Feb 12 20:50:30.509276 sshd[3520]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:30.515424 systemd[1]: Started sshd@12-172.24.4.188:22-172.24.4.1:45072.service.
Feb 12 20:50:30.518352 systemd[1]: sshd@11-172.24.4.188:22-172.24.4.1:45058.service: Deactivated successfully.
Feb 12 20:50:30.528028 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 20:50:30.528658 systemd-logind[1120]: Session 12 logged out. Waiting for processes to exit.
Feb 12 20:50:30.537975 systemd-logind[1120]: Removed session 12.
Feb 12 20:50:31.745103 sshd[3531]: Accepted publickey for core from 172.24.4.1 port 45072 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:31.750898 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:31.764654 systemd[1]: Started session-13.scope.
Feb 12 20:50:31.765180 systemd-logind[1120]: New session 13 of user core.
Feb 12 20:50:33.492570 sshd[3531]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:33.495442 systemd[1]: Started sshd@13-172.24.4.188:22-172.24.4.1:45076.service.
Feb 12 20:50:33.502868 systemd[1]: sshd@12-172.24.4.188:22-172.24.4.1:45072.service: Deactivated successfully.
Feb 12 20:50:33.511971 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 20:50:33.513858 systemd-logind[1120]: Session 13 logged out. Waiting for processes to exit.
Feb 12 20:50:33.516562 systemd-logind[1120]: Removed session 13.
Feb 12 20:50:34.732030 sshd[3544]: Accepted publickey for core from 172.24.4.1 port 45076 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:34.734988 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:34.746832 systemd[1]: Started session-14.scope.
Feb 12 20:50:34.747347 systemd-logind[1120]: New session 14 of user core.
Feb 12 20:50:36.856476 sshd[3544]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:36.858471 systemd[1]: Started sshd@14-172.24.4.188:22-172.24.4.1:41696.service.
Feb 12 20:50:36.866355 systemd[1]: sshd@13-172.24.4.188:22-172.24.4.1:45076.service: Deactivated successfully.
Feb 12 20:50:36.869061 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 20:50:36.871857 systemd-logind[1120]: Session 14 logged out. Waiting for processes to exit.
Feb 12 20:50:36.879006 systemd-logind[1120]: Removed session 14.
Feb 12 20:50:38.271628 sshd[3610]: Accepted publickey for core from 172.24.4.1 port 41696 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:38.273455 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:38.285249 systemd-logind[1120]: New session 15 of user core.
Feb 12 20:50:38.286942 systemd[1]: Started session-15.scope.
Feb 12 20:50:39.622655 sshd[3610]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:39.629875 systemd[1]: Started sshd@15-172.24.4.188:22-172.24.4.1:41708.service.
Feb 12 20:50:39.639941 systemd[1]: sshd@14-172.24.4.188:22-172.24.4.1:41696.service: Deactivated successfully.
Feb 12 20:50:39.647879 systemd-logind[1120]: Session 15 logged out. Waiting for processes to exit.
Feb 12 20:50:39.648163 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 20:50:39.651184 systemd-logind[1120]: Removed session 15.
Feb 12 20:50:40.877065 sshd[3621]: Accepted publickey for core from 172.24.4.1 port 41708 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:40.879592 sshd[3621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:40.890218 systemd-logind[1120]: New session 16 of user core.
Feb 12 20:50:40.891885 systemd[1]: Started session-16.scope.
Feb 12 20:50:41.844013 sshd[3621]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:41.849348 systemd[1]: sshd@15-172.24.4.188:22-172.24.4.1:41708.service: Deactivated successfully.
Feb 12 20:50:41.851693 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 20:50:41.851828 systemd-logind[1120]: Session 16 logged out. Waiting for processes to exit.
Feb 12 20:50:41.854229 systemd-logind[1120]: Removed session 16.
Feb 12 20:50:46.851212 systemd[1]: Started sshd@16-172.24.4.188:22-172.24.4.1:53878.service.
Feb 12 20:50:48.305055 sshd[3663]: Accepted publickey for core from 172.24.4.1 port 53878 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:48.308071 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:48.319858 systemd-logind[1120]: New session 17 of user core.
Feb 12 20:50:48.320517 systemd[1]: Started session-17.scope.
Feb 12 20:50:49.093614 sshd[3663]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:49.099492 systemd[1]: sshd@16-172.24.4.188:22-172.24.4.1:53878.service: Deactivated successfully.
Feb 12 20:50:49.101506 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 20:50:49.105469 systemd-logind[1120]: Session 17 logged out. Waiting for processes to exit.
Feb 12 20:50:49.107553 systemd-logind[1120]: Removed session 17.
Feb 12 20:50:54.101860 systemd[1]: Started sshd@17-172.24.4.188:22-172.24.4.1:53886.service.
Feb 12 20:50:55.429374 sshd[3678]: Accepted publickey for core from 172.24.4.1 port 53886 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:50:55.434144 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:50:55.446066 systemd-logind[1120]: New session 18 of user core.
Feb 12 20:50:55.448836 systemd[1]: Started session-18.scope.
Feb 12 20:50:56.175699 sshd[3678]: pam_unix(sshd:session): session closed for user core
Feb 12 20:50:56.182282 systemd[1]: sshd@17-172.24.4.188:22-172.24.4.1:53886.service: Deactivated successfully.
Feb 12 20:50:56.183870 systemd-logind[1120]: Session 18 logged out. Waiting for processes to exit.
Feb 12 20:50:56.185409 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 20:50:56.187399 systemd-logind[1120]: Removed session 18.
Feb 12 20:51:01.182508 systemd[1]: Started sshd@18-172.24.4.188:22-172.24.4.1:55054.service.
Feb 12 20:51:02.445623 sshd[3691]: Accepted publickey for core from 172.24.4.1 port 55054 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:51:02.448475 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:51:02.460813 systemd[1]: Started session-19.scope.
Feb 12 20:51:02.461271 systemd-logind[1120]: New session 19 of user core.
Feb 12 20:51:03.305180 sshd[3691]: pam_unix(sshd:session): session closed for user core
Feb 12 20:51:03.309120 systemd[1]: Started sshd@19-172.24.4.188:22-172.24.4.1:55062.service.
Feb 12 20:51:03.314899 systemd[1]: sshd@18-172.24.4.188:22-172.24.4.1:55054.service: Deactivated successfully.
Feb 12 20:51:03.318544 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 20:51:03.319252 systemd-logind[1120]: Session 19 logged out. Waiting for processes to exit.
Feb 12 20:51:03.329592 systemd-logind[1120]: Removed session 19.
Feb 12 20:51:04.628266 sshd[3704]: Accepted publickey for core from 172.24.4.1 port 55062 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:51:04.630871 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:51:04.648057 systemd[1]: Started session-20.scope.
Feb 12 20:51:04.648656 systemd-logind[1120]: New session 20 of user core.
Feb 12 20:51:07.067088 env[1135]: time="2024-02-12T20:51:07.067022390Z" level=info msg="StopContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" with timeout 30 (s)"
Feb 12 20:51:07.067764 env[1135]: time="2024-02-12T20:51:07.067458317Z" level=info msg="Stop container \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" with signal terminated"
Feb 12 20:51:07.086016 systemd[1]: run-containerd-runc-k8s.io-536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87-runc.tUEWjX.mount: Deactivated successfully.
Feb 12 20:51:07.110894 env[1135]: time="2024-02-12T20:51:07.110822833Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:51:07.117004 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000-rootfs.mount: Deactivated successfully.
Feb 12 20:51:07.121389 env[1135]: time="2024-02-12T20:51:07.121357799Z" level=info msg="StopContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" with timeout 1 (s)"
Feb 12 20:51:07.124636 env[1135]: time="2024-02-12T20:51:07.121994652Z" level=info msg="Stop container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" with signal terminated"
Feb 12 20:51:07.128475 systemd-networkd[1032]: lxc_health: Link DOWN
Feb 12 20:51:07.128483 systemd-networkd[1032]: lxc_health: Lost carrier
Feb 12 20:51:07.133645 env[1135]: time="2024-02-12T20:51:07.133604031Z" level=info msg="shim disconnected" id=c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000
Feb 12 20:51:07.133942 env[1135]: time="2024-02-12T20:51:07.133921214Z" level=warning msg="cleaning up after shim disconnected" id=c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000 namespace=k8s.io
Feb 12 20:51:07.134049 env[1135]: time="2024-02-12T20:51:07.134033816Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:07.158314 env[1135]: time="2024-02-12T20:51:07.157164658Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3759 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:07.167411 env[1135]: time="2024-02-12T20:51:07.167376788Z" level=info msg="StopContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" returns successfully"
Feb 12 20:51:07.168236 env[1135]: time="2024-02-12T20:51:07.168212744Z" level=info msg="StopPodSandbox for \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\""
Feb 12 20:51:07.168400 env[1135]: time="2024-02-12T20:51:07.168370129Z" level=info msg="Container to stop \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.170636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1-shm.mount: Deactivated successfully.
Feb 12 20:51:07.197661 env[1135]: time="2024-02-12T20:51:07.197614834Z" level=info msg="shim disconnected" id=536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87
Feb 12 20:51:07.198127 env[1135]: time="2024-02-12T20:51:07.198098711Z" level=warning msg="cleaning up after shim disconnected" id=536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87 namespace=k8s.io
Feb 12 20:51:07.198217 env[1135]: time="2024-02-12T20:51:07.198201614Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:07.207445 env[1135]: time="2024-02-12T20:51:07.207407760Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3802 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:07.209944 env[1135]: time="2024-02-12T20:51:07.209908213Z" level=info msg="StopContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" returns successfully"
Feb 12 20:51:07.210650 env[1135]: time="2024-02-12T20:51:07.210626610Z" level=info msg="StopPodSandbox for \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\""
Feb 12 20:51:07.210963 env[1135]: time="2024-02-12T20:51:07.210889973Z" level=info msg="Container to stop \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.211055 env[1135]: time="2024-02-12T20:51:07.211034574Z" level=info msg="Container to stop \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.211127 env[1135]: time="2024-02-12T20:51:07.211108563Z" level=info msg="Container to stop \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.211196 env[1135]: time="2024-02-12T20:51:07.211177261Z" level=info msg="Container to stop \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.211265 env[1135]: time="2024-02-12T20:51:07.211246801Z" level=info msg="Container to stop \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:07.214716 env[1135]: time="2024-02-12T20:51:07.214672128Z" level=info msg="shim disconnected" id=a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1
Feb 12 20:51:07.215045 env[1135]: time="2024-02-12T20:51:07.215025700Z" level=warning msg="cleaning up after shim disconnected" id=a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1 namespace=k8s.io
Feb 12 20:51:07.215135 env[1135]: time="2024-02-12T20:51:07.215120949Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:07.231979 env[1135]: time="2024-02-12T20:51:07.231942973Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:07.232441 env[1135]: time="2024-02-12T20:51:07.232415197Z" level=info msg="TearDown network for sandbox \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" successfully"
Feb 12 20:51:07.232525 env[1135]: time="2024-02-12T20:51:07.232506318Z" level=info msg="StopPodSandbox for \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" returns successfully"
Feb 12 20:51:07.253287 env[1135]: time="2024-02-12T20:51:07.253245179Z" level=info msg="shim disconnected" id=d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741
Feb 12 20:51:07.253591 env[1135]: time="2024-02-12T20:51:07.253561451Z" level=warning msg="cleaning up after shim disconnected" id=d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741 namespace=k8s.io
Feb 12 20:51:07.253670 env[1135]: time="2024-02-12T20:51:07.253654065Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:07.262028 env[1135]: time="2024-02-12T20:51:07.261517596Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3857 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:07.262187 env[1135]: time="2024-02-12T20:51:07.262032521Z" level=info msg="TearDown network for sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" successfully"
Feb 12 20:51:07.262187 env[1135]: time="2024-02-12T20:51:07.262059550Z" level=info msg="StopPodSandbox for \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" returns successfully"
Feb 12 20:51:07.397034 kubelet[2075]: I0212 20:51:07.393875 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpd9h\" (UniqueName: \"kubernetes.io/projected/db658cc7-262e-4298-a5ff-38d7be249a75-kube-api-access-rpd9h\") pod \"db658cc7-262e-4298-a5ff-38d7be249a75\" (UID: \"db658cc7-262e-4298-a5ff-38d7be249a75\") "
Feb 12 20:51:07.419872 kubelet[2075]: I0212 20:51:07.419820 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-hostproc\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.420024 kubelet[2075]: I0212 20:51:07.419910 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-bpf-maps\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.420024 kubelet[2075]: I0212 20:51:07.419981 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hr4\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-kube-api-access-74hr4\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.420166 kubelet[2075]: I0212 20:51:07.420034 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-run\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.420166 kubelet[2075]: I0212 20:51:07.420093 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path\") pod \"db658cc7-262e-4298-a5ff-38d7be249a75\" (UID: \"db658cc7-262e-4298-a5ff-38d7be249a75\") "
Feb 12 20:51:07.439116 kubelet[2075]: W0212 20:51:07.439050 2075 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/db658cc7-262e-4298-a5ff-38d7be249a75/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:51:07.456610 kubelet[2075]: I0212 20:51:07.456505 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.457122 kubelet[2075]: I0212 20:51:07.444852 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db658cc7-262e-4298-a5ff-38d7be249a75" (UID: "db658cc7-262e-4298-a5ff-38d7be249a75"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:51:07.460547 kubelet[2075]: I0212 20:51:07.460472 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.460965 kubelet[2075]: I0212 20:51:07.460886 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.477678 kubelet[2075]: I0212 20:51:07.477601 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db658cc7-262e-4298-a5ff-38d7be249a75-kube-api-access-rpd9h" (OuterVolumeSpecName: "kube-api-access-rpd9h") pod "db658cc7-262e-4298-a5ff-38d7be249a75" (UID: "db658cc7-262e-4298-a5ff-38d7be249a75"). InnerVolumeSpecName "kube-api-access-rpd9h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:51:07.478016 kubelet[2075]: I0212 20:51:07.477686 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-kube-api-access-74hr4" (OuterVolumeSpecName: "kube-api-access-74hr4") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "kube-api-access-74hr4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:51:07.520916 kubelet[2075]: I0212 20:51:07.520868 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-lib-modules\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.521241 kubelet[2075]: I0212 20:51:07.521214 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-kernel\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.521639 kubelet[2075]: I0212 20:51:07.520996 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.521639 kubelet[2075]: I0212 20:51:07.521293 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.521956 kubelet[2075]: I0212 20:51:07.521928 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.522185 kubelet[2075]: I0212 20:51:07.522160 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.522390 kubelet[2075]: I0212 20:51:07.522368 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-xtables-lock\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.522586 kubelet[2075]: I0212 20:51:07.522563 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-cgroup\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.522827 kubelet[2075]: I0212 20:51:07.522801 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-net\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.523042 kubelet[2075]: I0212 20:51:07.523019 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.523236 kubelet[2075]: I0212 20:51:07.523214 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cni-path\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.523425 kubelet[2075]: I0212 20:51:07.523403 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-etc-cni-netd\") pod \"2520c3fc-ab18-42cc-8378-e8af564097f6\" (UID: \"2520c3fc-ab18-42cc-8378-e8af564097f6\") "
Feb 12 20:51:07.525107 kubelet[2075]: I0212 20:51:07.525074 2075 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-lib-modules\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.525496 kubelet[2075]: I0212 20:51:07.525436 2075 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-kernel\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.525828 kubelet[2075]: I0212 20:51:07.525802 2075 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rpd9h\" (UniqueName: \"kubernetes.io/projected/db658cc7-262e-4298-a5ff-38d7be249a75-kube-api-access-rpd9h\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.526196 kubelet[2075]: I0212 20:51:07.526171 2075 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-hostproc\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.526615 kubelet[2075]: I0212 20:51:07.526587 2075 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-bpf-maps\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.526855 kubelet[2075]: I0212 20:51:07.526829 2075 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-74hr4\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-kube-api-access-74hr4\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.527049 kubelet[2075]: I0212 20:51:07.527027 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-run\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.527558 kubelet[2075]: I0212 20:51:07.527525 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db658cc7-262e-4298-a5ff-38d7be249a75-cilium-config-path\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.527910 kubelet[2075]: I0212 20:51:07.527869 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.528256 kubelet[2075]: W0212 20:51:07.521979 2075 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2520c3fc-ab18-42cc-8378-e8af564097f6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:51:07.538488 kubelet[2075]: I0212 20:51:07.538378 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:51:07.539136 kubelet[2075]: I0212 20:51:07.539030 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.540208 kubelet[2075]: I0212 20:51:07.540166 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.540542 kubelet[2075]: I0212 20:51:07.540465 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.540938 kubelet[2075]: I0212 20:51:07.540863 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:07.541696 kubelet[2075]: I0212 20:51:07.541645 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:51:07.546916 kubelet[2075]: I0212 20:51:07.546863 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2520c3fc-ab18-42cc-8378-e8af564097f6" (UID: "2520c3fc-ab18-42cc-8378-e8af564097f6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:51:07.600877 kubelet[2075]: I0212 20:51:07.600843 2075 scope.go:115] "RemoveContainer" containerID="c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000"
Feb 12 20:51:07.608914 env[1135]: time="2024-02-12T20:51:07.608842643Z" level=info msg="RemoveContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\""
Feb 12 20:51:07.626640 env[1135]: time="2024-02-12T20:51:07.626557349Z" level=info msg="RemoveContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" returns successfully"
Feb 12 20:51:07.631027 kubelet[2075]: I0212 20:51:07.630994 2075 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-host-proc-sys-net\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.631573 kubelet[2075]: I0212 20:51:07.631545 2075 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2520c3fc-ab18-42cc-8378-e8af564097f6-hubble-tls\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.631934 kubelet[2075]: I0212 20:51:07.631910 2075 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cni-path\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.632287 kubelet[2075]: I0212 20:51:07.632226 2075 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-xtables-lock\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.632515 kubelet[2075]: I0212 20:51:07.632491 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-cgroup\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.632868 kubelet[2075]: I0212 20:51:07.632806 2075 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2520c3fc-ab18-42cc-8378-e8af564097f6-etc-cni-netd\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.633057 kubelet[2075]: I0212 20:51:07.633033 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2520c3fc-ab18-42cc-8378-e8af564097f6-cilium-config-path\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.633278 kubelet[2075]: I0212 20:51:07.633211 2075 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2520c3fc-ab18-42cc-8378-e8af564097f6-clustermesh-secrets\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:07.634421 kubelet[2075]: I0212 20:51:07.634387 2075 scope.go:115] "RemoveContainer" containerID="c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000"
Feb 12 20:51:07.635671 env[1135]: time="2024-02-12T20:51:07.635461519Z" level=error msg="ContainerStatus for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": not found"
Feb 12 20:51:07.636164 kubelet[2075]: E0212 20:51:07.636088 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": not found" containerID="c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000"
Feb 12 20:51:07.640861 kubelet[2075]: I0212 20:51:07.640804 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000} err="failed to get container status \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": rpc error: code = NotFound desc = an error occurred when try to find container \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": not found"
Feb 12 20:51:07.640861 kubelet[2075]: I0212 20:51:07.640862 2075 scope.go:115] "RemoveContainer" containerID="536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87"
Feb 12 20:51:07.650393 env[1135]: time="2024-02-12T20:51:07.649026521Z" level=info msg="RemoveContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\""
Feb 12 20:51:07.666860 env[1135]: time="2024-02-12T20:51:07.666066914Z" level=info msg="RemoveContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" returns successfully"
Feb 12 20:51:07.668265 kubelet[2075]: I0212 20:51:07.668227 2075 scope.go:115] "RemoveContainer" containerID="84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c"
Feb 12 20:51:07.673227 env[1135]: time="2024-02-12T20:51:07.673158219Z" level=info msg="RemoveContainer for \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\""
Feb 12 20:51:07.678370 env[1135]: time="2024-02-12T20:51:07.678251781Z" level=info msg="RemoveContainer for \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\" returns successfully"
Feb 12 20:51:07.684161 kubelet[2075]: I0212 20:51:07.684122 2075 scope.go:115] "RemoveContainer" containerID="ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a"
Feb 12 20:51:07.687722 env[1135]: time="2024-02-12T20:51:07.687470280Z" level=info msg="RemoveContainer for \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\""
Feb 12 20:51:07.690863 env[1135]: time="2024-02-12T20:51:07.690836225Z" level=info msg="RemoveContainer for \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\" returns successfully"
Feb 12 20:51:07.691178 kubelet[2075]: I0212 20:51:07.691164 2075 scope.go:115] "RemoveContainer" containerID="3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae"
Feb 12 20:51:07.692812 env[1135]: time="2024-02-12T20:51:07.692787611Z" level=info msg="RemoveContainer for \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\""
Feb 12 20:51:07.696244 env[1135]: time="2024-02-12T20:51:07.696212337Z" level=info msg="RemoveContainer for \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\" returns successfully"
Feb 12 20:51:07.698331 kubelet[2075]: I0212 20:51:07.697371 2075 scope.go:115] "RemoveContainer" containerID="5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5"
Feb 12 20:51:07.700203 env[1135]: time="2024-02-12T20:51:07.700170852Z" level=info msg="RemoveContainer for \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\""
Feb 12 20:51:07.703324 env[1135]: time="2024-02-12T20:51:07.703278353Z" level=info msg="RemoveContainer for \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\" returns successfully"
Feb 12 20:51:07.703583 kubelet[2075]: I0212 20:51:07.703476 2075 scope.go:115] "RemoveContainer" containerID="536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87"
Feb 12 20:51:07.703851 env[1135]: time="2024-02-12T20:51:07.703730651Z" level=error msg="ContainerStatus for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": not found"
Feb 12 20:51:07.704017 kubelet[2075]: E0212 20:51:07.703990 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": not found" containerID="536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87"
Feb 12 20:51:07.705782 kubelet[2075]: I0212 20:51:07.705716 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87} err="failed to get container status \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": rpc error: code = NotFound desc = an error occurred when try to find container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": not found"
Feb 12 20:51:07.706483 kubelet[2075]: I0212 20:51:07.706464 2075 scope.go:115] "RemoveContainer" containerID="84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c"
Feb 12 20:51:07.706853 env[1135]: time="2024-02-12T20:51:07.706787978Z" level=error msg="ContainerStatus for \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\": not found"
Feb 12 20:51:07.707046 kubelet[2075]: E0212 20:51:07.707033 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\": not found" containerID="84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c"
Feb 12 20:51:07.707111 kubelet[2075]: I0212 20:51:07.707063 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c} err="failed to get container status \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\": rpc error: code = NotFound desc = an error occurred when try to find container \"84afdcd0b6346339f77ddfdaf5f81a097abf8e2d2b4bb081d72d30fd94c8565c\": not found"
Feb 12 20:51:07.707111 kubelet[2075]: I0212 20:51:07.707073 2075 scope.go:115] "RemoveContainer" containerID="ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a"
Feb 12 20:51:07.707243 env[1135]: time="2024-02-12T20:51:07.707200000Z" level=error msg="ContainerStatus for \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\": not found"
Feb 12 20:51:07.707329 kubelet[2075]: E0212 20:51:07.707317 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\": not found" containerID="ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a"
Feb 12 20:51:07.707393 kubelet[2075]: I0212 20:51:07.707343 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a} err="failed to get container status \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee2f67a3ff24da1183faeaeb99627cdbbd648f06d51cc237de3a419dae3e496a\": not found"
Feb 12 20:51:07.707393 kubelet[2075]: I0212 20:51:07.707353 2075 scope.go:115] "RemoveContainer" containerID="3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae"
Feb 12 20:51:07.707605 env[1135]: time="2024-02-12T20:51:07.707558451Z" level=error msg="ContainerStatus for \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\": not found"
Feb 12 20:51:07.707811 kubelet[2075]: E0212 20:51:07.707788 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\": not found" containerID="3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae"
Feb 12 20:51:07.707869 kubelet[2075]: I0212 20:51:07.707815 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae} err="failed to get container status \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\": rpc error: code = NotFound desc = an error occurred when try to find container \"3342693ecc0d3a377efad36fbd3ad049d7ca549a20dce4597cca6b0f5fe366ae\": not found"
Feb 12 20:51:07.707869 kubelet[2075]: I0212 20:51:07.707824 2075 scope.go:115] "RemoveContainer" containerID="5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5"
Feb 12 20:51:07.707981 env[1135]: time="2024-02-12T20:51:07.707938162Z" level=error msg="ContainerStatus for \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\": not found"
Feb 12 20:51:07.708073 kubelet[2075]: E0212 20:51:07.708059 2075 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\": not found" containerID="5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5"
Feb 12 20:51:07.708142 kubelet[2075]: I0212 20:51:07.708089 2075 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd
ID:5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5} err="failed to get container status \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cd7ac09585d08378c7f88638c0148f8d0a77fde9c006c5ce81fd757776f78c5\": not found" Feb 12 20:51:08.076214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87-rootfs.mount: Deactivated successfully. Feb 12 20:51:08.076708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1-rootfs.mount: Deactivated successfully. Feb 12 20:51:08.077247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741-rootfs.mount: Deactivated successfully. Feb 12 20:51:08.077590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741-shm.mount: Deactivated successfully. Feb 12 20:51:08.078061 systemd[1]: var-lib-kubelet-pods-2520c3fc\x2dab18\x2d42cc\x2d8378\x2de8af564097f6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:51:08.078406 systemd[1]: var-lib-kubelet-pods-2520c3fc\x2dab18\x2d42cc\x2d8378\x2de8af564097f6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:51:08.078852 systemd[1]: var-lib-kubelet-pods-db658cc7\x2d262e\x2d4298\x2da5ff\x2d38d7be249a75-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpd9h.mount: Deactivated successfully. Feb 12 20:51:08.079349 systemd[1]: var-lib-kubelet-pods-2520c3fc\x2dab18\x2d42cc\x2d8378\x2de8af564097f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d74hr4.mount: Deactivated successfully. 
Feb 12 20:51:09.086464 env[1135]: time="2024-02-12T20:51:09.085849691Z" level=info msg="StopContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" with timeout 1 (s)" Feb 12 20:51:09.086464 env[1135]: time="2024-02-12T20:51:09.085989252Z" level=error msg="StopContainer for \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": not found" Feb 12 20:51:09.088682 kubelet[2075]: E0212 20:51:09.087298 2075 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87\": not found" containerID="536b66c07f30ab32f82c87bc9f2bdd088abc83b385108b24e305affab1047b87" Feb 12 20:51:09.089832 env[1135]: time="2024-02-12T20:51:09.089383662Z" level=info msg="StopPodSandbox for \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\"" Feb 12 20:51:09.089832 env[1135]: time="2024-02-12T20:51:09.089587122Z" level=info msg="TearDown network for sandbox \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" successfully" Feb 12 20:51:09.089832 env[1135]: time="2024-02-12T20:51:09.089664017Z" level=info msg="StopPodSandbox for \"d252b5167dda4085a68c984d5bc31260ed707866a24091578550e7f67f4aa741\" returns successfully" Feb 12 20:51:09.090198 env[1135]: time="2024-02-12T20:51:09.089927390Z" level=info msg="StopContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" with timeout 1 (s)" Feb 12 20:51:09.092704 env[1135]: time="2024-02-12T20:51:09.089997903Z" level=error msg="StopContainer for \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": not found" Feb 12 20:51:09.094540 kubelet[2075]: E0212 20:51:09.094480 2075 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000\": not found" containerID="c460895dbbd40e101827103fd50b9f5a6fdfe2f84e47dd7c08dd1599a9b09000" Feb 12 20:51:09.095696 kubelet[2075]: I0212 20:51:09.095642 2075 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2520c3fc-ab18-42cc-8378-e8af564097f6 path="/var/lib/kubelet/pods/2520c3fc-ab18-42cc-8378-e8af564097f6/volumes" Feb 12 20:51:09.097439 env[1135]: time="2024-02-12T20:51:09.097370634Z" level=info msg="StopPodSandbox for \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\"" Feb 12 20:51:09.097636 env[1135]: time="2024-02-12T20:51:09.097539320Z" level=info msg="TearDown network for sandbox \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" successfully" Feb 12 20:51:09.097842 env[1135]: time="2024-02-12T20:51:09.097626814Z" level=info msg="StopPodSandbox for \"a4464419c2caae4f1ab9904c69f8a4f0a19e7c32059e5e3cbbcd5132cf8032d1\" returns successfully" Feb 12 20:51:09.101695 kubelet[2075]: I0212 20:51:09.101614 2075 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=db658cc7-262e-4298-a5ff-38d7be249a75 path="/var/lib/kubelet/pods/db658cc7-262e-4298-a5ff-38d7be249a75/volumes" Feb 12 20:51:09.114147 sshd[3704]: pam_unix(sshd:session): session closed for user core Feb 12 20:51:09.117548 systemd[1]: Started sshd@20-172.24.4.188:22-172.24.4.1:34040.service. Feb 12 20:51:09.121167 systemd[1]: sshd@19-172.24.4.188:22-172.24.4.1:55062.service: Deactivated successfully. Feb 12 20:51:09.126998 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:51:09.130044 systemd-logind[1120]: Session 20 logged out. Waiting for processes to exit. 
Feb 12 20:51:09.134223 systemd-logind[1120]: Removed session 20. Feb 12 20:51:09.182039 kubelet[2075]: E0212 20:51:09.181999 2075 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:51:10.575327 sshd[3875]: Accepted publickey for core from 172.24.4.1 port 34040 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:51:10.581357 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:51:10.594899 systemd-logind[1120]: New session 21 of user core. Feb 12 20:51:10.595969 systemd[1]: Started session-21.scope. Feb 12 20:51:11.785785 kubelet[2075]: I0212 20:51:11.785745 2075 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:51:11.787388 kubelet[2075]: E0212 20:51:11.787373 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="cilium-agent" Feb 12 20:51:11.787486 kubelet[2075]: E0212 20:51:11.787475 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="mount-cgroup" Feb 12 20:51:11.787584 kubelet[2075]: E0212 20:51:11.787573 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="apply-sysctl-overwrites" Feb 12 20:51:11.787694 kubelet[2075]: E0212 20:51:11.787679 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="mount-bpf-fs" Feb 12 20:51:11.787791 kubelet[2075]: E0212 20:51:11.787780 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db658cc7-262e-4298-a5ff-38d7be249a75" containerName="cilium-operator" Feb 12 20:51:11.787884 kubelet[2075]: E0212 20:51:11.787873 2075 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="clean-cilium-state" Feb 12 20:51:11.788000 kubelet[2075]: I0212 20:51:11.787988 2075 memory_manager.go:346] "RemoveStaleState removing state" podUID="2520c3fc-ab18-42cc-8378-e8af564097f6" containerName="cilium-agent" Feb 12 20:51:11.788098 kubelet[2075]: I0212 20:51:11.788088 2075 memory_manager.go:346] "RemoveStaleState removing state" podUID="db658cc7-262e-4298-a5ff-38d7be249a75" containerName="cilium-operator" Feb 12 20:51:11.864600 kubelet[2075]: I0212 20:51:11.864549 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-kernel\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864766 kubelet[2075]: I0212 20:51:11.864638 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlzg\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-kube-api-access-2dlzg\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864766 kubelet[2075]: I0212 20:51:11.864687 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-hostproc\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864766 kubelet[2075]: I0212 20:51:11.864726 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-cgroup\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 
20:51:11.864877 kubelet[2075]: I0212 20:51:11.864794 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-hubble-tls\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864877 kubelet[2075]: I0212 20:51:11.864833 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cni-path\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864877 kubelet[2075]: I0212 20:51:11.864875 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-clustermesh-secrets\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864965 kubelet[2075]: I0212 20:51:11.864917 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-xtables-lock\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.864965 kubelet[2075]: I0212 20:51:11.864959 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-net\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865027 kubelet[2075]: I0212 20:51:11.865003 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-etc-cni-netd\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865061 kubelet[2075]: I0212 20:51:11.865043 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-ipsec-secrets\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865093 kubelet[2075]: I0212 20:51:11.865086 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-bpf-maps\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865146 kubelet[2075]: I0212 20:51:11.865125 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-lib-modules\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865216 kubelet[2075]: I0212 20:51:11.865197 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-run\") pod \"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.865387 kubelet[2075]: I0212 20:51:11.865367 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-config-path\") pod 
\"cilium-htcpt\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") " pod="kube-system/cilium-htcpt" Feb 12 20:51:11.886706 kubelet[2075]: I0212 20:51:11.886672 2075 setters.go:548] "Node became not ready" node="ci-3510-3-2-f-bcfc1a2c45.novalocal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:51:11.885661475 +0000 UTC m=+143.027249239 LastTransitionTime:2024-02-12 20:51:11.885661475 +0000 UTC m=+143.027249239 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:51:11.948395 sshd[3875]: pam_unix(sshd:session): session closed for user core Feb 12 20:51:11.952117 systemd[1]: Started sshd@21-172.24.4.188:22-172.24.4.1:34054.service. Feb 12 20:51:11.956100 systemd[1]: sshd@20-172.24.4.188:22-172.24.4.1:34040.service: Deactivated successfully. Feb 12 20:51:11.957941 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:51:11.959191 systemd-logind[1120]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:51:11.964104 systemd-logind[1120]: Removed session 21. Feb 12 20:51:12.104604 env[1135]: time="2024-02-12T20:51:12.103272492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-htcpt,Uid:1a720a76-e04e-46be-8c44-b076a79d7cf7,Namespace:kube-system,Attempt:0,}" Feb 12 20:51:12.142098 env[1135]: time="2024-02-12T20:51:12.141967976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:51:12.142419 env[1135]: time="2024-02-12T20:51:12.142057222Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:51:12.142419 env[1135]: time="2024-02-12T20:51:12.142092068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:51:12.142649 env[1135]: time="2024-02-12T20:51:12.142476549Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c pid=3902 runtime=io.containerd.runc.v2 Feb 12 20:51:12.215974 env[1135]: time="2024-02-12T20:51:12.215937322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-htcpt,Uid:1a720a76-e04e-46be-8c44-b076a79d7cf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\"" Feb 12 20:51:12.224223 env[1135]: time="2024-02-12T20:51:12.224189973Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:51:12.245696 env[1135]: time="2024-02-12T20:51:12.245641972Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\"" Feb 12 20:51:12.246428 env[1135]: time="2024-02-12T20:51:12.246382449Z" level=info msg="StartContainer for \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\"" Feb 12 20:51:12.316203 env[1135]: time="2024-02-12T20:51:12.316148901Z" level=info msg="StartContainer for \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\" returns successfully" Feb 12 20:51:12.386914 env[1135]: time="2024-02-12T20:51:12.386802356Z" level=info msg="shim disconnected" id=9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04 Feb 12 20:51:12.386914 env[1135]: time="2024-02-12T20:51:12.386854053Z" level=warning msg="cleaning up after shim disconnected" id=9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04 
namespace=k8s.io Feb 12 20:51:12.386914 env[1135]: time="2024-02-12T20:51:12.386865143Z" level=info msg="cleaning up dead shim" Feb 12 20:51:12.394388 env[1135]: time="2024-02-12T20:51:12.394345577Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3985 runtime=io.containerd.runc.v2\n" Feb 12 20:51:12.641010 env[1135]: time="2024-02-12T20:51:12.640818327Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:51:12.671835 env[1135]: time="2024-02-12T20:51:12.671530184Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\"" Feb 12 20:51:12.678370 env[1135]: time="2024-02-12T20:51:12.678266003Z" level=info msg="StartContainer for \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\"" Feb 12 20:51:12.773596 env[1135]: time="2024-02-12T20:51:12.773559638Z" level=info msg="StartContainer for \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\" returns successfully" Feb 12 20:51:12.804785 env[1135]: time="2024-02-12T20:51:12.804728521Z" level=info msg="shim disconnected" id=7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164 Feb 12 20:51:12.805033 env[1135]: time="2024-02-12T20:51:12.805001503Z" level=warning msg="cleaning up after shim disconnected" id=7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164 namespace=k8s.io Feb 12 20:51:12.805115 env[1135]: time="2024-02-12T20:51:12.805098535Z" level=info msg="cleaning up dead shim" Feb 12 20:51:12.811996 env[1135]: time="2024-02-12T20:51:12.811969366Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:12Z\" level=info 
msg=\"starting signal loop\" namespace=k8s.io pid=4047 runtime=io.containerd.runc.v2\n" Feb 12 20:51:13.501897 sshd[3887]: Accepted publickey for core from 172.24.4.1 port 34054 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4 Feb 12 20:51:13.504617 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:51:13.515866 systemd-logind[1120]: New session 22 of user core. Feb 12 20:51:13.518673 systemd[1]: Started session-22.scope. Feb 12 20:51:13.662872 env[1135]: time="2024-02-12T20:51:13.656049232Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:51:13.700431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959071956.mount: Deactivated successfully. Feb 12 20:51:13.708032 env[1135]: time="2024-02-12T20:51:13.707959603Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\"" Feb 12 20:51:13.712226 env[1135]: time="2024-02-12T20:51:13.712185651Z" level=info msg="StartContainer for \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\"" Feb 12 20:51:13.791898 env[1135]: time="2024-02-12T20:51:13.791804853Z" level=info msg="StartContainer for \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\" returns successfully" Feb 12 20:51:13.825647 env[1135]: time="2024-02-12T20:51:13.825033205Z" level=info msg="shim disconnected" id=7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f Feb 12 20:51:13.825647 env[1135]: time="2024-02-12T20:51:13.825080684Z" level=warning msg="cleaning up after shim disconnected" id=7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f namespace=k8s.io Feb 12 20:51:13.825647 env[1135]: 
time="2024-02-12T20:51:13.825092445Z" level=info msg="cleaning up dead shim" Feb 12 20:51:13.836249 env[1135]: time="2024-02-12T20:51:13.835056452Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4109 runtime=io.containerd.runc.v2\n" Feb 12 20:51:13.983309 systemd[1]: run-containerd-runc-k8s.io-7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f-runc.gE72uH.mount: Deactivated successfully. Feb 12 20:51:13.983635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f-rootfs.mount: Deactivated successfully. Feb 12 20:51:14.093931 sshd[3887]: pam_unix(sshd:session): session closed for user core Feb 12 20:51:14.102461 systemd[1]: Started sshd@22-172.24.4.188:22-172.24.4.1:34066.service. Feb 12 20:51:14.104620 systemd[1]: sshd@21-172.24.4.188:22-172.24.4.1:34054.service: Deactivated successfully. Feb 12 20:51:14.114282 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:51:14.117445 systemd-logind[1120]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:51:14.120283 systemd-logind[1120]: Removed session 22. 
Feb 12 20:51:14.183714 kubelet[2075]: E0212 20:51:14.183677 2075 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:51:14.667334 env[1135]: time="2024-02-12T20:51:14.667206171Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:51:14.709463 env[1135]: time="2024-02-12T20:51:14.709321402Z" level=info msg="CreateContainer within sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\"" Feb 12 20:51:14.710906 env[1135]: time="2024-02-12T20:51:14.710831902Z" level=info msg="StartContainer for \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\"" Feb 12 20:51:14.779072 env[1135]: time="2024-02-12T20:51:14.779032222Z" level=info msg="StartContainer for \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\" returns successfully" Feb 12 20:51:14.805558 env[1135]: time="2024-02-12T20:51:14.805508395Z" level=info msg="shim disconnected" id=1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570 Feb 12 20:51:14.805804 env[1135]: time="2024-02-12T20:51:14.805782539Z" level=warning msg="cleaning up after shim disconnected" id=1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570 namespace=k8s.io Feb 12 20:51:14.805901 env[1135]: time="2024-02-12T20:51:14.805884300Z" level=info msg="cleaning up dead shim" Feb 12 20:51:14.814084 env[1135]: time="2024-02-12T20:51:14.814051781Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4176 runtime=io.containerd.runc.v2\n" Feb 12 20:51:14.984435 systemd[1]: 
run-containerd-runc-k8s.io-1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570-runc.RuZICO.mount: Deactivated successfully.
Feb 12 20:51:14.984892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570-rootfs.mount: Deactivated successfully.
Feb 12 20:51:15.478799 sshd[4129]: Accepted publickey for core from 172.24.4.1 port 34066 ssh2: RSA SHA256:ssFkN0BQQLPS6axJWzE8mlMTpPrpsisU+V19L5AVtX4
Feb 12 20:51:15.480913 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:51:15.495717 systemd-logind[1120]: New session 23 of user core.
Feb 12 20:51:15.496632 systemd[1]: Started session-23.scope.
Feb 12 20:51:15.666459 env[1135]: time="2024-02-12T20:51:15.660240985Z" level=info msg="StopPodSandbox for \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\""
Feb 12 20:51:15.666459 env[1135]: time="2024-02-12T20:51:15.660384233Z" level=info msg="Container to stop \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:15.666459 env[1135]: time="2024-02-12T20:51:15.660424879Z" level=info msg="Container to stop \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:15.666459 env[1135]: time="2024-02-12T20:51:15.660463873Z" level=info msg="Container to stop \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:15.666459 env[1135]: time="2024-02-12T20:51:15.660496293Z" level=info msg="Container to stop \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:51:15.667108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c-shm.mount: Deactivated successfully.
Feb 12 20:51:15.749844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c-rootfs.mount: Deactivated successfully.
Feb 12 20:51:15.757931 env[1135]: time="2024-02-12T20:51:15.757874059Z" level=info msg="shim disconnected" id=8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c
Feb 12 20:51:15.758242 env[1135]: time="2024-02-12T20:51:15.757930424Z" level=warning msg="cleaning up after shim disconnected" id=8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c namespace=k8s.io
Feb 12 20:51:15.758242 env[1135]: time="2024-02-12T20:51:15.757941686Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:15.765988 env[1135]: time="2024-02-12T20:51:15.765947473Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4212 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:15.766400 env[1135]: time="2024-02-12T20:51:15.766372841Z" level=info msg="TearDown network for sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" successfully"
Feb 12 20:51:15.766486 env[1135]: time="2024-02-12T20:51:15.766467508Z" level=info msg="StopPodSandbox for \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" returns successfully"
Feb 12 20:51:15.916248 kubelet[2075]: I0212 20:51:15.916199 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-bpf-maps\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917100 kubelet[2075]: I0212 20:51:15.916974 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-hubble-tls\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917100 kubelet[2075]: I0212 20:51:15.917060 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-config-path\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917260 kubelet[2075]: I0212 20:51:15.917115 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cni-path\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917260 kubelet[2075]: I0212 20:51:15.917167 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-run\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917260 kubelet[2075]: I0212 20:51:15.917218 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-etc-cni-netd\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917457 kubelet[2075]: I0212 20:51:15.917304 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dlzg\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-kube-api-access-2dlzg\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917457 kubelet[2075]: I0212 20:51:15.917361 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-xtables-lock\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917457 kubelet[2075]: I0212 20:51:15.917414 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-net\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917648 kubelet[2075]: I0212 20:51:15.917467 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-kernel\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917648 kubelet[2075]: I0212 20:51:15.917520 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-lib-modules\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917648 kubelet[2075]: I0212 20:51:15.917578 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-clustermesh-secrets\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917648 kubelet[2075]: I0212 20:51:15.917630 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-hostproc\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917991 kubelet[2075]: I0212 20:51:15.917681 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-cgroup\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.917991 kubelet[2075]: I0212 20:51:15.917771 2075 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-ipsec-secrets\") pod \"1a720a76-e04e-46be-8c44-b076a79d7cf7\" (UID: \"1a720a76-e04e-46be-8c44-b076a79d7cf7\") "
Feb 12 20:51:15.918717 kubelet[2075]: I0212 20:51:15.918624 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.918924 kubelet[2075]: I0212 20:51:15.918730 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.918924 kubelet[2075]: I0212 20:51:15.918824 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.918924 kubelet[2075]: I0212 20:51:15.918866 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.919284 kubelet[2075]: I0212 20:51:15.919222 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-hostproc" (OuterVolumeSpecName: "hostproc") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.919394 kubelet[2075]: I0212 20:51:15.919290 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.919953 kubelet[2075]: W0212 20:51:15.919834 2075 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1a720a76-e04e-46be-8c44-b076a79d7cf7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:51:15.920146 kubelet[2075]: I0212 20:51:15.920101 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cni-path" (OuterVolumeSpecName: "cni-path") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.920348 kubelet[2075]: I0212 20:51:15.920312 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.921156 kubelet[2075]: I0212 20:51:15.920513 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.921330 kubelet[2075]: I0212 20:51:15.916362 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:51:15.929226 systemd[1]: var-lib-kubelet-pods-1a720a76\x2de04e\x2d46be\x2d8c44\x2db076a79d7cf7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2dlzg.mount: Deactivated successfully.
Feb 12 20:51:15.934293 kubelet[2075]: I0212 20:51:15.933388 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:51:15.935418 kubelet[2075]: I0212 20:51:15.935333 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-kube-api-access-2dlzg" (OuterVolumeSpecName: "kube-api-access-2dlzg") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "kube-api-access-2dlzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:51:15.943793 systemd[1]: var-lib-kubelet-pods-1a720a76\x2de04e\x2d46be\x2d8c44\x2db076a79d7cf7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:51:15.946336 kubelet[2075]: I0212 20:51:15.946160 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:51:15.946484 kubelet[2075]: I0212 20:51:15.946434 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:51:15.951192 kubelet[2075]: I0212 20:51:15.951135 2075 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1a720a76-e04e-46be-8c44-b076a79d7cf7" (UID: "1a720a76-e04e-46be-8c44-b076a79d7cf7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:51:15.987228 systemd[1]: var-lib-kubelet-pods-1a720a76\x2de04e\x2d46be\x2d8c44\x2db076a79d7cf7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 20:51:15.988551 systemd[1]: var-lib-kubelet-pods-1a720a76\x2de04e\x2d46be\x2d8c44\x2db076a79d7cf7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:51:16.018731 kubelet[2075]: I0212 20:51:16.018468 2075 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-clustermesh-secrets\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.018731 kubelet[2075]: I0212 20:51:16.018568 2075 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-xtables-lock\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.018731 kubelet[2075]: I0212 20:51:16.018605 2075 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-net\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.018731 kubelet[2075]: I0212 20:51:16.018652 2075 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-host-proc-sys-kernel\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.018731 kubelet[2075]: I0212 20:51:16.018694 2075 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-lib-modules\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.018784 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-ipsec-secrets\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.018841 2075 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-hostproc\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.018872 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-cgroup\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.018945 2075 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-bpf-maps\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.018977 2075 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-hubble-tls\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.019006 2075 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cni-path\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.019045 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-run\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019264 kubelet[2075]: I0212 20:51:16.019078 2075 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a720a76-e04e-46be-8c44-b076a79d7cf7-cilium-config-path\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019857 kubelet[2075]: I0212 20:51:16.019107 2075 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a720a76-e04e-46be-8c44-b076a79d7cf7-etc-cni-netd\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.019857 kubelet[2075]: I0212 20:51:16.019140 2075 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2dlzg\" (UniqueName: \"kubernetes.io/projected/1a720a76-e04e-46be-8c44-b076a79d7cf7-kube-api-access-2dlzg\") on node \"ci-3510-3-2-f-bcfc1a2c45.novalocal\" DevicePath \"\""
Feb 12 20:51:16.667179 kubelet[2075]: I0212 20:51:16.667107 2075 scope.go:115] "RemoveContainer" containerID="1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570"
Feb 12 20:51:16.678633 env[1135]: time="2024-02-12T20:51:16.678113248Z" level=info msg="RemoveContainer for \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\""
Feb 12 20:51:16.697838 env[1135]: time="2024-02-12T20:51:16.697015391Z" level=info msg="RemoveContainer for \"1ac854a51a07bbd6cfeb703364e77db7bda64005c25ee7b8850fd7860b5ce570\" returns successfully"
Feb 12 20:51:16.698380 kubelet[2075]: I0212 20:51:16.698342 2075 scope.go:115] "RemoveContainer" containerID="7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f"
Feb 12 20:51:16.704551 env[1135]: time="2024-02-12T20:51:16.704205070Z" level=info msg="RemoveContainer for \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\""
Feb 12 20:51:16.714674 env[1135]: time="2024-02-12T20:51:16.714576641Z" level=info msg="RemoveContainer for \"7667a84ab356598c51cca1685aeae2abdce743405f89d786c763a9eb291e473f\" returns successfully"
Feb 12 20:51:16.715536 kubelet[2075]: I0212 20:51:16.715472 2075 scope.go:115] "RemoveContainer" containerID="7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164"
Feb 12 20:51:16.718562 env[1135]: time="2024-02-12T20:51:16.718508978Z" level=info msg="RemoveContainer for \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\""
Feb 12 20:51:16.725289 env[1135]: time="2024-02-12T20:51:16.725206937Z" level=info msg="RemoveContainer for \"7c5ff54951467ea4fc790596bd27c7470c1faeba04aef20d13ac9743da28b164\" returns successfully"
Feb 12 20:51:16.725867 kubelet[2075]: I0212 20:51:16.725835 2075 scope.go:115] "RemoveContainer" containerID="9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04"
Feb 12 20:51:16.728370 env[1135]: time="2024-02-12T20:51:16.728318507Z" level=info msg="RemoveContainer for \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\""
Feb 12 20:51:16.733975 kubelet[2075]: I0212 20:51:16.733958 2075 topology_manager.go:210] "Topology Admit Handler"
Feb 12 20:51:16.734178 kubelet[2075]: E0212 20:51:16.734147 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a720a76-e04e-46be-8c44-b076a79d7cf7" containerName="apply-sysctl-overwrites"
Feb 12 20:51:16.734270 kubelet[2075]: E0212 20:51:16.734260 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a720a76-e04e-46be-8c44-b076a79d7cf7" containerName="mount-bpf-fs"
Feb 12 20:51:16.734360 kubelet[2075]: E0212 20:51:16.734351 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a720a76-e04e-46be-8c44-b076a79d7cf7" containerName="mount-cgroup"
Feb 12 20:51:16.734450 kubelet[2075]: E0212 20:51:16.734441 2075 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a720a76-e04e-46be-8c44-b076a79d7cf7" containerName="clean-cilium-state"
Feb 12 20:51:16.734593 kubelet[2075]: I0212 20:51:16.734561 2075 memory_manager.go:346] "RemoveStaleState removing state" podUID="1a720a76-e04e-46be-8c44-b076a79d7cf7" containerName="clean-cilium-state"
Feb 12 20:51:16.744722 env[1135]: time="2024-02-12T20:51:16.744664140Z" level=info msg="RemoveContainer for \"9c17be10989f66f092152905ddcfda4218837f26362d3b16b69d3c8ece773d04\" returns successfully"
Feb 12 20:51:16.825592 kubelet[2075]: I0212 20:51:16.825541 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-etc-cni-netd\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825795 kubelet[2075]: I0212 20:51:16.825617 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-xtables-lock\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825795 kubelet[2075]: I0212 20:51:16.825672 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trd2h\" (UniqueName: \"kubernetes.io/projected/c00c5231-579e-46ce-ad1c-73f4758eb74f-kube-api-access-trd2h\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825795 kubelet[2075]: I0212 20:51:16.825723 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-bpf-maps\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825795 kubelet[2075]: I0212 20:51:16.825796 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-cni-path\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825950 kubelet[2075]: I0212 20:51:16.825861 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c00c5231-579e-46ce-ad1c-73f4758eb74f-cilium-config-path\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825950 kubelet[2075]: I0212 20:51:16.825907 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-host-proc-sys-kernel\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.825950 kubelet[2075]: I0212 20:51:16.825950 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-hostproc\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826041 kubelet[2075]: I0212 20:51:16.825993 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-host-proc-sys-net\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826041 kubelet[2075]: I0212 20:51:16.826036 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-lib-modules\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826100 kubelet[2075]: I0212 20:51:16.826082 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-cilium-run\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826133 kubelet[2075]: I0212 20:51:16.826125 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c00c5231-579e-46ce-ad1c-73f4758eb74f-cilium-ipsec-secrets\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826212 kubelet[2075]: I0212 20:51:16.826169 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c00c5231-579e-46ce-ad1c-73f4758eb74f-cilium-cgroup\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826441 kubelet[2075]: I0212 20:51:16.826421 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c00c5231-579e-46ce-ad1c-73f4758eb74f-clustermesh-secrets\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:16.826702 kubelet[2075]: I0212 20:51:16.826678 2075 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c00c5231-579e-46ce-ad1c-73f4758eb74f-hubble-tls\") pod \"cilium-gpvq8\" (UID: \"c00c5231-579e-46ce-ad1c-73f4758eb74f\") " pod="kube-system/cilium-gpvq8"
Feb 12 20:51:17.038910 env[1135]: time="2024-02-12T20:51:17.038860575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpvq8,Uid:c00c5231-579e-46ce-ad1c-73f4758eb74f,Namespace:kube-system,Attempt:0,}"
Feb 12 20:51:17.058148 env[1135]: time="2024-02-12T20:51:17.057912850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:51:17.058148 env[1135]: time="2024-02-12T20:51:17.057967112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:51:17.058148 env[1135]: time="2024-02-12T20:51:17.057990636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:51:17.058659 env[1135]: time="2024-02-12T20:51:17.058562427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57 pid=4244 runtime=io.containerd.runc.v2
Feb 12 20:51:17.085663 env[1135]: time="2024-02-12T20:51:17.085606074Z" level=info msg="StopPodSandbox for \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\""
Feb 12 20:51:17.086296 env[1135]: time="2024-02-12T20:51:17.086189186Z" level=info msg="TearDown network for sandbox \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" successfully"
Feb 12 20:51:17.086478 env[1135]: time="2024-02-12T20:51:17.086438543Z" level=info msg="StopPodSandbox for \"8a73141995c3964d7bcd818ee2df5451fc16985f80785dac250ee89d00a3735c\" returns successfully"
Feb 12 20:51:17.087781 kubelet[2075]: I0212 20:51:17.087723 2075 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1a720a76-e04e-46be-8c44-b076a79d7cf7 path="/var/lib/kubelet/pods/1a720a76-e04e-46be-8c44-b076a79d7cf7/volumes"
Feb 12 20:51:17.117937 env[1135]: time="2024-02-12T20:51:17.117887603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gpvq8,Uid:c00c5231-579e-46ce-ad1c-73f4758eb74f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\""
Feb 12 20:51:17.122213 env[1135]: time="2024-02-12T20:51:17.122175246Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:51:17.138149 env[1135]: time="2024-02-12T20:51:17.138105902Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dfefd1b59df1d031ad7e96a7ef968612b6905890d7f0d3fcf5539b30f31d2301\""
Feb 12 20:51:17.139959 env[1135]: time="2024-02-12T20:51:17.139937544Z" level=info msg="StartContainer for \"dfefd1b59df1d031ad7e96a7ef968612b6905890d7f0d3fcf5539b30f31d2301\""
Feb 12 20:51:17.194286 env[1135]: time="2024-02-12T20:51:17.194241773Z" level=info msg="StartContainer for \"dfefd1b59df1d031ad7e96a7ef968612b6905890d7f0d3fcf5539b30f31d2301\" returns successfully"
Feb 12 20:51:17.222246 env[1135]: time="2024-02-12T20:51:17.222183642Z" level=info msg="shim disconnected" id=dfefd1b59df1d031ad7e96a7ef968612b6905890d7f0d3fcf5539b30f31d2301
Feb 12 20:51:17.222246 env[1135]: time="2024-02-12T20:51:17.222232324Z" level=warning msg="cleaning up after shim disconnected" id=dfefd1b59df1d031ad7e96a7ef968612b6905890d7f0d3fcf5539b30f31d2301 namespace=k8s.io
Feb 12 20:51:17.222246 env[1135]: time="2024-02-12T20:51:17.222243314Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:17.229935 env[1135]: time="2024-02-12T20:51:17.229893077Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4331 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:17.697650 env[1135]: time="2024-02-12T20:51:17.697578508Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:51:17.757791 env[1135]: time="2024-02-12T20:51:17.757710786Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e5d675ade6fb08fb8d820242ad0acab3f31a2754df768fe7d743bbe4ae5557cf\""
Feb 12 20:51:17.758842 env[1135]: time="2024-02-12T20:51:17.758820866Z" level=info msg="StartContainer for \"e5d675ade6fb08fb8d820242ad0acab3f31a2754df768fe7d743bbe4ae5557cf\""
Feb 12 20:51:17.812537 env[1135]: time="2024-02-12T20:51:17.812498101Z" level=info msg="StartContainer for \"e5d675ade6fb08fb8d820242ad0acab3f31a2754df768fe7d743bbe4ae5557cf\" returns successfully"
Feb 12 20:51:17.839128 env[1135]: time="2024-02-12T20:51:17.839082768Z" level=info msg="shim disconnected" id=e5d675ade6fb08fb8d820242ad0acab3f31a2754df768fe7d743bbe4ae5557cf
Feb 12 20:51:17.839354 env[1135]: time="2024-02-12T20:51:17.839334569Z" level=warning msg="cleaning up after shim disconnected" id=e5d675ade6fb08fb8d820242ad0acab3f31a2754df768fe7d743bbe4ae5557cf namespace=k8s.io
Feb 12 20:51:17.839427 env[1135]: time="2024-02-12T20:51:17.839410902Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:17.847039 env[1135]: time="2024-02-12T20:51:17.847015420Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4393 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:17.986432 systemd[1]: run-containerd-runc-k8s.io-6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57-runc.nDCFEE.mount: Deactivated successfully.
Feb 12 20:51:18.693959 env[1135]: time="2024-02-12T20:51:18.693499354Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:51:18.732917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260203672.mount: Deactivated successfully.
Feb 12 20:51:18.740478 env[1135]: time="2024-02-12T20:51:18.740377762Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61\""
Feb 12 20:51:18.744786 env[1135]: time="2024-02-12T20:51:18.743646786Z" level=info msg="StartContainer for \"30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61\""
Feb 12 20:51:18.828542 env[1135]: time="2024-02-12T20:51:18.828501774Z" level=info msg="StartContainer for \"30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61\" returns successfully"
Feb 12 20:51:18.854618 env[1135]: time="2024-02-12T20:51:18.854554354Z" level=info msg="shim disconnected" id=30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61
Feb 12 20:51:18.854618 env[1135]: time="2024-02-12T20:51:18.854600510Z" level=warning msg="cleaning up after shim disconnected" id=30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61 namespace=k8s.io
Feb 12 20:51:18.854618 env[1135]: time="2024-02-12T20:51:18.854613484Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:18.862764 env[1135]: time="2024-02-12T20:51:18.862695486Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4451 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:18.986778 systemd[1]: run-containerd-runc-k8s.io-30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61-runc.qibKzV.mount: Deactivated successfully.
Feb 12 20:51:18.987139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30daf68b2844fe6c53580d43b98ea88d0b7ff094c5c4d76c9e663d027bffaf61-rootfs.mount: Deactivated successfully.
Feb 12 20:51:19.185781 kubelet[2075]: E0212 20:51:19.185628 2075 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:51:19.705628 env[1135]: time="2024-02-12T20:51:19.704422851Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:51:19.738160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886410221.mount: Deactivated successfully.
Feb 12 20:51:19.755216 env[1135]: time="2024-02-12T20:51:19.753136046Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950\""
Feb 12 20:51:19.756715 env[1135]: time="2024-02-12T20:51:19.756626135Z" level=info msg="StartContainer for \"37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950\""
Feb 12 20:51:19.818635 env[1135]: time="2024-02-12T20:51:19.818586280Z" level=info msg="StartContainer for \"37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950\" returns successfully"
Feb 12 20:51:19.841399 env[1135]: time="2024-02-12T20:51:19.841296559Z" level=error msg="collecting metrics for 37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950" error="cgroups: cgroup deleted: unknown"
Feb 12 20:51:19.845446 env[1135]: time="2024-02-12T20:51:19.843810228Z" level=info msg="shim disconnected" id=37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950
Feb 12 20:51:19.845446 env[1135]: time="2024-02-12T20:51:19.843849452Z" level=warning msg="cleaning up after shim disconnected" id=37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950 namespace=k8s.io
Feb 12 20:51:19.845446 env[1135]: time="2024-02-12T20:51:19.843859110Z" level=info msg="cleaning up dead shim"
Feb 12 20:51:19.853216 env[1135]: time="2024-02-12T20:51:19.853166457Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:51:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4508 runtime=io.containerd.runc.v2\n"
Feb 12 20:51:19.986722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37ab085572101062e2f420281b4e3074180dc4b398f134fe4fdea34f2387f950-rootfs.mount: Deactivated successfully.
Feb 12 20:51:20.735412 env[1135]: time="2024-02-12T20:51:20.735286179Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:51:20.781389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1009501894.mount: Deactivated successfully.
Feb 12 20:51:20.794838 env[1135]: time="2024-02-12T20:51:20.794794662Z" level=info msg="CreateContainer within sandbox \"6a43ebd14b7e01e4d9360f71ac1bffc36ae1ed288ab0d24093df54b0aa331a57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8\"" Feb 12 20:51:20.795370 env[1135]: time="2024-02-12T20:51:20.795343871Z" level=info msg="StartContainer for \"b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8\"" Feb 12 20:51:20.854805 env[1135]: time="2024-02-12T20:51:20.854756163Z" level=info msg="StartContainer for \"b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8\" returns successfully" Feb 12 20:51:21.799786 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:51:21.844884 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 12 20:51:22.581863 systemd[1]: run-containerd-runc-k8s.io-b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8-runc.oH4KaZ.mount: Deactivated successfully. Feb 12 20:51:24.842322 systemd[1]: run-containerd-runc-k8s.io-b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8-runc.BugYsW.mount: Deactivated successfully. 
Feb 12 20:51:25.053403 systemd-networkd[1032]: lxc_health: Link UP Feb 12 20:51:25.065408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:51:25.064922 systemd-networkd[1032]: lxc_health: Gained carrier Feb 12 20:51:25.103925 kubelet[2075]: I0212 20:51:25.103836 2075 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gpvq8" podStartSLOduration=9.100272839 pod.CreationTimestamp="2024-02-12 20:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:51:21.767959213 +0000 UTC m=+152.909546997" watchObservedRunningTime="2024-02-12 20:51:25.100272839 +0000 UTC m=+156.241860613" Feb 12 20:51:26.356125 systemd-networkd[1032]: lxc_health: Gained IPv6LL Feb 12 20:51:27.134337 systemd[1]: run-containerd-runc-k8s.io-b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8-runc.SYCkO4.mount: Deactivated successfully. Feb 12 20:51:29.345132 systemd[1]: run-containerd-runc-k8s.io-b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8-runc.Kk3GBQ.mount: Deactivated successfully. Feb 12 20:51:31.539803 systemd[1]: run-containerd-runc-k8s.io-b9d59d590c18488e81d82894022b9aa41bad3491acc3f735ffdbf243de32c3d8-runc.ID16iP.mount: Deactivated successfully. Feb 12 20:51:31.959724 sshd[4129]: pam_unix(sshd:session): session closed for user core Feb 12 20:51:31.965469 systemd[1]: sshd@22-172.24.4.188:22-172.24.4.1:34066.service: Deactivated successfully. Feb 12 20:51:31.968431 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:51:31.969570 systemd-logind[1120]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:51:31.972152 systemd-logind[1120]: Removed session 23.