Feb 9 19:12:33.940245 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:12:33.940293 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:12:33.940320 kernel: BIOS-provided physical RAM map:
Feb 9 19:12:33.940336 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 19:12:33.940397 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 19:12:33.940416 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 19:12:33.940435 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 9 19:12:33.940451 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 9 19:12:33.940472 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 19:12:33.940488 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 19:12:33.940504 kernel: NX (Execute Disable) protection: active
Feb 9 19:12:33.940519 kernel: SMBIOS 2.8 present.
Feb 9 19:12:33.940535 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 9 19:12:33.940551 kernel: Hypervisor detected: KVM
Feb 9 19:12:33.940571 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:12:33.940592 kernel: kvm-clock: cpu 0, msr 45faa001, primary cpu clock
Feb 9 19:12:33.940609 kernel: kvm-clock: using sched offset of 7251518620 cycles
Feb 9 19:12:33.940627 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:12:33.940645 kernel: tsc: Detected 1996.249 MHz processor
Feb 9 19:12:33.940663 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:12:33.940681 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:12:33.940699 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 9 19:12:33.940717 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:12:33.940738 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:12:33.940755 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 9 19:12:33.940773 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:12:33.940791 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:12:33.940808 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:12:33.940825 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 9 19:12:33.940843 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:12:33.940860 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:12:33.940878 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 9 19:12:33.940898 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 9 19:12:33.940915 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 9 19:12:33.940933 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 9 19:12:33.940950 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 9 19:12:33.940967 kernel: No NUMA configuration found
Feb 9 19:12:33.940984 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 9 19:12:33.941001 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 9 19:12:33.941019 kernel: Zone ranges:
Feb 9 19:12:33.941046 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:12:33.941064 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 9 19:12:33.941082 kernel: Normal empty
Feb 9 19:12:33.941101 kernel: Movable zone start for each node
Feb 9 19:12:33.941119 kernel: Early memory node ranges
Feb 9 19:12:33.941137 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 19:12:33.941159 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 9 19:12:33.941177 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 9 19:12:33.941195 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:12:33.941213 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 19:12:33.941232 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 9 19:12:33.941250 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 19:12:33.941268 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:12:33.941286 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:12:33.941304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:12:33.941325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:12:33.941343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:12:33.941467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:12:33.941487 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:12:33.941505 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:12:33.941523 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:12:33.941542 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 9 19:12:33.941560 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:12:33.941578 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:12:33.941597 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:12:33.941621 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:12:33.941639 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:12:33.941657 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:12:33.941674 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 9 19:12:33.941693 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 9 19:12:33.941711 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 9 19:12:33.941729 kernel: Policy zone: DMA32
Feb 9 19:12:33.941750 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:12:33.941773 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:12:33.941792 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:12:33.941810 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:12:33.941828 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:12:33.941847 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 9 19:12:33.941866 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:12:33.941884 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:12:33.941901 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:12:33.941920 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:12:33.941939 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:12:33.941957 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:12:33.941974 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:12:33.941991 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:12:33.942008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:12:33.942025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:12:33.942042 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:12:33.942059 kernel: Console: colour VGA+ 80x25
Feb 9 19:12:33.942076 kernel: printk: console [tty0] enabled
Feb 9 19:12:33.942096 kernel: printk: console [ttyS0] enabled
Feb 9 19:12:33.942113 kernel: ACPI: Core revision 20210730
Feb 9 19:12:33.942130 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:12:33.942147 kernel: x2apic enabled
Feb 9 19:12:33.942163 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:12:33.942181 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:12:33.942198 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:12:33.942215 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 9 19:12:33.942232 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 9 19:12:33.942252 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 9 19:12:33.942270 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:12:33.942287 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:12:33.942304 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:12:33.942321 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:12:33.942338 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:12:33.943445 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 9 19:12:33.943469 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:12:33.943514 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:12:33.943557 kernel: LSM: Security Framework initializing
Feb 9 19:12:33.943574 kernel: SELinux: Initializing.
Feb 9 19:12:33.943592 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:12:33.943609 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:12:33.943627 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 9 19:12:33.943644 kernel: Performance Events: AMD PMU driver.
Feb 9 19:12:33.943661 kernel: ... version: 0
Feb 9 19:12:33.943678 kernel: ... bit width: 48
Feb 9 19:12:33.943695 kernel: ... generic registers: 4
Feb 9 19:12:33.943725 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:12:33.943743 kernel: ... max period: 00007fffffffffff
Feb 9 19:12:33.943760 kernel: ... fixed-purpose events: 0
Feb 9 19:12:33.943781 kernel: ... event mask: 000000000000000f
Feb 9 19:12:33.943798 kernel: signal: max sigframe size: 1440
Feb 9 19:12:33.943816 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:12:33.943833 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:12:33.943851 kernel: x86: Booting SMP configuration:
Feb 9 19:12:33.943872 kernel: .... node #0, CPUs: #1
Feb 9 19:12:33.943890 kernel: kvm-clock: cpu 1, msr 45faa041, secondary cpu clock
Feb 9 19:12:33.943907 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 9 19:12:33.943925 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:12:33.943943 kernel: smpboot: Max logical packages: 2
Feb 9 19:12:33.943961 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 9 19:12:33.943978 kernel: devtmpfs: initialized
Feb 9 19:12:33.943996 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:12:33.944014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:12:33.944035 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:12:33.944053 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:12:33.944071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:12:33.944089 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:12:33.944106 kernel: audit: type=2000 audit(1707505953.562:1): state=initialized audit_enabled=0 res=1
Feb 9 19:12:33.944124 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:12:33.944142 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:12:33.944160 kernel: cpuidle: using governor menu
Feb 9 19:12:33.944177 kernel: ACPI: bus type PCI registered
Feb 9 19:12:33.944198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:12:33.944216 kernel: dca service started, version 1.12.1
Feb 9 19:12:33.944234 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:12:33.944252 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:12:33.944270 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:12:33.944287 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:12:33.944305 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:12:33.944323 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:12:33.944340 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:12:33.944387 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:12:33.944406 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:12:33.944423 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:12:33.944441 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:12:33.944459 kernel: ACPI: Interpreter enabled
Feb 9 19:12:33.944476 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:12:33.944494 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:12:33.944513 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:12:33.944531 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:12:33.944552 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:12:33.944887 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:12:33.945070 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:12:33.945096 kernel: acpiphp: Slot [3] registered
Feb 9 19:12:33.945113 kernel: acpiphp: Slot [4] registered
Feb 9 19:12:33.945129 kernel: acpiphp: Slot [5] registered
Feb 9 19:12:33.945144 kernel: acpiphp: Slot [6] registered
Feb 9 19:12:33.945166 kernel: acpiphp: Slot [7] registered
Feb 9 19:12:33.945181 kernel: acpiphp: Slot [8] registered
Feb 9 19:12:33.945197 kernel: acpiphp: Slot [9] registered
Feb 9 19:12:33.945213 kernel: acpiphp: Slot [10] registered
Feb 9 19:12:33.945229 kernel: acpiphp: Slot [11] registered
Feb 9 19:12:33.945245 kernel: acpiphp: Slot [12] registered
Feb 9 19:12:33.945261 kernel: acpiphp: Slot [13] registered
Feb 9 19:12:33.945277 kernel: acpiphp: Slot [14] registered
Feb 9 19:12:33.945292 kernel: acpiphp: Slot [15] registered
Feb 9 19:12:33.945308 kernel: acpiphp: Slot [16] registered
Feb 9 19:12:33.945327 kernel: acpiphp: Slot [17] registered
Feb 9 19:12:33.945342 kernel: acpiphp: Slot [18] registered
Feb 9 19:12:33.945383 kernel: acpiphp: Slot [19] registered
Feb 9 19:12:33.945400 kernel: acpiphp: Slot [20] registered
Feb 9 19:12:33.945415 kernel: acpiphp: Slot [21] registered
Feb 9 19:12:33.945431 kernel: acpiphp: Slot [22] registered
Feb 9 19:12:33.945446 kernel: acpiphp: Slot [23] registered
Feb 9 19:12:33.945462 kernel: acpiphp: Slot [24] registered
Feb 9 19:12:33.945477 kernel: acpiphp: Slot [25] registered
Feb 9 19:12:33.945506 kernel: acpiphp: Slot [26] registered
Feb 9 19:12:33.945523 kernel: acpiphp: Slot [27] registered
Feb 9 19:12:33.945538 kernel: acpiphp: Slot [28] registered
Feb 9 19:12:33.945554 kernel: acpiphp: Slot [29] registered
Feb 9 19:12:33.945570 kernel: acpiphp: Slot [30] registered
Feb 9 19:12:33.945585 kernel: acpiphp: Slot [31] registered
Feb 9 19:12:33.945601 kernel: PCI host bridge to bus 0000:00
Feb 9 19:12:33.945786 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:12:33.945947 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:12:33.946137 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:12:33.946319 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 19:12:33.946523 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 19:12:33.946670 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:12:33.946852 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:12:33.947041 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:12:33.947240 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:12:33.951441 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 9 19:12:33.951615 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:12:33.951715 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:12:33.951804 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:12:33.951893 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:12:33.952014 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:12:33.952108 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 19:12:33.952198 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 19:12:33.952297 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 9 19:12:33.954464 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 9 19:12:33.954567 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 9 19:12:33.954656 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 9 19:12:33.954750 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 9 19:12:33.954836 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:12:33.954933 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:12:33.955014 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 9 19:12:33.955095 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 9 19:12:33.955176 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 9 19:12:33.955256 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 9 19:12:33.955379 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:12:33.955465 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:12:33.955565 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 9 19:12:33.955654 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 9 19:12:33.955748 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 9 19:12:33.955838 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 9 19:12:33.955926 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 9 19:12:33.956027 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:12:33.956117 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 9 19:12:33.956206 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 9 19:12:33.956219 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:12:33.956229 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:12:33.956238 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:12:33.956247 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:12:33.956256 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:12:33.956267 kernel: iommu: Default domain type: Translated
Feb 9 19:12:33.956276 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:12:33.957439 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:12:33.957540 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:12:33.957628 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:12:33.957641 kernel: vgaarb: loaded
Feb 9 19:12:33.957650 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:12:33.957659 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:12:33.957672 kernel: PTP clock support registered
Feb 9 19:12:33.957692 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:12:33.957706 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:12:33.957716 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 19:12:33.957726 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 9 19:12:33.957736 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:12:33.957745 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:12:33.957755 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:12:33.957764 kernel: pnp: PnP ACPI init
Feb 9 19:12:33.957900 kernel: pnp 00:03: [dma 2]
Feb 9 19:12:33.957923 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 19:12:33.957933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:12:33.957942 kernel: NET: Registered PF_INET protocol family
Feb 9 19:12:33.957951 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:12:33.957960 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:12:33.957970 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:12:33.957979 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:12:33.957991 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:12:33.958006 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:12:33.958014 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:12:33.958022 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:12:33.958031 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:12:33.958041 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:12:33.958124 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:12:33.958206 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:12:33.958282 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:12:33.958380 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 19:12:33.958464 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 19:12:33.958551 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:12:33.958638 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:12:33.958727 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:12:33.958739 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:12:33.958748 kernel: Initialise system trusted keyrings
Feb 9 19:12:33.958757 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 19:12:33.958766 kernel: Key type asymmetric registered
Feb 9 19:12:33.958778 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:12:33.958787 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:12:33.958795 kernel: io scheduler mq-deadline registered
Feb 9 19:12:33.958804 kernel: io scheduler kyber registered
Feb 9 19:12:33.958812 kernel: io scheduler bfq registered
Feb 9 19:12:33.958821 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:12:33.958830 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 9 19:12:33.958839 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:12:33.958848 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 9 19:12:33.958858 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:12:33.958867 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:12:33.958875 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:12:33.958884 kernel: random: crng init done
Feb 9 19:12:33.958893 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:12:33.958901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:12:33.958910 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:12:33.958918 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:12:33.959011 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 9 19:12:33.959096 kernel: rtc_cmos 00:04: registered as rtc0
Feb 9 19:12:33.959174 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:12:33 UTC (1707505953)
Feb 9 19:12:33.959250 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 9 19:12:33.959262 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:12:33.959270 kernel: Segment Routing with IPv6
Feb 9 19:12:33.959279 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:12:33.959288 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:12:33.959297 kernel: Key type dns_resolver registered
Feb 9 19:12:33.959308 kernel: IPI shorthand broadcast: enabled
Feb 9 19:12:33.959317 kernel: sched_clock: Marking stable (693417401, 119257916)->(845363509, -32688192)
Feb 9 19:12:33.959326 kernel: registered taskstats version 1
Feb 9 19:12:33.959334 kernel: Loading compiled-in X.509 certificates
Feb 9 19:12:33.959343 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:12:33.959367 kernel: Key type .fscrypt registered
Feb 9 19:12:33.959376 kernel: Key type fscrypt-provisioning registered
Feb 9 19:12:33.959385 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:12:33.959396 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:12:33.959405 kernel: ima: No architecture policies found
Feb 9 19:12:33.959413 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:12:33.959422 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:12:33.959431 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:12:33.959440 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:12:33.959448 kernel: Run /init as init process
Feb 9 19:12:33.959457 kernel: with arguments:
Feb 9 19:12:33.959465 kernel: /init
Feb 9 19:12:33.959473 kernel: with environment:
Feb 9 19:12:33.959484 kernel: HOME=/
Feb 9 19:12:33.959545 kernel: TERM=linux
Feb 9 19:12:33.959553 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:12:33.959565 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:12:33.959576 systemd[1]: Detected virtualization kvm.
Feb 9 19:12:33.959586 systemd[1]: Detected architecture x86-64.
Feb 9 19:12:33.959596 systemd[1]: Running in initrd.
Feb 9 19:12:33.959607 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:12:33.959616 systemd[1]: Hostname set to .
Feb 9 19:12:33.959626 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:12:33.959635 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:12:33.959644 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:12:33.959653 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:12:33.959662 systemd[1]: Reached target paths.target.
Feb 9 19:12:33.959671 systemd[1]: Reached target slices.target.
Feb 9 19:12:33.959682 systemd[1]: Reached target swap.target.
Feb 9 19:12:33.959691 systemd[1]: Reached target timers.target.
Feb 9 19:12:33.959701 systemd[1]: Listening on iscsid.socket.
Feb 9 19:12:33.959710 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:12:33.959719 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:12:33.959728 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:12:33.959737 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:12:33.959747 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:12:33.959758 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:12:33.959767 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:12:33.959776 systemd[1]: Reached target sockets.target.
Feb 9 19:12:33.959785 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:12:33.959802 systemd[1]: Finished network-cleanup.service.
Feb 9 19:12:33.959813 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:12:33.959823 systemd[1]: Starting systemd-journald.service...
Feb 9 19:12:33.959833 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:12:33.959843 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:12:33.959852 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:12:33.959861 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:12:33.959871 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:12:33.959883 systemd-journald[185]: Journal started
Feb 9 19:12:33.959932 systemd-journald[185]: Runtime Journal (/run/log/journal/b8ae617b84b0459191f3e45a47dea2b9) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:12:33.942687 systemd-modules-load[186]: Inserted module 'overlay'
Feb 9 19:12:33.950686 systemd-resolved[187]: Positive Trust Anchors:
Feb 9 19:12:33.971662 systemd[1]: Started systemd-resolved.service.
Feb 9 19:12:33.971687 kernel: audit: type=1130 audit(1707505953.966:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.950698 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:12:33.950733 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:12:33.955460 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 9 19:12:33.980102 kernel: audit: type=1130 audit(1707505953.975:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.980125 systemd[1]: Started systemd-journald.service.
Feb 9 19:12:33.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.980860 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:12:33.981975 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:12:33.989512 kernel: audit: type=1130 audit(1707505953.980:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.989528 kernel: audit: type=1130 audit(1707505953.981:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:33.990209 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:12:33.991686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:12:33.998579 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:12:34.000484 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 9 19:12:34.001369 kernel: Bridge firewalling registered
Feb 9 19:12:34.002622 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:12:34.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.010368 kernel: audit: type=1130 audit(1707505954.005:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.013489 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:12:34.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.014771 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:12:34.018973 kernel: audit: type=1130 audit(1707505954.013:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.027877 dracut-cmdline[203]: dracut-dracut-053
Feb 9 19:12:34.031129 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:12:34.035388 kernel: SCSI subsystem initialized
Feb 9 19:12:34.054198 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:12:34.054237 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:12:34.056377 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:12:34.062530 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 9 19:12:34.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.063584 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:12:34.068323 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:12:34.068849 kernel: audit: type=1130 audit(1707505954.063:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.076235 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:12:34.080561 kernel: audit: type=1130 audit(1707505954.076:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.122462 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:12:34.136590 kernel: iscsi: registered transport (tcp)
Feb 9 19:12:34.160407 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:12:34.160498 kernel: QLogic iSCSI HBA Driver
Feb 9 19:12:34.207120 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:12:34.208755 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:12:34.216496 kernel: audit: type=1130 audit(1707505954.207:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:12:34.289593 kernel: raid6: sse2x4 gen() 13037 MB/s
Feb 9 19:12:34.306390 kernel: raid6: sse2x4 xor() 6905 MB/s
Feb 9 19:12:34.323450 kernel: raid6: sse2x2 gen() 9808 MB/s
Feb 9 19:12:34.340422 kernel: raid6: sse2x2 xor() 8323 MB/s
Feb 9 19:12:34.357598 kernel: raid6: sse2x1 gen() 11236 MB/s
Feb 9 19:12:34.375150 kernel: raid6: sse2x1 xor() 6765 MB/s
Feb 9 19:12:34.375223 kernel: raid6: using algorithm sse2x4 gen() 13037 MB/s
Feb 9 19:12:34.375274 kernel: raid6: ....
xor() 6905 MB/s, rmw enabled Feb 9 19:12:34.376059 kernel: raid6: using ssse3x2 recovery algorithm Feb 9 19:12:34.391434 kernel: xor: measuring software checksum speed Feb 9 19:12:34.393862 kernel: prefetch64-sse : 17095 MB/sec Feb 9 19:12:34.393903 kernel: generic_sse : 15562 MB/sec Feb 9 19:12:34.393928 kernel: xor: using function: prefetch64-sse (17095 MB/sec) Feb 9 19:12:34.512392 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:12:34.523105 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:12:34.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:34.523000 audit: BPF prog-id=7 op=LOAD Feb 9 19:12:34.524000 audit: BPF prog-id=8 op=LOAD Feb 9 19:12:34.524939 systemd[1]: Starting systemd-udevd.service... Feb 9 19:12:34.541083 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 9 19:12:34.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:34.548290 systemd[1]: Started systemd-udevd.service. Feb 9 19:12:34.551518 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:12:34.567761 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation Feb 9 19:12:34.596722 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:12:34.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:34.598278 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:12:34.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:12:34.648520 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:12:34.724383 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 9 19:12:34.733705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:12:34.733740 kernel: GPT:17805311 != 41943039 Feb 9 19:12:34.733752 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:12:34.733762 kernel: GPT:17805311 != 41943039 Feb 9 19:12:34.733772 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:12:34.733782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:12:34.742380 kernel: libata version 3.00 loaded. Feb 9 19:12:34.749617 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:12:34.752375 kernel: scsi host0: ata_piix Feb 9 19:12:34.752552 kernel: scsi host1: ata_piix Feb 9 19:12:34.752717 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 9 19:12:34.752731 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 9 19:12:34.899410 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (440) Feb 9 19:12:34.930693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:12:34.941947 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:12:34.944110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:12:34.949464 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:12:34.954266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:12:34.956115 systemd[1]: Starting disk-uuid.service... Feb 9 19:12:34.971713 disk-uuid[461]: Primary Header is updated. Feb 9 19:12:34.971713 disk-uuid[461]: Secondary Entries is updated. Feb 9 19:12:34.971713 disk-uuid[461]: Secondary Header is updated. 
Feb 9 19:12:34.984409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:12:34.998436 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:12:36.000653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:12:36.000727 disk-uuid[462]: The operation has completed successfully. Feb 9 19:12:36.068194 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:12:36.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.068446 systemd[1]: Finished disk-uuid.service. Feb 9 19:12:36.071397 systemd[1]: Starting verity-setup.service... Feb 9 19:12:36.108417 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 9 19:12:36.253831 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:12:36.259184 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:12:36.268306 systemd[1]: Finished verity-setup.service. Feb 9 19:12:36.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.461442 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:12:36.461874 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:12:36.462471 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:12:36.463202 systemd[1]: Starting ignition-setup.service... Feb 9 19:12:36.467062 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 19:12:36.491441 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:12:36.491564 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:12:36.491599 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:12:36.517182 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:12:36.550991 systemd[1]: Finished ignition-setup.service. Feb 9 19:12:36.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.552537 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:12:36.604978 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:12:36.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.606000 audit: BPF prog-id=9 op=LOAD Feb 9 19:12:36.608342 systemd[1]: Starting systemd-networkd.service... Feb 9 19:12:36.637240 systemd-networkd[631]: lo: Link UP Feb 9 19:12:36.638118 systemd-networkd[631]: lo: Gained carrier Feb 9 19:12:36.639628 systemd-networkd[631]: Enumeration completed Feb 9 19:12:36.639923 systemd[1]: Started systemd-networkd.service. Feb 9 19:12:36.640800 systemd-networkd[631]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:12:36.642830 systemd-networkd[631]: eth0: Link UP Feb 9 19:12:36.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.642836 systemd-networkd[631]: eth0: Gained carrier Feb 9 19:12:36.643610 systemd[1]: Reached target network.target. Feb 9 19:12:36.645304 systemd[1]: Starting iscsiuio.service... 
Feb 9 19:12:36.653437 systemd[1]: Started iscsiuio.service. Feb 9 19:12:36.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.655846 systemd[1]: Starting iscsid.service... Feb 9 19:12:36.661264 iscsid[636]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:12:36.661264 iscsid[636]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:12:36.661264 iscsid[636]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:12:36.661264 iscsid[636]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:12:36.661264 iscsid[636]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:12:36.661264 iscsid[636]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:12:36.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.663505 systemd-networkd[631]: eth0: DHCPv4 address 172.24.4.148/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:12:36.665051 systemd[1]: Started iscsid.service. Feb 9 19:12:36.667442 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:12:36.696787 systemd[1]: Finished dracut-initqueue.service. 
Feb 9 19:12:36.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.698031 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:12:36.698552 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:12:36.699106 systemd[1]: Reached target remote-fs.target. Feb 9 19:12:36.701841 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:12:36.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:36.713600 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:12:36.962852 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.148 Feb 9 19:12:36.962879 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux6'. Feb 9 19:12:37.061794 ignition[589]: Ignition 2.14.0 Feb 9 19:12:37.062515 ignition[589]: Stage: fetch-offline Feb 9 19:12:37.062637 ignition[589]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:37.062689 ignition[589]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:37.065207 ignition[589]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:37.065612 ignition[589]: parsed url from cmdline: "" Feb 9 19:12:37.068589 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:12:37.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:12:37.065624 ignition[589]: no config URL provided Feb 9 19:12:37.065639 ignition[589]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:12:37.071947 systemd[1]: Starting ignition-fetch.service... Feb 9 19:12:37.065662 ignition[589]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:12:37.065687 ignition[589]: failed to fetch config: resource requires networking Feb 9 19:12:37.065948 ignition[589]: Ignition finished successfully Feb 9 19:12:37.091015 ignition[655]: Ignition 2.14.0 Feb 9 19:12:37.091042 ignition[655]: Stage: fetch Feb 9 19:12:37.091306 ignition[655]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:37.091431 ignition[655]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:37.094042 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:37.094314 ignition[655]: parsed url from cmdline: "" Feb 9 19:12:37.094324 ignition[655]: no config URL provided Feb 9 19:12:37.094338 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:12:37.094393 ignition[655]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:12:37.097312 ignition[655]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 9 19:12:37.097387 ignition[655]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 9 19:12:37.100519 ignition[655]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 9 19:12:37.546833 ignition[655]: GET result: OK Feb 9 19:12:37.547316 ignition[655]: parsing config with SHA512: fe6644b57093cbb4a06ac1b9f96f3123793483145335686f82bc8cc751a5009a48ae5c530e9576ee8816bba483c47b4388b1f2f2c916534c28d851904a7c7e07 Feb 9 19:12:37.690049 unknown[655]: fetched base config from "system" Feb 9 19:12:37.691614 unknown[655]: fetched base config from "system" Feb 9 19:12:37.691641 unknown[655]: fetched user config from "openstack" Feb 9 19:12:37.700032 ignition[655]: fetch: fetch complete Feb 9 19:12:37.700059 ignition[655]: fetch: fetch passed Feb 9 19:12:37.700242 ignition[655]: Ignition finished successfully Feb 9 19:12:37.704323 systemd[1]: Finished ignition-fetch.service. Feb 9 19:12:37.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:37.707261 systemd[1]: Starting ignition-kargs.service... Feb 9 19:12:37.729294 ignition[661]: Ignition 2.14.0 Feb 9 19:12:37.729320 ignition[661]: Stage: kargs Feb 9 19:12:37.729613 ignition[661]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:37.729654 ignition[661]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:37.741780 systemd[1]: Finished ignition-kargs.service. Feb 9 19:12:37.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:37.731763 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:37.743885 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:12:37.734417 ignition[661]: kargs: kargs passed Feb 9 19:12:37.734496 ignition[661]: Ignition finished successfully Feb 9 19:12:37.750520 systemd-networkd[631]: eth0: Gained IPv6LL Feb 9 19:12:37.753480 ignition[666]: Ignition 2.14.0 Feb 9 19:12:37.753487 ignition[666]: Stage: disks Feb 9 19:12:37.753630 ignition[666]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:37.753654 ignition[666]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:37.754642 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:37.758400 ignition[666]: disks: disks passed Feb 9 19:12:37.759300 systemd[1]: Finished ignition-disks.service. Feb 9 19:12:37.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:37.758447 ignition[666]: Ignition finished successfully Feb 9 19:12:37.761529 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:12:37.762914 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:12:37.764389 systemd[1]: Reached target local-fs.target. Feb 9 19:12:37.765932 systemd[1]: Reached target sysinit.target. Feb 9 19:12:37.767248 systemd[1]: Reached target basic.target. Feb 9 19:12:37.769347 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:12:37.797819 systemd-fsck[674]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:12:37.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:37.814265 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:12:37.816324 systemd[1]: Mounting sysroot.mount... 
Feb 9 19:12:37.881429 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:12:37.882431 systemd[1]: Mounted sysroot.mount. Feb 9 19:12:37.883784 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:12:37.890872 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:12:37.894905 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:12:37.897952 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 9 19:12:37.899220 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:12:37.899285 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:12:37.902944 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:12:37.918452 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:12:37.921306 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:12:37.945125 initrd-setup-root[686]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:12:37.947513 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (681) Feb 9 19:12:37.957264 initrd-setup-root[694]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:12:37.964556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:12:37.964601 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:12:37.964628 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:12:37.983835 initrd-setup-root[718]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:12:37.993280 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:12:38.002449 initrd-setup-root[728]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:12:38.164931 systemd[1]: Finished initrd-setup-root.service. 
Feb 9 19:12:38.179335 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 19:12:38.179522 kernel: audit: type=1130 audit(1707505958.165:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.168504 systemd[1]: Starting ignition-mount.service... Feb 9 19:12:38.183272 systemd[1]: Starting sysroot-boot.service... Feb 9 19:12:38.189861 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:12:38.190129 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:12:38.229047 ignition[748]: INFO : Ignition 2.14.0 Feb 9 19:12:38.230971 ignition[748]: INFO : Stage: mount Feb 9 19:12:38.230971 ignition[748]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:38.230971 ignition[748]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:38.238435 ignition[748]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:38.238435 ignition[748]: INFO : mount: mount passed Feb 9 19:12:38.238435 ignition[748]: INFO : Ignition finished successfully Feb 9 19:12:38.246690 kernel: audit: type=1130 audit(1707505958.241:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:12:38.240665 systemd[1]: Finished ignition-mount.service. Feb 9 19:12:38.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.255287 systemd[1]: Finished sysroot-boot.service. Feb 9 19:12:38.259908 kernel: audit: type=1130 audit(1707505958.255:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.269616 coreos-metadata[680]: Feb 09 19:12:38.269 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:12:38.281935 coreos-metadata[680]: Feb 09 19:12:38.281 INFO Fetch successful Feb 9 19:12:38.282687 coreos-metadata[680]: Feb 09 19:12:38.282 INFO wrote hostname ci-3510-3-2-c-8f3f3a83f5.novalocal to /sysroot/etc/hostname Feb 9 19:12:38.286965 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 9 19:12:38.295883 kernel: audit: type=1130 audit(1707505958.287:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.295927 kernel: audit: type=1131 audit(1707505958.287:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:12:38.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:12:38.287091 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 9 19:12:38.288571 systemd[1]: Starting ignition-files.service... Feb 9 19:12:38.301139 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:12:38.314423 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (757) Feb 9 19:12:38.327338 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:12:38.327388 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:12:38.327401 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:12:38.360895 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:12:38.384145 ignition[776]: INFO : Ignition 2.14.0 Feb 9 19:12:38.384145 ignition[776]: INFO : Stage: files Feb 9 19:12:38.387184 ignition[776]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:12:38.387184 ignition[776]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:12:38.387184 ignition[776]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:12:38.393340 ignition[776]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:12:38.395633 ignition[776]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:12:38.395633 ignition[776]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:12:38.404421 ignition[776]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:12:38.406292 ignition[776]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:12:38.409964 unknown[776]: wrote ssh authorized keys file for user: core Feb 9 19:12:38.412319 ignition[776]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:12:38.412319 
ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:12:38.412319 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:12:38.509115 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:12:38.851008 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:12:38.852248 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:12:38.852248 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:12:39.405440 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:12:40.098113 ignition[776]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:12:40.098113 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:12:40.113204 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:12:40.113204 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:12:40.588617 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:12:41.066608 ignition[776]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:12:41.066608 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:12:41.071837 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:12:41.071837 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:12:41.071837 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:12:41.071837 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:12:41.312502 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:12:53.136116 ignition[776]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:12:53.141427 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:12:53.141427 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:12:53.141427 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:12:53.349929 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:13:03.089654 ignition[776]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 19:13:03.095518 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:13:03.095518 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:13:03.095518 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:13:03.550833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 19:13:34.776539 ignition[776]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:13:34.780833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:13:34.780833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:13:34.780833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:13:34.780833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:13:34.780833 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 19:13:35.174574 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 9 19:13:35.609581 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:13:35.609581 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(11): op(12): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(13): [started] processing unit "containerd.service"
Feb 9 19:13:35.615758 ignition[776]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:13:35.639129 kernel: audit: type=1130 audit(1707506015.629:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.625848 systemd[1]: Finished ignition-files.service.
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(13): [finished] processing unit "containerd.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(15): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(15): op(16): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(15): op(16): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(15): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(17): [started] processing unit "prepare-critools.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(17): op(18): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(17): op(18): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(17): [finished] processing unit "prepare-critools.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(19): [started] processing unit "prepare-helm.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(19): op(1a): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(19): [finished] processing unit "prepare-helm.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(1b): [started] processing unit "coreos-metadata.service"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(1b): op(1c): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:13:35.640583 ignition[776]: INFO : files: op(1b): op(1c): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:13:35.710308 kernel: audit: type=1130 audit(1707506015.642:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.710336 kernel: audit: type=1131 audit(1707506015.642:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.710367 kernel: audit: type=1130 audit(1707506015.660:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.710382 kernel: audit: type=1130 audit(1707506015.684:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.710395 kernel: audit: type=1131 audit(1707506015.684:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.634495 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1b): [finished] processing unit "coreos-metadata.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1d): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1d): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: createResultFile: createFiles: op(21): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: createResultFile: createFiles: op(21): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:13:35.711132 ignition[776]: INFO : files: files passed
Feb 9 19:13:35.711132 ignition[776]: INFO : Ignition finished successfully
Feb 9 19:13:35.726160 kernel: audit: type=1130 audit(1707506015.714:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.636104 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:13:35.727228 initrd-setup-root-after-ignition[801]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:13:35.636968 systemd[1]: Starting ignition-quench.service...
Feb 9 19:13:35.642118 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:13:35.642209 systemd[1]: Finished ignition-quench.service.
Feb 9 19:13:35.655149 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:13:35.734943 kernel: audit: type=1131 audit(1707506015.731:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.660752 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:13:35.667404 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:13:35.682520 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:13:35.682618 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:13:35.684499 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:13:35.702084 systemd[1]: Reached target initrd.target.
Feb 9 19:13:35.703291 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:13:35.703968 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:13:35.714575 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:13:35.715916 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:13:35.727321 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:13:35.728814 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:13:35.748897 kernel: audit: type=1131 audit(1707506015.745:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.729714 systemd[1]: Stopped target timers.target.
Feb 9 19:13:35.730557 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:13:35.754137 kernel: audit: type=1131 audit(1707506015.749:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.730707 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:13:35.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.731524 systemd[1]: Stopped target initrd.target.
Feb 9 19:13:35.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.735475 systemd[1]: Stopped target basic.target.
Feb 9 19:13:35.736326 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:13:35.737219 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:13:35.738101 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:13:35.739011 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:13:35.739887 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:13:35.740757 systemd[1]: Stopped target sysinit.target.
Feb 9 19:13:35.767154 ignition[814]: INFO : Ignition 2.14.0
Feb 9 19:13:35.767154 ignition[814]: INFO : Stage: umount
Feb 9 19:13:35.767154 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:13:35.767154 ignition[814]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:13:35.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.785742 iscsid[636]: iscsid shutting down.
Feb 9 19:13:35.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.741827 systemd[1]: Stopped target local-fs.target.
Feb 9 19:13:35.786961 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:13:35.786961 ignition[814]: INFO : umount: umount passed
Feb 9 19:13:35.786961 ignition[814]: INFO : Ignition finished successfully
Feb 9 19:13:35.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.742753 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:13:35.743648 systemd[1]: Stopped target swap.target.
Feb 9 19:13:35.744454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:13:35.744626 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:13:35.745475 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:13:35.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.749419 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:13:35.749579 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:13:35.750441 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:13:35.750588 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:13:35.754787 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:13:35.754933 systemd[1]: Stopped ignition-files.service.
Feb 9 19:13:35.756715 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:13:35.764178 systemd[1]: Stopping iscsid.service...
Feb 9 19:13:35.767544 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:13:35.768008 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:13:35.768173 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:13:35.770530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:13:35.770698 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:13:35.775332 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:13:35.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.775464 systemd[1]: Stopped iscsid.service.
Feb 9 19:13:35.785092 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:13:35.785171 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:13:35.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.787098 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:13:35.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.787177 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:13:35.791123 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:13:35.792969 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:13:35.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.793021 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:13:35.793635 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:13:35.793673 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:13:35.794125 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:13:35.794162 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:13:35.794639 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:13:35.794673 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:13:35.795116 systemd[1]: Stopped target paths.target.
Feb 9 19:13:35.795991 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:13:35.800388 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:13:35.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.800905 systemd[1]: Stopped target slices.target.
Feb 9 19:13:35.801931 systemd[1]: Stopped target sockets.target.
Feb 9 19:13:35.802783 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:13:35.802814 systemd[1]: Closed iscsid.socket.
Feb 9 19:13:35.803621 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:13:35.803657 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:13:35.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.805210 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:13:35.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.807248 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:13:35.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.807331 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:13:35.808145 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:13:35.808220 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:13:35.808858 systemd[1]: Stopped target network.target.
Feb 9 19:13:35.809656 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:13:35.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.809685 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:13:35.810498 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:13:35.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.810533 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:13:35.840000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:13:35.811787 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:13:35.812466 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:13:35.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.816391 systemd-networkd[631]: eth0: DHCPv6 lease lost
Feb 9 19:13:35.844000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:13:35.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.817418 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:13:35.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.817499 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:13:35.819968 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:13:35.820006 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:13:35.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.826901 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:13:35.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.829275 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:13:35.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.829495 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:13:35.830262 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:13:35.830321 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:13:35.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.831656 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:13:35.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:35.831730 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:13:35.832469 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:13:35.834509 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:13:35.835051 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:13:35.835143 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:13:35.838437 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:13:35.838601 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:13:35.840159 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:13:35.840220 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:13:35.842286 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:13:35.842317 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:13:35.843159 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:13:35.843208 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:13:35.844329 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:13:35.844481 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:13:35.867000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:13:35.867000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:13:35.845212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:13:35.845256 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:13:35.846830 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:13:35.869000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:13:35.869000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:13:35.869000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:13:35.852981 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:13:35.853029 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:13:35.854128 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:13:35.854172 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:13:35.854801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:13:35.854838 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:13:35.856605 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 19:13:35.857063 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:13:35.857146 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:13:35.858025 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:13:35.858100 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:13:35.858785 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:13:35.860337 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:13:35.867752 systemd[1]: Switching root.
Feb 9 19:13:35.887034 systemd-journald[185]: Journal stopped
Feb 9 19:13:40.250855 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:13:40.250902 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:13:40.250916 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:13:40.250927 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:13:40.250941 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:13:40.250952 kernel: SELinux: policy capability open_perms=1
Feb 9 19:13:40.250963 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:13:40.250973 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:13:40.250984 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:13:40.250997 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:13:40.251007 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:13:40.251020 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:13:40.251031 systemd[1]: Successfully loaded SELinux policy in 89.907ms.
Feb 9 19:13:40.251051 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.069ms.
Feb 9 19:13:40.251065 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:13:40.251076 systemd[1]: Detected virtualization kvm.
Feb 9 19:13:40.251097 systemd[1]: Detected architecture x86-64.
Feb 9 19:13:40.251642 systemd[1]: Detected first boot.
Feb 9 19:13:40.251662 systemd[1]: Hostname set to .
Feb 9 19:13:40.251674 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:13:40.251689 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:13:40.251700 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:13:40.251712 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:13:40.251728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:13:40.251742 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:13:40.251756 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:13:40.251769 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:13:40.251781 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:13:40.251794 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:13:40.251806 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:13:40.251818 systemd[1]: Created slice system-getty.slice. Feb 9 19:13:40.251829 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:13:40.251840 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:13:40.251852 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:13:40.251863 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:13:40.251875 systemd[1]: Created slice user.slice. Feb 9 19:13:40.251886 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:13:40.251899 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:13:40.251910 systemd[1]: Set up automount boot.automount. Feb 9 19:13:40.251921 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:13:40.251933 systemd[1]: Reached target integritysetup.target. Feb 9 19:13:40.251944 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:13:40.251955 systemd[1]: Reached target remote-fs.target. Feb 9 19:13:40.251968 systemd[1]: Reached target slices.target. Feb 9 19:13:40.251980 systemd[1]: Reached target swap.target. Feb 9 19:13:40.252002 systemd[1]: Reached target torcx.target. 
Feb 9 19:13:40.252014 systemd[1]: Reached target veritysetup.target. Feb 9 19:13:40.252026 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:13:40.252037 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:13:40.252048 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:13:40.252059 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:13:40.252070 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:13:40.252081 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:13:40.252094 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:13:40.252105 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:13:40.252117 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:13:40.252128 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:13:40.252140 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:13:40.252151 systemd[1]: Mounting media.mount... Feb 9 19:13:40.252165 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:13:40.252177 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:13:40.252188 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:13:40.252201 systemd[1]: Mounting tmp.mount... Feb 9 19:13:40.252212 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:13:40.252223 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:13:40.252235 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:13:40.252254 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:13:40.252265 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:13:40.252277 systemd[1]: Starting modprobe@drm.service... Feb 9 19:13:40.252289 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:13:40.252301 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:13:40.252313 systemd[1]: Starting modprobe@loop.service... 
Feb 9 19:13:40.252325 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:13:40.252337 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:13:40.252365 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:13:40.252379 systemd[1]: Starting systemd-journald.service... Feb 9 19:13:40.252391 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:13:40.252402 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:13:40.252413 kernel: loop: module loaded Feb 9 19:13:40.252424 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:13:40.252437 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:13:40.252449 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:13:40.252460 kernel: fuse: init (API version 7.34) Feb 9 19:13:40.252471 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:13:40.252482 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:13:40.252493 systemd[1]: Mounted media.mount. Feb 9 19:13:40.252505 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:13:40.252524 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:13:40.252536 systemd[1]: Mounted tmp.mount. Feb 9 19:13:40.252549 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:13:40.252561 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:13:40.253543 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:13:40.253561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:13:40.253574 systemd-journald[955]: Journal started Feb 9 19:13:40.253615 systemd-journald[955]: Runtime Journal (/run/log/journal/b8ae617b84b0459191f3e45a47dea2b9) is 4.9M, max 39.5M, 34.5M free. 
Feb 9 19:13:40.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.249000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:13:40.249000 audit[955]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffca8915580 a2=4000 a3=7ffca891561c items=0 ppid=1 pid=955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:13:40.249000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:13:40.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.260135 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:13:40.260183 systemd[1]: Started systemd-journald.service. Feb 9 19:13:40.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:13:40.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.261542 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:13:40.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.264553 systemd[1]: Finished modprobe@drm.service. Feb 9 19:13:40.265282 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:13:40.265483 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 9 19:13:40.266188 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:13:40.266334 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:13:40.267078 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:13:40.267237 systemd[1]: Finished modprobe@loop.service. Feb 9 19:13:40.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.268867 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:13:40.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.272851 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:13:40.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.273876 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:13:40.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.274707 systemd[1]: Reached target network-pre.target. Feb 9 19:13:40.276632 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:13:40.278182 systemd[1]: Mounting sys-kernel-config.mount... 
Feb 9 19:13:40.278808 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:13:40.282764 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:13:40.285889 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:13:40.286435 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:13:40.288004 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:13:40.291462 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:13:40.292722 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:13:40.296669 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:13:40.297478 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:13:40.303338 systemd-journald[955]: Time spent on flushing to /var/log/journal/b8ae617b84b0459191f3e45a47dea2b9 is 39.196ms for 1083 entries. Feb 9 19:13:40.303338 systemd-journald[955]: System Journal (/var/log/journal/b8ae617b84b0459191f3e45a47dea2b9) is 8.0M, max 584.8M, 576.8M free. Feb 9 19:13:40.400078 systemd-journald[955]: Received client request to flush runtime journal. Feb 9 19:13:40.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:13:40.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.337126 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:13:40.338346 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:13:40.338859 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:13:40.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.364241 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:13:40.370164 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:13:40.382098 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:13:40.383895 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:13:40.401076 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:13:40.405575 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:13:40.431085 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:13:40.432696 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:13:40.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.471851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:13:40.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:13:40.981933 kernel: kauditd_printk_skb: 78 callbacks suppressed Feb 9 19:13:40.982122 kernel: audit: type=1130 audit(1707506020.975:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:40.974935 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:13:40.987808 systemd[1]: Starting systemd-udevd.service... Feb 9 19:13:41.027149 systemd-udevd[1019]: Using default interface naming scheme 'v252'. Feb 9 19:13:41.068430 kernel: audit: type=1130 audit(1707506021.064:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:41.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:41.061107 systemd[1]: Started systemd-udevd.service. Feb 9 19:13:41.069985 systemd[1]: Starting systemd-networkd.service... Feb 9 19:13:41.095317 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:13:41.108400 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:13:41.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:41.156971 systemd[1]: Started systemd-userdbd.service. 
Feb 9 19:13:41.161382 kernel: audit: type=1130 audit(1707506021.157:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:13:41.192409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:13:41.202385 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:13:41.230000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:13:41.245215 kernel: audit: type=1400 audit(1707506021.230:120): avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:13:41.230000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=556b12b40320 a1=32194 a2=7f4ba854abc5 a3=5 items=108 ppid=1019 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:13:41.230000 audit: CWD cwd="/" Feb 9 19:13:41.285060 kernel: audit: type=1300 audit(1707506021.230:120): arch=c000003e syscall=175 success=yes exit=0 a0=556b12b40320 a1=32194 a2=7f4ba854abc5 a3=5 items=108 ppid=1019 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:13:41.285125 kernel: audit: type=1307 audit(1707506021.230:120): cwd="/" Feb 9 19:13:41.285432 kernel: audit: type=1302 audit(1707506021.230:120): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=1 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=2 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.298608 kernel: audit: type=1302 audit(1707506021.230:120): item=1 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.298662 kernel: audit: type=1302 audit(1707506021.230:120): item=2 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.298700 kernel: audit: type=1302 audit(1707506021.230:120): item=3 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=3 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.300566 systemd-networkd[1033]: lo: Link UP Feb 9 19:13:41.300574 systemd-networkd[1033]: lo: Gained carrier Feb 9 19:13:41.301836 systemd-networkd[1033]: Enumeration completed Feb 9 19:13:41.301937 systemd[1]: Started systemd-networkd.service. 
Feb 9 19:13:41.303017 systemd-networkd[1033]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:13:41.304954 systemd-networkd[1033]: eth0: Link UP Feb 9 19:13:41.304960 systemd-networkd[1033]: eth0: Gained carrier Feb 9 19:13:41.230000 audit: PATH item=4 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=5 name=(null) inode=14175 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=6 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=7 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=8 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=9 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=10 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=11 name=(null) inode=14178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH 
item=12 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=13 name=(null) inode=14179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=14 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=15 name=(null) inode=14180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=16 name=(null) inode=14176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=17 name=(null) inode=14181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=18 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=19 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=20 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=21 name=(null) inode=14183 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=22 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=23 name=(null) inode=14184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=24 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=25 name=(null) inode=14185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=26 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=27 name=(null) inode=14186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=28 name=(null) inode=14182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=29 name=(null) inode=14187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=30 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=31 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=32 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=33 name=(null) inode=14189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=34 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=35 name=(null) inode=14190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=36 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=37 name=(null) inode=14191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=38 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=39 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=40 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=41 name=(null) inode=14193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=42 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=43 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=44 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=45 name=(null) inode=14195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=46 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=47 name=(null) inode=14196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:13:41.230000 audit: PATH item=48 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Feb 9 19:13:41.230000 audit: PATH item=49 name=(null) inode=14197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=50 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=51 name=(null) inode=14198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=52 name=(null) inode=14194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=53 name=(null) inode=14199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=55 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=56 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=57 name=(null) inode=14201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=58 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=59 name=(null) inode=14202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=60 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=61 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=62 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=63 name=(null) inode=14204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:41.230000 audit: PATH item=64 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=65 name=(null) inode=14205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=66 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=67 name=(null) inode=14206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=68 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=69 name=(null) inode=14207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=70 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=71 name=(null) inode=14208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=72 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=73 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=74 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=75 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=76 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=77 name=(null) inode=14211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=78 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=79 name=(null) inode=14212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=80 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=81 name=(null) inode=14213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=82 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=83 name=(null) inode=14214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=84 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=85 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=86 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=87 name=(null) inode=14216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=88 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=89 name=(null) inode=14217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=90 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=91 name=(null) inode=14218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=92 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=93 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=94 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=95 name=(null) inode=14220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=96 name=(null) inode=14200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=97 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=98 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=99 name=(null) inode=14222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=100 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=101 name=(null) inode=14223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=102 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=103 name=(null) inode=14224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=104 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=105 name=(null) inode=14225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=106 name=(null) inode=14221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PATH item=107 name=(null) inode=14226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:13:41.230000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:13:41.316436 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 9 19:13:41.318509 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 9 19:13:41.324375 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:13:41.325477 systemd-networkd[1033]: eth0: DHCPv4 address 172.24.4.148/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb 9 19:13:41.331176 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:13:41.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:41.369878 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:13:41.371599 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:13:41.400879 lvm[1049]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:13:41.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:41.428463 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:13:41.429037 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:13:41.430552 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:13:41.435803 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:13:41.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:41.464339 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:13:41.464914 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:13:41.465371 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:13:41.465393 systemd[1]: Reached target local-fs.target.
Feb 9 19:13:41.465830 systemd[1]: Reached target machines.target.
Feb 9 19:13:41.467504 systemd[1]: Starting ldconfig.service...
Feb 9 19:13:41.474319 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:13:41.474458 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:13:41.477141 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:13:41.484976 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:13:41.490727 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:13:41.491279 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:13:41.491323 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:13:41.492439 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:13:41.493841 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl)
Feb 9 19:13:41.495169 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:13:41.560407 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:13:41.566638 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:13:41.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:41.651785 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:13:41.670247 systemd-tmpfiles[1057]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:13:42.546104 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:13:42.547642 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:13:42.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.645045 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:13:42.645045 systemd-fsck[1063]: /dev/vda1: 789 files, 115339/258078 clusters
Feb 9 19:13:42.646959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:13:42.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.651809 systemd[1]: Mounting boot.mount...
Feb 9 19:13:42.684888 systemd[1]: Mounted boot.mount.
Feb 9 19:13:42.721194 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:13:42.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.812180 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:13:42.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.814118 systemd[1]: Starting audit-rules.service...
Feb 9 19:13:42.815768 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:13:42.818432 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:13:42.822697 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:13:42.827422 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:13:42.831038 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:13:42.838533 systemd-networkd[1033]: eth0: Gained IPv6LL
Feb 9 19:13:42.845669 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:13:42.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.846342 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:13:42.859000 audit[1077]: SYSTEM_BOOT pid=1077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.863497 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:13:42.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.904524 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:13:42.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:13:42.933000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:13:42.933000 audit[1095]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff638369d0 a2=420 a3=0 items=0 ppid=1072 pid=1095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:13:42.933000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:13:42.933707 augenrules[1095]: No rules
Feb 9 19:13:42.934538 systemd[1]: Finished audit-rules.service.
Feb 9 19:13:43.012584 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:13:43.013236 systemd[1]: Reached target time-set.target.
Feb 9 19:13:43.018591 systemd-resolved[1075]: Positive Trust Anchors:
Feb 9 19:13:43.018610 systemd-resolved[1075]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:13:43.018660 systemd-resolved[1075]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:13:43.861224 systemd-timesyncd[1076]: Contacted time server 45.128.41.10:123 (0.flatcar.pool.ntp.org).
Feb 9 19:13:43.861661 systemd-timesyncd[1076]: Initial clock synchronization to Fri 2024-02-09 19:13:43.860769 UTC.
Feb 9 19:13:43.874813 systemd-resolved[1075]: Using system hostname 'ci-3510-3-2-c-8f3f3a83f5.novalocal'.
Feb 9 19:13:43.879858 systemd[1]: Started systemd-resolved.service.
Feb 9 19:13:43.881125 systemd[1]: Reached target network.target.
Feb 9 19:13:43.882184 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:13:44.139673 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:13:44.154342 systemd[1]: Finished ldconfig.service.
Feb 9 19:13:44.159050 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:13:44.178902 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:13:44.180424 systemd[1]: Reached target sysinit.target.
Feb 9 19:13:44.181764 systemd[1]: Started motdgen.path.
Feb 9 19:13:44.182882 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:13:44.184508 systemd[1]: Started logrotate.timer.
Feb 9 19:13:44.186000 systemd[1]: Started mdadm.timer.
Feb 9 19:13:44.187043 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:13:44.188230 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:13:44.188331 systemd[1]: Reached target paths.target.
Feb 9 19:13:44.189473 systemd[1]: Reached target timers.target.
Feb 9 19:13:44.192412 systemd[1]: Listening on dbus.socket.
Feb 9 19:13:44.196243 systemd[1]: Starting docker.socket...
Feb 9 19:13:44.200799 systemd[1]: Listening on sshd.socket.
Feb 9 19:13:44.202399 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:13:44.203773 systemd[1]: Listening on docker.socket.
Feb 9 19:13:44.205075 systemd[1]: Reached target sockets.target.
Feb 9 19:13:44.206358 systemd[1]: Reached target basic.target.
Feb 9 19:13:44.207968 systemd[1]: System is tainted: cgroupsv1
Feb 9 19:13:44.208261 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:13:44.208489 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:13:44.211786 systemd[1]: Starting containerd.service...
Feb 9 19:13:44.216211 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 19:13:44.220280 systemd[1]: Starting dbus.service...
Feb 9 19:13:44.225551 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:13:44.232336 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:13:44.232903 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:13:44.269052 jq[1111]: false
Feb 9 19:13:44.234038 systemd[1]: Starting motdgen.service...
Feb 9 19:13:44.236188 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:13:44.238105 systemd[1]: Starting prepare-critools.service...
Feb 9 19:13:44.239873 systemd[1]: Starting prepare-helm.service...
Feb 9 19:13:44.246718 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:13:44.248891 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:13:44.252467 systemd[1]: Starting systemd-logind.service...
Feb 9 19:13:44.252968 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:13:44.253026 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:13:44.254100 systemd[1]: Starting update-engine.service...
Feb 9 19:13:44.294051 jq[1128]: true
Feb 9 19:13:44.263009 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:13:44.321410 tar[1130]: ./
Feb 9 19:13:44.321410 tar[1130]: ./macvlan
Feb 9 19:13:44.264903 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:13:44.265145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:13:44.333703 tar[1131]: linux-amd64/helm
Feb 9 19:13:44.305707 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:13:44.335936 tar[1133]: crictl
Feb 9 19:13:44.305984 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:13:44.348189 jq[1141]: true
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda1
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda2
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda3
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found usr
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda4
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda6
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda7
Feb 9 19:13:44.356798 extend-filesystems[1114]: Found vda9
Feb 9 19:13:44.356798 extend-filesystems[1114]: Checking size of /dev/vda9
Feb 9 19:13:44.416686 extend-filesystems[1114]: Resized partition /dev/vda9
Feb 9 19:13:44.422482 extend-filesystems[1172]: resize2fs 1.46.5 (30-Dec-2021)
Feb 9 19:13:44.437460 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:13:44.448623 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks
Feb 9 19:13:44.437712 systemd[1]: Finished motdgen.service.
Feb 9 19:13:44.472593 update_engine[1124]: I0209 19:13:44.465574 1124 main.cc:92] Flatcar Update Engine starting
Feb 9 19:13:44.472340 systemd[1]: Started dbus.service.
Feb 9 19:13:44.472078 dbus-daemon[1110]: [system] SELinux support is enabled
Feb 9 19:13:44.531420 update_engine[1124]: I0209 19:13:44.486085 1124 update_check_scheduler.cc:74] Next update check in 6m16s
Feb 9 19:13:44.474974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:13:44.474998 systemd[1]: Reached target system-config.target.
Feb 9 19:13:44.475492 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:13:44.475508 systemd[1]: Reached target user-config.target.
Feb 9 19:13:44.482541 systemd[1]: Started update-engine.service.
Feb 9 19:13:44.484979 systemd[1]: Started locksmithd.service.
Feb 9 19:13:44.533549 systemd-logind[1123]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 9 19:13:44.533601 systemd-logind[1123]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:13:44.535111 systemd-logind[1123]: New seat seat0.
Feb 9 19:13:44.538527 systemd[1]: Started systemd-logind.service.
Feb 9 19:13:44.544607 env[1135]: time="2024-02-09T19:13:44.543886685Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:13:44.566128 bash[1176]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:13:44.566925 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:13:44.570595 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Feb 9 19:13:44.640139 coreos-metadata[1109]: Feb 09 19:13:44.597 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 9 19:13:44.640516 env[1135]: time="2024-02-09T19:13:44.603506502Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:13:44.641105 env[1135]: time="2024-02-09T19:13:44.640875578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.646906 extend-filesystems[1172]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 19:13:44.646906 extend-filesystems[1172]: old_desc_blocks = 1, new_desc_blocks = 3
Feb 9 19:13:44.646906 extend-filesystems[1172]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Feb 9 19:13:44.646289 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.645706368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.645733829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646149058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646169717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646184404Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646213208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646312915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646614811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646796853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:13:44.650044 env[1135]: time="2024-02-09T19:13:44.646833452Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:13:44.650292 tar[1130]: ./static
Feb 9 19:13:44.650327 extend-filesystems[1114]: Resized filesystem in /dev/vda9
Feb 9 19:13:44.646591 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:13:44.653917 env[1135]: time="2024-02-09T19:13:44.646885219Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:13:44.653917 env[1135]: time="2024-02-09T19:13:44.646955460Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669136150Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669218655Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669237090Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669296561Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669319594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669343189Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669375920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669393673Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669409212Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669423920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669438938Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669470157Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669640346Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:13:44.670416 env[1135]: time="2024-02-09T19:13:44.669748168Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670157516Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670186009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670218500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670261701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670276479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670308940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670322265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670335219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670349406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670361338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670603292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:13:44.671038 env[1135]: time="2024-02-09T19:13:44.670619172Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671406939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..."
type=io.containerd.grpc.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671430223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671444329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671457153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671499473Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671514090Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:13:44.671604 env[1135]: time="2024-02-09T19:13:44.671534037Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:13:44.672059 env[1135]: time="2024-02-09T19:13:44.671849960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:13:44.672251 env[1135]: time="2024-02-09T19:13:44.672172735Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.672402156Z" level=info msg="Connect containerd service" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.672436129Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673550479Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673679311Z" level=info msg="Start subscribing containerd event" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673716551Z" level=info msg="Start recovering state" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673786282Z" level=info msg="Start event monitor" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673802492Z" level=info msg="Start snapshots syncer" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673811629Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.673837928Z" level=info msg="Start streaming server" Feb 9 19:13:44.674769 env[1135]: time="2024-02-09T19:13:44.674197032Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:13:44.675074 env[1135]: time="2024-02-09T19:13:44.675056744Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:13:44.675288 systemd[1]: Started containerd.service. 
Feb 9 19:13:44.679057 env[1135]: time="2024-02-09T19:13:44.675194783Z" level=info msg="containerd successfully booted in 0.148819s" Feb 9 19:13:44.709159 tar[1130]: ./vlan Feb 9 19:13:44.746477 tar[1130]: ./portmap Feb 9 19:13:44.782654 tar[1130]: ./host-local Feb 9 19:13:44.855459 tar[1130]: ./vrf Feb 9 19:13:44.930669 tar[1130]: ./bridge Feb 9 19:13:45.012300 tar[1130]: ./tuning Feb 9 19:13:45.064472 coreos-metadata[1109]: Feb 09 19:13:45.064 INFO Fetch successful Feb 9 19:13:45.064472 coreos-metadata[1109]: Feb 09 19:13:45.064 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:13:45.076767 coreos-metadata[1109]: Feb 09 19:13:45.076 INFO Fetch successful Feb 9 19:13:45.081312 unknown[1109]: wrote ssh authorized keys file for user: core Feb 9 19:13:45.099323 tar[1130]: ./firewall Feb 9 19:13:45.123422 update-ssh-keys[1193]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:13:45.124135 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:13:45.207292 tar[1130]: ./host-device Feb 9 19:13:45.294049 tar[1130]: ./sbr Feb 9 19:13:45.368960 tar[1130]: ./loopback Feb 9 19:13:45.445133 tar[1130]: ./dhcp Feb 9 19:13:45.618627 tar[1131]: linux-amd64/LICENSE Feb 9 19:13:45.618627 tar[1131]: linux-amd64/README.md Feb 9 19:13:45.627905 systemd[1]: Finished prepare-helm.service. Feb 9 19:13:45.659226 tar[1130]: ./ptp Feb 9 19:13:45.670580 systemd[1]: Finished prepare-critools.service. Feb 9 19:13:45.705640 tar[1130]: ./ipvlan Feb 9 19:13:45.743933 tar[1130]: ./bandwidth Feb 9 19:13:45.869548 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:13:45.979313 sshd_keygen[1146]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:13:46.030668 systemd[1]: Finished sshd-keygen.service. Feb 9 19:13:46.035336 systemd[1]: Starting issuegen.service... Feb 9 19:13:46.051065 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:13:46.051906 systemd[1]: Finished issuegen.service. 
Feb 9 19:13:46.058473 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:13:46.070803 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:13:46.072853 systemd[1]: Started getty@tty1.service. Feb 9 19:13:46.076440 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:13:46.077489 systemd[1]: Reached target getty.target. Feb 9 19:13:46.078054 systemd[1]: Reached target multi-user.target. Feb 9 19:13:46.079872 locksmithd[1179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:13:46.082903 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:13:46.095509 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:13:46.096090 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:13:46.099322 systemd[1]: Startup finished in 1min 3.355s (kernel) + 9.248s (userspace) = 1min 12.603s. Feb 9 19:13:53.978766 systemd[1]: Created slice system-sshd.slice. Feb 9 19:13:53.983112 systemd[1]: Started sshd@0-172.24.4.148:22-172.24.4.1:35624.service. Feb 9 19:13:55.070727 sshd[1227]: Accepted publickey for core from 172.24.4.1 port 35624 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:13:55.075093 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:13:55.099080 systemd[1]: Created slice user-500.slice. Feb 9 19:13:55.101732 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:13:55.112713 systemd-logind[1123]: New session 1 of user core. Feb 9 19:13:55.126969 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:13:55.131805 systemd[1]: Starting user@500.service... Feb 9 19:13:55.141293 (systemd)[1232]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:13:55.262102 systemd[1232]: Queued start job for default target default.target. Feb 9 19:13:55.262609 systemd[1232]: Reached target paths.target. 
Feb 9 19:13:55.262732 systemd[1232]: Reached target sockets.target. Feb 9 19:13:55.262820 systemd[1232]: Reached target timers.target. Feb 9 19:13:55.262902 systemd[1232]: Reached target basic.target. Feb 9 19:13:55.263012 systemd[1232]: Reached target default.target. Feb 9 19:13:55.263111 systemd[1232]: Startup finished in 110ms. Feb 9 19:13:55.263484 systemd[1]: Started user@500.service. Feb 9 19:13:55.268245 systemd[1]: Started session-1.scope. Feb 9 19:13:55.595043 systemd[1]: Started sshd@1-172.24.4.148:22-172.24.4.1:41592.service. Feb 9 19:13:57.320218 sshd[1241]: Accepted publickey for core from 172.24.4.1 port 41592 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:13:57.324630 sshd[1241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:13:57.335855 systemd-logind[1123]: New session 2 of user core. Feb 9 19:13:57.336935 systemd[1]: Started session-2.scope. Feb 9 19:13:57.972294 sshd[1241]: pam_unix(sshd:session): session closed for user core Feb 9 19:13:57.972999 systemd[1]: Started sshd@2-172.24.4.148:22-172.24.4.1:41602.service. Feb 9 19:13:57.979772 systemd-logind[1123]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:13:57.981313 systemd[1]: sshd@1-172.24.4.148:22-172.24.4.1:41592.service: Deactivated successfully. Feb 9 19:13:57.982115 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:13:57.984935 systemd-logind[1123]: Removed session 2. Feb 9 19:13:59.287913 sshd[1246]: Accepted publickey for core from 172.24.4.1 port 41602 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:13:59.290737 sshd[1246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:13:59.302050 systemd-logind[1123]: New session 3 of user core. Feb 9 19:13:59.302722 systemd[1]: Started session-3.scope. 
Feb 9 19:13:59.981262 sshd[1246]: pam_unix(sshd:session): session closed for user core Feb 9 19:13:59.982803 systemd[1]: Started sshd@3-172.24.4.148:22-172.24.4.1:41612.service. Feb 9 19:13:59.990319 systemd[1]: sshd@2-172.24.4.148:22-172.24.4.1:41602.service: Deactivated successfully. Feb 9 19:13:59.993897 systemd-logind[1123]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:13:59.994083 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:13:59.999638 systemd-logind[1123]: Removed session 3. Feb 9 19:14:01.541867 sshd[1253]: Accepted publickey for core from 172.24.4.1 port 41612 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:14:01.545407 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:14:01.555820 systemd-logind[1123]: New session 4 of user core. Feb 9 19:14:01.556858 systemd[1]: Started session-4.scope. Feb 9 19:14:02.238080 sshd[1253]: pam_unix(sshd:session): session closed for user core Feb 9 19:14:02.240100 systemd[1]: Started sshd@4-172.24.4.148:22-172.24.4.1:41614.service. Feb 9 19:14:02.246904 systemd-logind[1123]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:14:02.248914 systemd[1]: sshd@3-172.24.4.148:22-172.24.4.1:41612.service: Deactivated successfully. Feb 9 19:14:02.250464 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:14:02.254048 systemd-logind[1123]: Removed session 4. Feb 9 19:14:03.585790 sshd[1260]: Accepted publickey for core from 172.24.4.1 port 41614 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:14:03.588905 sshd[1260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:14:03.598448 systemd-logind[1123]: New session 5 of user core. Feb 9 19:14:03.599187 systemd[1]: Started session-5.scope. 
Feb 9 19:14:04.041549 sudo[1266]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:14:04.042843 sudo[1266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:14:05.067444 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:14:05.080220 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:14:05.081004 systemd[1]: Reached target network-online.target. Feb 9 19:14:05.084105 systemd[1]: Starting docker.service... Feb 9 19:14:05.148373 env[1283]: time="2024-02-09T19:14:05.148271704Z" level=info msg="Starting up" Feb 9 19:14:05.150432 env[1283]: time="2024-02-09T19:14:05.150407329Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:14:05.150525 env[1283]: time="2024-02-09T19:14:05.150508970Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:14:05.150641 env[1283]: time="2024-02-09T19:14:05.150620439Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:14:05.150709 env[1283]: time="2024-02-09T19:14:05.150695269Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:14:05.153594 env[1283]: time="2024-02-09T19:14:05.153533903Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:14:05.153756 env[1283]: time="2024-02-09T19:14:05.153737946Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:14:05.153834 env[1283]: time="2024-02-09T19:14:05.153817355Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:14:05.153905 env[1283]: time="2024-02-09T19:14:05.153890552Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:14:05.160263 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport447338500-merged.mount: 
Deactivated successfully. Feb 9 19:14:05.383655 env[1283]: time="2024-02-09T19:14:05.383598910Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:14:05.384012 env[1283]: time="2024-02-09T19:14:05.383975727Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:14:05.384483 env[1283]: time="2024-02-09T19:14:05.384447161Z" level=info msg="Loading containers: start." Feb 9 19:14:05.666161 kernel: Initializing XFRM netlink socket Feb 9 19:14:05.750778 env[1283]: time="2024-02-09T19:14:05.750688196Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:14:05.855331 systemd-networkd[1033]: docker0: Link UP Feb 9 19:14:05.877008 env[1283]: time="2024-02-09T19:14:05.876932737Z" level=info msg="Loading containers: done." Feb 9 19:14:05.900264 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2334454350-merged.mount: Deactivated successfully. Feb 9 19:14:05.907906 env[1283]: time="2024-02-09T19:14:05.907817422Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:14:05.909288 env[1283]: time="2024-02-09T19:14:05.909233317Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:14:05.909652 env[1283]: time="2024-02-09T19:14:05.909544501Z" level=info msg="Daemon has completed initialization" Feb 9 19:14:05.944764 systemd[1]: Started docker.service. Feb 9 19:14:05.954221 env[1283]: time="2024-02-09T19:14:05.954035654Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:14:05.998768 systemd[1]: Reloading. 
Feb 9 19:14:06.122995 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2024-02-09T19:14:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:14:06.123413 /usr/lib/systemd/system-generators/torcx-generator[1423]: time="2024-02-09T19:14:06Z" level=info msg="torcx already run" Feb 9 19:14:06.238377 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:06.238721 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:06.265339 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:06.353531 systemd[1]: Started kubelet.service. Feb 9 19:14:06.438609 kubelet[1471]: E0209 19:14:06.438512 1471 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:14:06.440507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:14:06.440721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:14:08.395035 env[1135]: time="2024-02-09T19:14:08.394837046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:14:09.190724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438224052.mount: Deactivated successfully. 
Feb 9 19:14:12.515773 env[1135]: time="2024-02-09T19:14:12.515620444Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:12.527977 env[1135]: time="2024-02-09T19:14:12.527883684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:12.534273 env[1135]: time="2024-02-09T19:14:12.534206893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:12.541216 env[1135]: time="2024-02-09T19:14:12.541117244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:12.543213 env[1135]: time="2024-02-09T19:14:12.543146760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:14:12.560478 env[1135]: time="2024-02-09T19:14:12.560433371Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:14:16.356945 env[1135]: time="2024-02-09T19:14:16.356822715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:16.361452 env[1135]: time="2024-02-09T19:14:16.361378221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:14:16.365857 env[1135]: time="2024-02-09T19:14:16.365794868Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:16.370414 env[1135]: time="2024-02-09T19:14:16.370354611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:16.372623 env[1135]: time="2024-02-09T19:14:16.372528083Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:14:16.407451 env[1135]: time="2024-02-09T19:14:16.407381608Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:14:16.626379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:14:16.626890 systemd[1]: Stopped kubelet.service. Feb 9 19:14:16.630529 systemd[1]: Started kubelet.service. Feb 9 19:14:16.746870 kubelet[1501]: E0209 19:14:16.746781 1501 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:14:16.753963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:14:16.754125 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 19:14:19.095274 env[1135]: time="2024-02-09T19:14:19.095183602Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:19.193037 env[1135]: time="2024-02-09T19:14:19.192959683Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:19.426008 env[1135]: time="2024-02-09T19:14:19.425914920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:19.431783 env[1135]: time="2024-02-09T19:14:19.431701496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:19.434268 env[1135]: time="2024-02-09T19:14:19.434193706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:14:19.459441 env[1135]: time="2024-02-09T19:14:19.459328812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:14:21.699537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673573775.mount: Deactivated successfully. 
Feb 9 19:14:22.511048 env[1135]: time="2024-02-09T19:14:22.510871549Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:22.515234 env[1135]: time="2024-02-09T19:14:22.515151976Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:22.517970 env[1135]: time="2024-02-09T19:14:22.517870589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:22.524779 env[1135]: time="2024-02-09T19:14:22.524674547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:22.526034 env[1135]: time="2024-02-09T19:14:22.525888019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:14:22.549971 env[1135]: time="2024-02-09T19:14:22.549889423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:14:23.425224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1598836982.mount: Deactivated successfully. 
Feb 9 19:14:23.464572 env[1135]: time="2024-02-09T19:14:23.464470754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:23.475934 env[1135]: time="2024-02-09T19:14:23.475861016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:23.478632 env[1135]: time="2024-02-09T19:14:23.478544016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:23.484114 env[1135]: time="2024-02-09T19:14:23.484060496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:14:23.485607 env[1135]: time="2024-02-09T19:14:23.485498151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:14:23.510775 env[1135]: time="2024-02-09T19:14:23.510664144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:14:24.787806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213810642.mount: Deactivated successfully. Feb 9 19:14:26.876156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:14:26.876619 systemd[1]: Stopped kubelet.service. Feb 9 19:14:26.879635 systemd[1]: Started kubelet.service. 
Feb 9 19:14:26.970720 kubelet[1525]: E0209 19:14:26.970650 1525 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:14:26.972636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:14:26.972792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:14:29.521447 update_engine[1124]: I0209 19:14:29.521050 1124 update_attempter.cc:509] Updating boot flags...
Feb 9 19:14:31.910172 env[1135]: time="2024-02-09T19:14:31.910072238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:31.917903 env[1135]: time="2024-02-09T19:14:31.917817360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:31.924250 env[1135]: time="2024-02-09T19:14:31.924187150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:31.928079 env[1135]: time="2024-02-09T19:14:31.928026641Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:31.929960 env[1135]: time="2024-02-09T19:14:31.929906242Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 19:14:31.945142 env[1135]: time="2024-02-09T19:14:31.945092477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 19:14:32.680267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835143595.mount: Deactivated successfully.
Feb 9 19:14:33.738004 env[1135]: time="2024-02-09T19:14:33.737963447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:33.741836 env[1135]: time="2024-02-09T19:14:33.741812170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:33.746504 env[1135]: time="2024-02-09T19:14:33.746482139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:33.749320 env[1135]: time="2024-02-09T19:14:33.749248915Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:33.750050 env[1135]: time="2024-02-09T19:14:33.750024637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 19:14:37.125988 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 9 19:14:37.126204 systemd[1]: Stopped kubelet.service.
Feb 9 19:14:37.127823 systemd[1]: Started kubelet.service.
Feb 9 19:14:37.185846 systemd[1]: Stopping kubelet.service...
Feb 9 19:14:37.191223 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:14:37.191458 systemd[1]: Stopped kubelet.service.
Feb 9 19:14:37.228058 systemd[1]: Reloading.
Feb 9 19:14:37.318655 /usr/lib/systemd/system-generators/torcx-generator[1638]: time="2024-02-09T19:14:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:14:37.318688 /usr/lib/systemd/system-generators/torcx-generator[1638]: time="2024-02-09T19:14:37Z" level=info msg="torcx already run"
Feb 9 19:14:37.404157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:14:37.404178 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:14:37.435819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:14:37.528816 systemd[1]: Started kubelet.service.
Feb 9 19:14:37.600203 kubelet[1691]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:14:37.600586 kubelet[1691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:14:37.600721 kubelet[1691]: I0209 19:14:37.600691 1691 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:14:37.602147 kubelet[1691]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:14:37.602204 kubelet[1691]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:14:37.987240 kubelet[1691]: I0209 19:14:37.987216 1691 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:14:37.987391 kubelet[1691]: I0209 19:14:37.987380 1691 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:14:37.987685 kubelet[1691]: I0209 19:14:37.987671 1691 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:14:37.991252 kubelet[1691]: E0209 19:14:37.991216 1691 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:37.991338 kubelet[1691]: I0209 19:14:37.991264 1691 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:14:37.994699 kubelet[1691]: I0209 19:14:37.994669 1691 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:14:37.995153 kubelet[1691]: I0209 19:14:37.995140 1691 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:14:37.995294 kubelet[1691]: I0209 19:14:37.995281 1691 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:14:37.995432 kubelet[1691]: I0209 19:14:37.995421 1691 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:14:37.995504 kubelet[1691]: I0209 19:14:37.995493 1691 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:14:37.995689 kubelet[1691]: I0209 19:14:37.995676 1691 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:14:38.001757 kubelet[1691]: I0209 19:14:38.001734 1691 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:14:38.002043 kubelet[1691]: I0209 19:14:38.001991 1691 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:14:38.002156 kubelet[1691]: I0209 19:14:38.002141 1691 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:14:38.002254 kubelet[1691]: I0209 19:14:38.002241 1691 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:14:38.002512 kubelet[1691]: W0209 19:14:38.002438 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.002653 kubelet[1691]: E0209 19:14:38.002527 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.004451 kubelet[1691]: I0209 19:14:38.004434 1691 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:14:38.004857 kubelet[1691]: W0209 19:14:38.004842 1691 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:14:38.005621 kubelet[1691]: I0209 19:14:38.005607 1691 server.go:1186] "Started kubelet"
Feb 9 19:14:38.006747 kubelet[1691]: W0209 19:14:38.006705 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.006860 kubelet[1691]: E0209 19:14:38.006847 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.010718 kubelet[1691]: E0209 19:14:38.009982 1691 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-8f3f3a83f5.novalocal.17b247bb2f560962", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-8f3f3a83f5.novalocal", UID:"ci-3510-3-2-c-8f3f3a83f5.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-8f3f3a83f5.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 14, 38, 5463394, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 14, 38, 5463394, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.148:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.148:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:14:38.012535 kubelet[1691]: E0209 19:14:38.012506 1691 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:14:38.012535 kubelet[1691]: E0209 19:14:38.012531 1691 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:14:38.012795 kubelet[1691]: I0209 19:14:38.012715 1691 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:14:38.013313 kubelet[1691]: I0209 19:14:38.013284 1691 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:14:38.016886 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:14:38.018258 kubelet[1691]: I0209 19:14:38.018210 1691 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:14:38.025451 kubelet[1691]: I0209 19:14:38.025425 1691 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:14:38.025810 kubelet[1691]: I0209 19:14:38.025783 1691 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:14:38.026645 kubelet[1691]: W0209 19:14:38.026491 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.026838 kubelet[1691]: E0209 19:14:38.026816 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.027333 kubelet[1691]: E0209 19:14:38.027289 1691 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.24.4.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-8f3f3a83f5.novalocal?timeout=10s": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.085756 kubelet[1691]: I0209 19:14:38.085722 1691 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:14:38.091691 kubelet[1691]: I0209 19:14:38.091669 1691 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:14:38.091691 kubelet[1691]: I0209 19:14:38.091688 1691 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:14:38.091818 kubelet[1691]: I0209 19:14:38.091703 1691 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:14:38.095663 kubelet[1691]: I0209 19:14:38.095636 1691 policy_none.go:49] "None policy: Start"
Feb 9 19:14:38.096526 kubelet[1691]: I0209 19:14:38.096500 1691 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:14:38.096608 kubelet[1691]: I0209 19:14:38.096537 1691 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:14:38.100811 kubelet[1691]: I0209 19:14:38.100784 1691 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:14:38.100995 kubelet[1691]: I0209 19:14:38.100974 1691 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:14:38.104686 kubelet[1691]: E0209 19:14:38.104668 1691 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found"
Feb 9 19:14:38.116821 kubelet[1691]: I0209 19:14:38.116807 1691 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:14:38.116914 kubelet[1691]: I0209 19:14:38.116903 1691 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:14:38.116995 kubelet[1691]: I0209 19:14:38.116985 1691 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:14:38.117097 kubelet[1691]: E0209 19:14:38.117087 1691 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:14:38.118115 kubelet[1691]: W0209 19:14:38.118093 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.118214 kubelet[1691]: E0209 19:14:38.118204 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.127622 kubelet[1691]: I0209 19:14:38.127607 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.127927 kubelet[1691]: E0209 19:14:38.127914 1691 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.148:6443/api/v1/nodes\": dial tcp 172.24.4.148:6443: connect: connection refused" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.217602 kubelet[1691]: I0209 19:14:38.217520 1691 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:38.220918 kubelet[1691]: I0209 19:14:38.220840 1691 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:38.225928 kubelet[1691]: I0209 19:14:38.225880 1691 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:38.228495 kubelet[1691]: I0209 19:14:38.228458 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.228820 kubelet[1691]: I0209 19:14:38.228790 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.229080 kubelet[1691]: I0209 19:14:38.229041 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255e009375cab99ccb1406fd897ebb55-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"255e009375cab99ccb1406fd897ebb55\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.229322 kubelet[1691]: I0209 19:14:38.229298 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.229610 kubelet[1691]: I0209 19:14:38.229539 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.229863 kubelet[1691]: I0209 19:14:38.229838 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.230100 kubelet[1691]: I0209 19:14:38.230074 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.230376 kubelet[1691]: I0209 19:14:38.230350 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.230680 kubelet[1691]: I0209 19:14:38.230650 1691 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.238333 kubelet[1691]: I0209 19:14:38.238084 1691 status_manager.go:698] "Failed to get status for pod" podUID=444ca30309e727b340a864e2a783cc7d pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused"
Feb 9 19:14:38.242801 kubelet[1691]: E0209 19:14:38.242715 1691 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.24.4.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-8f3f3a83f5.novalocal?timeout=10s": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.248730 kubelet[1691]: I0209 19:14:38.248678 1691 status_manager.go:698] "Failed to get status for pod" podUID=0f94bca4a39c57a4d24b5a82abf05f2b pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused"
Feb 9 19:14:38.253452 kubelet[1691]: I0209 19:14:38.253420 1691 status_manager.go:698] "Failed to get status for pod" podUID=255e009375cab99ccb1406fd897ebb55 pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused"
Feb 9 19:14:38.332834 kubelet[1691]: I0209 19:14:38.332774 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.333763 kubelet[1691]: E0209 19:14:38.333714 1691 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.148:6443/api/v1/nodes\": dial tcp 172.24.4.148:6443: connect: connection refused" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.539613 env[1135]: time="2024-02-09T19:14:38.539394514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:444ca30309e727b340a864e2a783cc7d,Namespace:kube-system,Attempt:0,}"
Feb 9 19:14:38.561446 env[1135]: time="2024-02-09T19:14:38.561356949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:255e009375cab99ccb1406fd897ebb55,Namespace:kube-system,Attempt:0,}"
Feb 9 19:14:38.565832 env[1135]: time="2024-02-09T19:14:38.565710710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:0f94bca4a39c57a4d24b5a82abf05f2b,Namespace:kube-system,Attempt:0,}"
Feb 9 19:14:38.644124 kubelet[1691]: E0209 19:14:38.644048 1691 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-8f3f3a83f5.novalocal?timeout=10s": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:38.737425 kubelet[1691]: I0209 19:14:38.737360 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:38.738073 kubelet[1691]: E0209 19:14:38.738014 1691 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.148:6443/api/v1/nodes\": dial tcp 172.24.4.148:6443: connect: connection refused" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:39.130915 kubelet[1691]: W0209 19:14:39.130758 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.130915 kubelet[1691]: E0209 19:14:39.130875 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.235311 kubelet[1691]: W0209 19:14:39.235144 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.235311 kubelet[1691]: E0209 19:14:39.235270 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.385633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971557644.mount: Deactivated successfully.
Feb 9 19:14:39.387744 kubelet[1691]: W0209 19:14:39.386426 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.387744 kubelet[1691]: E0209 19:14:39.386651 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.395919 env[1135]: time="2024-02-09T19:14:39.395799056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.401644 env[1135]: time="2024-02-09T19:14:39.401534026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.402809 kubelet[1691]: W0209 19:14:39.402652 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.402809 kubelet[1691]: E0209 19:14:39.402772 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.148:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.404649 env[1135]: time="2024-02-09T19:14:39.404596617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.410297 env[1135]: time="2024-02-09T19:14:39.410238584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.412425 env[1135]: time="2024-02-09T19:14:39.412358136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.427434 env[1135]: time="2024-02-09T19:14:39.427345000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.431173 env[1135]: time="2024-02-09T19:14:39.431089987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.440644 env[1135]: time="2024-02-09T19:14:39.440536322Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.445742 kubelet[1691]: E0209 19:14:39.445553 1691 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.24.4.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-8f3f3a83f5.novalocal?timeout=10s": dial tcp 172.24.4.148:6443: connect: connection refused
Feb 9 19:14:39.450369 env[1135]: time="2024-02-09T19:14:39.450310124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.456541 env[1135]: time="2024-02-09T19:14:39.456482836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.458826 env[1135]: time="2024-02-09T19:14:39.458781070Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.460738 env[1135]: time="2024-02-09T19:14:39.460689700Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:14:39.527277 env[1135]: time="2024-02-09T19:14:39.527160567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:14:39.527277 env[1135]: time="2024-02-09T19:14:39.527230987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:14:39.527277 env[1135]: time="2024-02-09T19:14:39.527246376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:14:39.527724 env[1135]: time="2024-02-09T19:14:39.527677866Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f87daf8444f889d8f565c2c04f2ea4b51ea1e967ecaa38182d1315ae403b6c86 pid=1767 runtime=io.containerd.runc.v2
Feb 9 19:14:39.528065 env[1135]: time="2024-02-09T19:14:39.528005485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:14:39.528123 env[1135]: time="2024-02-09T19:14:39.528075564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:14:39.528123 env[1135]: time="2024-02-09T19:14:39.528106923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:14:39.528326 env[1135]: time="2024-02-09T19:14:39.528274734Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64d8973537d0d8f0ac29b9eb358afa3ea82991fa0119528334f97f4f94c80abc pid=1776 runtime=io.containerd.runc.v2
Feb 9 19:14:39.532732 env[1135]: time="2024-02-09T19:14:39.532657615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:14:39.532878 env[1135]: time="2024-02-09T19:14:39.532855081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:14:39.533002 env[1135]: time="2024-02-09T19:14:39.532974403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:14:39.533275 env[1135]: time="2024-02-09T19:14:39.533223255Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2db1da68c5056e703763f2f982eaf0f62c20636f8996bde51ac593b309292f pid=1792 runtime=io.containerd.runc.v2
Feb 9 19:14:39.539725 kubelet[1691]: I0209 19:14:39.539473 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:39.539826 kubelet[1691]: E0209 19:14:39.539804 1691 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.148:6443/api/v1/nodes\": dial tcp 172.24.4.148:6443: connect: connection refused" node="ci-3510-3-2-c-8f3f3a83f5.novalocal"
Feb 9 19:14:39.632400 env[1135]: time="2024-02-09T19:14:39.632330086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:255e009375cab99ccb1406fd897ebb55,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d8973537d0d8f0ac29b9eb358afa3ea82991fa0119528334f97f4f94c80abc\""
Feb 9 19:14:39.635982 env[1135]: time="2024-02-09T19:14:39.635948169Z" level=info msg="CreateContainer within sandbox \"64d8973537d0d8f0ac29b9eb358afa3ea82991fa0119528334f97f4f94c80abc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:14:39.642005 env[1135]: time="2024-02-09T19:14:39.641862260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:444ca30309e727b340a864e2a783cc7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d2db1da68c5056e703763f2f982eaf0f62c20636f8996bde51ac593b309292f\""
Feb 9 19:14:39.647318 env[1135]: time="2024-02-09T19:14:39.647274231Z" level=info msg="CreateContainer within sandbox \"2d2db1da68c5056e703763f2f982eaf0f62c20636f8996bde51ac593b309292f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:14:39.652335 env[1135]: time="2024-02-09T19:14:39.652287972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal,Uid:0f94bca4a39c57a4d24b5a82abf05f2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f87daf8444f889d8f565c2c04f2ea4b51ea1e967ecaa38182d1315ae403b6c86\""
Feb 9 19:14:39.658279 env[1135]: time="2024-02-09T19:14:39.658245575Z" level=info msg="CreateContainer within sandbox \"f87daf8444f889d8f565c2c04f2ea4b51ea1e967ecaa38182d1315ae403b6c86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:14:39.669087 env[1135]: time="2024-02-09T19:14:39.669043347Z" level=info msg="CreateContainer within sandbox \"64d8973537d0d8f0ac29b9eb358afa3ea82991fa0119528334f97f4f94c80abc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0879814bb0f4e9d8261315f922ac55e0371378e3aff540e9604d7e990bd4b44b\""
Feb 9 19:14:39.669878 env[1135]: time="2024-02-09T19:14:39.669853670Z" level=info msg="StartContainer for \"0879814bb0f4e9d8261315f922ac55e0371378e3aff540e9604d7e990bd4b44b\""
Feb 9 19:14:39.689143 env[1135]: time="2024-02-09T19:14:39.689103883Z" level=info msg="CreateContainer within sandbox \"2d2db1da68c5056e703763f2f982eaf0f62c20636f8996bde51ac593b309292f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"307851b7a310051fddfb0163bca0b6bb66f7acfb7b3472ace457bfa295cee901\""
Feb 9 19:14:39.689918 env[1135]: time="2024-02-09T19:14:39.689869534Z" level=info msg="StartContainer for \"307851b7a310051fddfb0163bca0b6bb66f7acfb7b3472ace457bfa295cee901\""
Feb 9 19:14:39.699804 env[1135]: time="2024-02-09T19:14:39.699750084Z" level=info msg="CreateContainer within sandbox \"f87daf8444f889d8f565c2c04f2ea4b51ea1e967ecaa38182d1315ae403b6c86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27beea71d7b1a93989b56a1262ddd3e73a4826343a7048545410a9f33e4e43be\""
Feb 9 19:14:39.700507 env[1135]:
time="2024-02-09T19:14:39.700475480Z" level=info msg="StartContainer for \"27beea71d7b1a93989b56a1262ddd3e73a4826343a7048545410a9f33e4e43be\"" Feb 9 19:14:39.769116 env[1135]: time="2024-02-09T19:14:39.769049408Z" level=info msg="StartContainer for \"0879814bb0f4e9d8261315f922ac55e0371378e3aff540e9604d7e990bd4b44b\" returns successfully" Feb 9 19:14:39.810325 env[1135]: time="2024-02-09T19:14:39.810263868Z" level=info msg="StartContainer for \"307851b7a310051fddfb0163bca0b6bb66f7acfb7b3472ace457bfa295cee901\" returns successfully" Feb 9 19:14:39.863542 env[1135]: time="2024-02-09T19:14:39.863450209Z" level=info msg="StartContainer for \"27beea71d7b1a93989b56a1262ddd3e73a4826343a7048545410a9f33e4e43be\" returns successfully" Feb 9 19:14:40.126486 kubelet[1691]: I0209 19:14:40.126453 1691 status_manager.go:698] "Failed to get status for pod" podUID=255e009375cab99ccb1406fd897ebb55 pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused" Feb 9 19:14:40.128988 kubelet[1691]: I0209 19:14:40.128960 1691 status_manager.go:698] "Failed to get status for pod" podUID=0f94bca4a39c57a4d24b5a82abf05f2b pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused" Feb 9 19:14:40.180690 kubelet[1691]: E0209 19:14:40.180662 1691 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.148:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:40.203035 kubelet[1691]: 
I0209 19:14:40.203007 1691 status_manager.go:698] "Failed to get status for pod" podUID=444ca30309e727b340a864e2a783cc7d pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" err="Get \"https://172.24.4.148:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\": dial tcp 172.24.4.148:6443: connect: connection refused" Feb 9 19:14:41.039032 kubelet[1691]: W0209 19:14:41.038992 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.039032 kubelet[1691]: E0209 19:14:41.039032 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.148:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.046405 kubelet[1691]: E0209 19:14:41.046376 1691 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.24.4.148:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-8f3f3a83f5.novalocal?timeout=10s": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.141030 kubelet[1691]: I0209 19:14:41.140971 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:41.141350 kubelet[1691]: E0209 19:14:41.141298 1691 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.148:6443/api/v1/nodes\": dial tcp 172.24.4.148:6443: connect: connection refused" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:41.187875 kubelet[1691]: W0209 19:14:41.187841 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.187875 kubelet[1691]: E0209 19:14:41.187877 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.148:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-8f3f3a83f5.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.361343 kubelet[1691]: W0209 19:14:41.361209 1691 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:41.361343 kubelet[1691]: E0209 19:14:41.361252 1691 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.148:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.148:6443: connect: connection refused Feb 9 19:14:44.113155 kubelet[1691]: E0209 19:14:44.113026 1691 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3510-3-2-c-8f3f3a83f5.novalocal" not found Feb 9 19:14:44.257442 kubelet[1691]: E0209 19:14:44.257394 1691 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:44.344866 kubelet[1691]: I0209 19:14:44.344814 1691 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:44.715081 kubelet[1691]: I0209 19:14:44.714971 1691 kubelet_node_status.go:73] "Successfully 
registered node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:44.737616 kubelet[1691]: E0209 19:14:44.737514 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:44.839187 kubelet[1691]: E0209 19:14:44.839151 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:44.939646 kubelet[1691]: E0209 19:14:44.939550 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.040287 kubelet[1691]: E0209 19:14:45.040122 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.140757 kubelet[1691]: E0209 19:14:45.140671 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.241376 kubelet[1691]: E0209 19:14:45.241259 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.341684 kubelet[1691]: E0209 19:14:45.341524 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.441889 kubelet[1691]: E0209 19:14:45.441792 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.542902 kubelet[1691]: E0209 19:14:45.542866 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:45.643631 kubelet[1691]: E0209 19:14:45.643592 1691 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"ci-3510-3-2-c-8f3f3a83f5.novalocal\" not found" Feb 9 19:14:46.006654 kubelet[1691]: I0209 19:14:46.006459 1691 apiserver.go:52] "Watching apiserver" Feb 9 19:14:46.026654 kubelet[1691]: I0209 19:14:46.026605 1691 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:14:46.080502 kubelet[1691]: I0209 19:14:46.080422 1691 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:14:46.721692 systemd[1]: Reloading. Feb 9 19:14:46.838700 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-02-09T19:14:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:14:46.838733 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-02-09T19:14:46Z" level=info msg="torcx already run" Feb 9 19:14:46.946044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:46.946245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:46.972504 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:47.083425 systemd[1]: Stopping kubelet.service... Feb 9 19:14:47.097357 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:14:47.097829 systemd[1]: Stopped kubelet.service. Feb 9 19:14:47.100681 systemd[1]: Started kubelet.service. Feb 9 19:14:47.192168 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. 
Image garbage collector will get sandbox image information from CRI. Feb 9 19:14:47.192168 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:14:47.192505 kubelet[2077]: I0209 19:14:47.192203 2077 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:14:47.194357 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:14:47.194357 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:14:47.200084 kubelet[2077]: I0209 19:14:47.200045 2077 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:14:47.200084 kubelet[2077]: I0209 19:14:47.200071 2077 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:14:47.200300 kubelet[2077]: I0209 19:14:47.200277 2077 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:14:47.201779 kubelet[2077]: I0209 19:14:47.201752 2077 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.205765 2077 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.206238 2077 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.206335 2077 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.206355 2077 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.206369 2077 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:14:47.206888 kubelet[2077]: I0209 19:14:47.206493 2077 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:14:47.207160 kubelet[2077]: I0209 19:14:47.206966 2077 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:14:47.209092 sudo[2088]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:14:47.209320 sudo[2088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:14:47.210502 kubelet[2077]: I0209 19:14:47.210283 2077 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:14:47.210502 kubelet[2077]: I0209 19:14:47.210301 2077 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:14:47.210502 kubelet[2077]: I0209 19:14:47.210321 2077 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:14:47.210502 kubelet[2077]: I0209 19:14:47.210336 2077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:14:47.225049 kubelet[2077]: I0209 19:14:47.217979 2077 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:14:47.225049 kubelet[2077]: I0209 19:14:47.218704 2077 server.go:1186] "Started kubelet" Feb 9 19:14:47.225049 kubelet[2077]: I0209 19:14:47.222930 2077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:14:47.229310 kubelet[2077]: I0209 19:14:47.226830 2077 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:14:47.229310 kubelet[2077]: I0209 19:14:47.228840 2077 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:14:47.251654 kubelet[2077]: E0209 19:14:47.240048 2077 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:14:47.251654 kubelet[2077]: E0209 19:14:47.240107 2077 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:14:47.251654 kubelet[2077]: I0209 19:14:47.244648 2077 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:14:47.251654 kubelet[2077]: I0209 19:14:47.244793 2077 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:14:47.305692 kubelet[2077]: I0209 19:14:47.305641 2077 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:14:47.368546 kubelet[2077]: I0209 19:14:47.368469 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.369182 kubelet[2077]: I0209 19:14:47.369172 2077 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:14:47.369259 kubelet[2077]: I0209 19:14:47.369250 2077 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:14:47.369337 kubelet[2077]: I0209 19:14:47.369327 2077 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:14:47.369429 kubelet[2077]: E0209 19:14:47.369420 2077 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:14:47.392036 kubelet[2077]: I0209 19:14:47.392010 2077 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:14:47.392036 kubelet[2077]: I0209 19:14:47.392031 2077 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:14:47.392036 kubelet[2077]: I0209 19:14:47.392048 2077 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:14:47.392229 kubelet[2077]: I0209 19:14:47.392183 2077 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:14:47.392229 kubelet[2077]: I0209 19:14:47.392197 2077 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:14:47.392229 kubelet[2077]: I0209 19:14:47.392203 2077 policy_none.go:49] "None policy: Start" Feb 9 
19:14:47.393837 kubelet[2077]: I0209 19:14:47.393818 2077 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.393893 kubelet[2077]: I0209 19:14:47.393875 2077 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.401298 kubelet[2077]: I0209 19:14:47.401272 2077 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:14:47.401401 kubelet[2077]: I0209 19:14:47.401304 2077 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:14:47.401460 kubelet[2077]: I0209 19:14:47.401432 2077 state_mem.go:75] "Updated machine memory state" Feb 9 19:14:47.404019 kubelet[2077]: I0209 19:14:47.403993 2077 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:14:47.404220 kubelet[2077]: I0209 19:14:47.404200 2077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:14:47.470547 kubelet[2077]: I0209 19:14:47.470519 2077 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:14:47.470789 kubelet[2077]: I0209 19:14:47.470775 2077 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:14:47.470916 kubelet[2077]: I0209 19:14:47.470904 2077 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:14:47.546701 kubelet[2077]: I0209 19:14:47.546669 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546854 kubelet[2077]: I0209 19:14:47.546720 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546854 kubelet[2077]: I0209 19:14:47.546755 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546854 kubelet[2077]: I0209 19:14:47.546799 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546854 kubelet[2077]: I0209 19:14:47.546834 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546981 kubelet[2077]: I0209 19:14:47.546866 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546981 kubelet[2077]: I0209 19:14:47.546898 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f94bca4a39c57a4d24b5a82abf05f2b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"0f94bca4a39c57a4d24b5a82abf05f2b\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546981 kubelet[2077]: I0209 19:14:47.546931 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/255e009375cab99ccb1406fd897ebb55-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"255e009375cab99ccb1406fd897ebb55\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.546981 kubelet[2077]: I0209 19:14:47.546959 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/444ca30309e727b340a864e2a783cc7d-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" (UID: \"444ca30309e727b340a864e2a783cc7d\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:47.951803 sudo[2088]: pam_unix(sudo:session): session closed for user root Feb 9 19:14:48.215496 kubelet[2077]: I0209 19:14:48.215016 2077 apiserver.go:52] "Watching apiserver" Feb 9 19:14:48.244975 kubelet[2077]: I0209 19:14:48.244932 2077 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:14:48.254150 kubelet[2077]: I0209 19:14:48.254091 2077 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:14:48.440090 kubelet[2077]: E0209 19:14:48.439999 2077 kubelet.go:1802] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:48.620801 kubelet[2077]: E0209 19:14:48.620585 2077 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:48.823360 kubelet[2077]: E0209 19:14:48.823280 2077 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" Feb 9 19:14:49.624674 kubelet[2077]: I0209 19:14:49.624588 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-c-8f3f3a83f5.novalocal" podStartSLOduration=2.624464097 pod.CreationTimestamp="2024-02-09 19:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:14:49.243683448 +0000 UTC m=+2.140106843" watchObservedRunningTime="2024-02-09 19:14:49.624464097 +0000 UTC m=+2.520887472" Feb 9 19:14:49.625513 kubelet[2077]: I0209 19:14:49.624808 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-c-8f3f3a83f5.novalocal" podStartSLOduration=2.624755611 pod.CreationTimestamp="2024-02-09 19:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:14:49.624146655 +0000 UTC m=+2.520570030" watchObservedRunningTime="2024-02-09 19:14:49.624755611 +0000 UTC m=+2.521178986" Feb 9 19:14:50.019675 kubelet[2077]: I0209 19:14:50.019644 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-c-8f3f3a83f5.novalocal" 
podStartSLOduration=3.019605339 pod.CreationTimestamp="2024-02-09 19:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:14:50.018497392 +0000 UTC m=+2.914920717" watchObservedRunningTime="2024-02-09 19:14:50.019605339 +0000 UTC m=+2.916028674" Feb 9 19:14:50.264586 sudo[1266]: pam_unix(sudo:session): session closed for user root Feb 9 19:14:50.470170 sshd[1260]: pam_unix(sshd:session): session closed for user core Feb 9 19:14:50.475886 systemd[1]: sshd@4-172.24.4.148:22-172.24.4.1:41614.service: Deactivated successfully. Feb 9 19:14:50.477539 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:14:50.480943 systemd-logind[1123]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:14:50.483114 systemd-logind[1123]: Removed session 5. Feb 9 19:14:59.182905 kubelet[2077]: I0209 19:14:59.182883 2077 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:14:59.183707 env[1135]: time="2024-02-09T19:14:59.183671499Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 9 19:14:59.183986 kubelet[2077]: I0209 19:14:59.183874 2077 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:14:59.309214 kubelet[2077]: I0209 19:14:59.309126 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:59.397652 kubelet[2077]: I0209 19:14:59.397616 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:59.419288 kubelet[2077]: I0209 19:14:59.419245 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:14:59.427492 kubelet[2077]: I0209 19:14:59.427467 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwkjz\" (UniqueName: \"kubernetes.io/projected/cf7cc758-7181-4a33-a26d-da405010726b-kube-api-access-lwkjz\") pod \"cilium-operator-f59cbd8c6-bxkrm\" (UID: \"cf7cc758-7181-4a33-a26d-da405010726b\") " pod="kube-system/cilium-operator-f59cbd8c6-bxkrm"
Feb 9 19:14:59.427696 kubelet[2077]: I0209 19:14:59.427684 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf7cc758-7181-4a33-a26d-da405010726b-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-bxkrm\" (UID: \"cf7cc758-7181-4a33-a26d-da405010726b\") " pod="kube-system/cilium-operator-f59cbd8c6-bxkrm"
Feb 9 19:14:59.528323 kubelet[2077]: I0209 19:14:59.528195 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhgjs\" (UniqueName: \"kubernetes.io/projected/761d7e5f-633b-486a-bbd1-b816f8c57198-kube-api-access-zhgjs\") pod \"kube-proxy-58pgq\" (UID: \"761d7e5f-633b-486a-bbd1-b816f8c57198\") " pod="kube-system/kube-proxy-58pgq"
Feb 9 19:14:59.528515 kubelet[2077]: I0209 19:14:59.528504 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-cgroup\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.528658 kubelet[2077]: I0209 19:14:59.528647 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-xtables-lock\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.528778 kubelet[2077]: I0209 19:14:59.528767 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-net\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.528897 kubelet[2077]: I0209 19:14:59.528885 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-bpf-maps\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529009 kubelet[2077]: I0209 19:14:59.528998 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/761d7e5f-633b-486a-bbd1-b816f8c57198-kube-proxy\") pod \"kube-proxy-58pgq\" (UID: \"761d7e5f-633b-486a-bbd1-b816f8c57198\") " pod="kube-system/kube-proxy-58pgq"
Feb 9 19:14:59.529114 kubelet[2077]: I0209 19:14:59.529104 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/761d7e5f-633b-486a-bbd1-b816f8c57198-lib-modules\") pod \"kube-proxy-58pgq\" (UID: \"761d7e5f-633b-486a-bbd1-b816f8c57198\") " pod="kube-system/kube-proxy-58pgq"
Feb 9 19:14:59.529222 kubelet[2077]: I0209 19:14:59.529212 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/761def3a-235d-452e-b574-043c0bdbb2da-cilium-config-path\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529364 kubelet[2077]: I0209 19:14:59.529354 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cni-path\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529471 kubelet[2077]: I0209 19:14:59.529461 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-lib-modules\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529610 kubelet[2077]: I0209 19:14:59.529599 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/761def3a-235d-452e-b574-043c0bdbb2da-clustermesh-secrets\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529787 kubelet[2077]: I0209 19:14:59.529758 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-hubble-tls\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.529846 kubelet[2077]: I0209 19:14:59.529839 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-etc-cni-netd\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.531897 kubelet[2077]: I0209 19:14:59.531863 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-hostproc\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.531978 kubelet[2077]: I0209 19:14:59.531948 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-run\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.532022 kubelet[2077]: I0209 19:14:59.532014 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-kernel\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.532085 kubelet[2077]: I0209 19:14:59.532068 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcds\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-kube-api-access-5zcds\") pod \"cilium-rb25m\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " pod="kube-system/cilium-rb25m"
Feb 9 19:14:59.532149 kubelet[2077]: I0209 19:14:59.532127 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/761d7e5f-633b-486a-bbd1-b816f8c57198-xtables-lock\") pod \"kube-proxy-58pgq\" (UID: \"761d7e5f-633b-486a-bbd1-b816f8c57198\") " pod="kube-system/kube-proxy-58pgq"
Feb 9 19:14:59.943843 env[1135]: time="2024-02-09T19:14:59.943710766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bxkrm,Uid:cf7cc758-7181-4a33-a26d-da405010726b,Namespace:kube-system,Attempt:0,}"
Feb 9 19:14:59.994348 env[1135]: time="2024-02-09T19:14:59.994159972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:14:59.994634 env[1135]: time="2024-02-09T19:14:59.994271371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:14:59.994634 env[1135]: time="2024-02-09T19:14:59.994350949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:14:59.995067 env[1135]: time="2024-02-09T19:14:59.994976770Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69 pid=2181 runtime=io.containerd.runc.v2
Feb 9 19:15:00.088750 env[1135]: time="2024-02-09T19:15:00.088679416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-bxkrm,Uid:cf7cc758-7181-4a33-a26d-da405010726b,Namespace:kube-system,Attempt:0,} returns sandbox id \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\""
Feb 9 19:15:00.092336 env[1135]: time="2024-02-09T19:15:00.092282605Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 19:15:00.096490 env[1135]: time="2024-02-09T19:15:00.096437085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb25m,Uid:761def3a-235d-452e-b574-043c0bdbb2da,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:00.260293 env[1135]: time="2024-02-09T19:15:00.260008977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:00.260293 env[1135]: time="2024-02-09T19:15:00.260192921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:00.261518 env[1135]: time="2024-02-09T19:15:00.260262983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:00.261518 env[1135]: time="2024-02-09T19:15:00.261267592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6 pid=2223 runtime=io.containerd.runc.v2
Feb 9 19:15:00.303401 env[1135]: time="2024-02-09T19:15:00.303328877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58pgq,Uid:761d7e5f-633b-486a-bbd1-b816f8c57198,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:00.337497 env[1135]: time="2024-02-09T19:15:00.337401602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rb25m,Uid:761def3a-235d-452e-b574-043c0bdbb2da,Namespace:kube-system,Attempt:0,} returns sandbox id \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\""
Feb 9 19:15:00.348105 env[1135]: time="2024-02-09T19:15:00.347934762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:00.348819 env[1135]: time="2024-02-09T19:15:00.348551346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:00.348819 env[1135]: time="2024-02-09T19:15:00.348621877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:00.348991 env[1135]: time="2024-02-09T19:15:00.348903434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/42123528d6856142f372d0b29641b0e22fb787cd98baef97b36ee7422197bfa4 pid=2266 runtime=io.containerd.runc.v2
Feb 9 19:15:00.384271 env[1135]: time="2024-02-09T19:15:00.384194588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58pgq,Uid:761d7e5f-633b-486a-bbd1-b816f8c57198,Namespace:kube-system,Attempt:0,} returns sandbox id \"42123528d6856142f372d0b29641b0e22fb787cd98baef97b36ee7422197bfa4\""
Feb 9 19:15:00.389078 env[1135]: time="2024-02-09T19:15:00.389033217Z" level=info msg="CreateContainer within sandbox \"42123528d6856142f372d0b29641b0e22fb787cd98baef97b36ee7422197bfa4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:15:00.411350 env[1135]: time="2024-02-09T19:15:00.411292428Z" level=info msg="CreateContainer within sandbox \"42123528d6856142f372d0b29641b0e22fb787cd98baef97b36ee7422197bfa4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"235999155913d073772825c67d31ea7dc27d99472f5669462563a4f9de8f478e\""
Feb 9 19:15:00.413850 env[1135]: time="2024-02-09T19:15:00.413765573Z" level=info msg="StartContainer for \"235999155913d073772825c67d31ea7dc27d99472f5669462563a4f9de8f478e\""
Feb 9 19:15:00.478434 env[1135]: time="2024-02-09T19:15:00.478391381Z" level=info msg="StartContainer for \"235999155913d073772825c67d31ea7dc27d99472f5669462563a4f9de8f478e\" returns successfully"
Feb 9 19:15:01.474280 kubelet[2077]: I0209 19:15:01.474209 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-58pgq" podStartSLOduration=2.4740541670000002 pod.CreationTimestamp="2024-02-09 19:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:01.47364422 +0000 UTC m=+14.370067565" watchObservedRunningTime="2024-02-09 19:15:01.474054167 +0000 UTC m=+14.370477542"
Feb 9 19:15:01.730671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334840669.mount: Deactivated successfully.
Feb 9 19:15:03.671140 env[1135]: time="2024-02-09T19:15:03.671055209Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:03.674911 env[1135]: time="2024-02-09T19:15:03.674865058Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:03.681485 env[1135]: time="2024-02-09T19:15:03.681383976Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:03.683700 env[1135]: time="2024-02-09T19:15:03.682538877Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:15:03.686689 env[1135]: time="2024-02-09T19:15:03.686610996Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 19:15:03.692992 env[1135]: time="2024-02-09T19:15:03.692877362Z" level=info msg="CreateContainer within sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:15:03.726938 env[1135]: time="2024-02-09T19:15:03.726825165Z" level=info msg="CreateContainer within sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\""
Feb 9 19:15:03.733147 env[1135]: time="2024-02-09T19:15:03.733063078Z" level=info msg="StartContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\""
Feb 9 19:15:03.817291 env[1135]: time="2024-02-09T19:15:03.817236161Z" level=info msg="StartContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" returns successfully"
Feb 9 19:15:07.543397 kubelet[2077]: I0209 19:15:07.542843 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-bxkrm" podStartSLOduration=-9.223372028312014e+09 pod.CreationTimestamp="2024-02-09 19:14:59 +0000 UTC" firstStartedPulling="2024-02-09 19:15:00.091253872 +0000 UTC m=+12.987677237" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:05.16031309 +0000 UTC m=+18.056736535" watchObservedRunningTime="2024-02-09 19:15:07.542763135 +0000 UTC m=+20.439186510"
Feb 9 19:15:11.018860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190754129.mount: Deactivated successfully.
Feb 9 19:15:16.019434 env[1135]: time="2024-02-09T19:15:16.019312870Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:16.030143 env[1135]: time="2024-02-09T19:15:16.027857860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:16.031993 env[1135]: time="2024-02-09T19:15:16.031947513Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:15:16.033625 env[1135]: time="2024-02-09T19:15:16.033533304Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 19:15:16.037715 env[1135]: time="2024-02-09T19:15:16.037688028Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:15:16.065384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498457656.mount: Deactivated successfully.
Feb 9 19:15:16.072344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855123989.mount: Deactivated successfully.
Feb 9 19:15:16.075338 env[1135]: time="2024-02-09T19:15:16.072729103Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\""
Feb 9 19:15:16.075709 env[1135]: time="2024-02-09T19:15:16.075658361Z" level=info msg="StartContainer for \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\""
Feb 9 19:15:16.135765 env[1135]: time="2024-02-09T19:15:16.135712300Z" level=info msg="StartContainer for \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\" returns successfully"
Feb 9 19:15:16.515218 env[1135]: time="2024-02-09T19:15:16.515130872Z" level=info msg="shim disconnected" id=d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf
Feb 9 19:15:16.515696 env[1135]: time="2024-02-09T19:15:16.515649123Z" level=warning msg="cleaning up after shim disconnected" id=d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf namespace=k8s.io
Feb 9 19:15:16.515927 env[1135]: time="2024-02-09T19:15:16.515888913Z" level=info msg="cleaning up dead shim"
Feb 9 19:15:16.564199 env[1135]: time="2024-02-09T19:15:16.564079845Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:15:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2527 runtime=io.containerd.runc.v2\n"
Feb 9 19:15:17.054854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf-rootfs.mount: Deactivated successfully.
Feb 9 19:15:17.543233 env[1135]: time="2024-02-09T19:15:17.542748316Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:15:17.584760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172365704.mount: Deactivated successfully.
Feb 9 19:15:17.594929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967810059.mount: Deactivated successfully.
Feb 9 19:15:17.596860 env[1135]: time="2024-02-09T19:15:17.596783434Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\""
Feb 9 19:15:17.602167 env[1135]: time="2024-02-09T19:15:17.599782874Z" level=info msg="StartContainer for \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\""
Feb 9 19:15:17.668627 env[1135]: time="2024-02-09T19:15:17.668546019Z" level=info msg="StartContainer for \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\" returns successfully"
Feb 9 19:15:17.680590 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:15:17.680911 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:15:17.681921 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 19:15:17.683522 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:15:17.700408 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:15:17.712589 env[1135]: time="2024-02-09T19:15:17.712525395Z" level=info msg="shim disconnected" id=bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1
Feb 9 19:15:17.712821 env[1135]: time="2024-02-09T19:15:17.712800030Z" level=warning msg="cleaning up after shim disconnected" id=bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1 namespace=k8s.io
Feb 9 19:15:17.712916 env[1135]: time="2024-02-09T19:15:17.712898755Z" level=info msg="cleaning up dead shim"
Feb 9 19:15:17.721074 env[1135]: time="2024-02-09T19:15:17.721027297Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:15:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2595 runtime=io.containerd.runc.v2\n"
Feb 9 19:15:18.053759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1-rootfs.mount: Deactivated successfully.
Feb 9 19:15:18.558013 env[1135]: time="2024-02-09T19:15:18.557728818Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:15:18.599724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount333187549.mount: Deactivated successfully.
Feb 9 19:15:18.618934 env[1135]: time="2024-02-09T19:15:18.618854178Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\""
Feb 9 19:15:18.620926 env[1135]: time="2024-02-09T19:15:18.620872440Z" level=info msg="StartContainer for \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\""
Feb 9 19:15:18.695309 env[1135]: time="2024-02-09T19:15:18.695265161Z" level=info msg="StartContainer for \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\" returns successfully"
Feb 9 19:15:18.730697 env[1135]: time="2024-02-09T19:15:18.730634419Z" level=info msg="shim disconnected" id=3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7
Feb 9 19:15:18.730697 env[1135]: time="2024-02-09T19:15:18.730691636Z" level=warning msg="cleaning up after shim disconnected" id=3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7 namespace=k8s.io
Feb 9 19:15:18.730697 env[1135]: time="2024-02-09T19:15:18.730702897Z" level=info msg="cleaning up dead shim"
Feb 9 19:15:18.738551 env[1135]: time="2024-02-09T19:15:18.738505969Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:15:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2657 runtime=io.containerd.runc.v2\n"
Feb 9 19:15:19.053800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7-rootfs.mount: Deactivated successfully.
Feb 9 19:15:19.551031 env[1135]: time="2024-02-09T19:15:19.550852921Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:15:19.605868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824982103.mount: Deactivated successfully.
Feb 9 19:15:19.620422 env[1135]: time="2024-02-09T19:15:19.620375704Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\""
Feb 9 19:15:19.623389 env[1135]: time="2024-02-09T19:15:19.623349457Z" level=info msg="StartContainer for \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\""
Feb 9 19:15:19.672917 env[1135]: time="2024-02-09T19:15:19.672875416Z" level=info msg="StartContainer for \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\" returns successfully"
Feb 9 19:15:19.697804 env[1135]: time="2024-02-09T19:15:19.697750562Z" level=info msg="shim disconnected" id=b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed
Feb 9 19:15:19.697804 env[1135]: time="2024-02-09T19:15:19.697798441Z" level=warning msg="cleaning up after shim disconnected" id=b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed namespace=k8s.io
Feb 9 19:15:19.697804 env[1135]: time="2024-02-09T19:15:19.697810324Z" level=info msg="cleaning up dead shim"
Feb 9 19:15:19.705818 env[1135]: time="2024-02-09T19:15:19.705778356Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:15:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2715 runtime=io.containerd.runc.v2\n"
Feb 9 19:15:20.053830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed-rootfs.mount: Deactivated successfully.
Feb 9 19:15:20.581048 env[1135]: time="2024-02-09T19:15:20.567605758Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:15:20.617889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606387143.mount: Deactivated successfully.
Feb 9 19:15:20.628895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3880786356.mount: Deactivated successfully.
Feb 9 19:15:20.639234 env[1135]: time="2024-02-09T19:15:20.639194324Z" level=info msg="CreateContainer within sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\""
Feb 9 19:15:20.641847 env[1135]: time="2024-02-09T19:15:20.641817277Z" level=info msg="StartContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\""
Feb 9 19:15:20.704003 env[1135]: time="2024-02-09T19:15:20.703948561Z" level=info msg="StartContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" returns successfully"
Feb 9 19:15:20.788114 kubelet[2077]: I0209 19:15:20.787902 2077 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 19:15:20.911358 kubelet[2077]: I0209 19:15:20.911300 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:20.918916 kubelet[2077]: I0209 19:15:20.918868 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:15:20.933095 kubelet[2077]: I0209 19:15:20.933001 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nppl4\" (UniqueName: \"kubernetes.io/projected/42ee6fbe-9f35-4f17-baea-485555ffd333-kube-api-access-nppl4\") pod \"coredns-787d4945fb-ssp8p\" (UID: \"42ee6fbe-9f35-4f17-baea-485555ffd333\") " pod="kube-system/coredns-787d4945fb-ssp8p"
Feb 9 19:15:20.933388 kubelet[2077]: I0209 19:15:20.933244 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/302da7ff-465d-44e7-842d-38f251738705-config-volume\") pod \"coredns-787d4945fb-lzcvb\" (UID: \"302da7ff-465d-44e7-842d-38f251738705\") " pod="kube-system/coredns-787d4945fb-lzcvb"
Feb 9 19:15:20.933554 kubelet[2077]: I0209 19:15:20.933462 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbxgl\" (UniqueName: \"kubernetes.io/projected/302da7ff-465d-44e7-842d-38f251738705-kube-api-access-nbxgl\") pod \"coredns-787d4945fb-lzcvb\" (UID: \"302da7ff-465d-44e7-842d-38f251738705\") " pod="kube-system/coredns-787d4945fb-lzcvb"
Feb 9 19:15:20.933758 kubelet[2077]: I0209 19:15:20.933726 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42ee6fbe-9f35-4f17-baea-485555ffd333-config-volume\") pod \"coredns-787d4945fb-ssp8p\" (UID: \"42ee6fbe-9f35-4f17-baea-485555ffd333\") " pod="kube-system/coredns-787d4945fb-ssp8p"
Feb 9 19:15:21.240235 env[1135]: time="2024-02-09T19:15:21.239677522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ssp8p,Uid:42ee6fbe-9f35-4f17-baea-485555ffd333,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:21.265671 env[1135]: time="2024-02-09T19:15:21.265617647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lzcvb,Uid:302da7ff-465d-44e7-842d-38f251738705,Namespace:kube-system,Attempt:0,}"
Feb 9 19:15:21.605602 kubelet[2077]: I0209 19:15:21.605459 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rb25m" podStartSLOduration=-9.223372014249369e+09 pod.CreationTimestamp="2024-02-09 19:14:59 +0000 UTC" firstStartedPulling="2024-02-09 19:15:00.339442609 +0000 UTC m=+13.235865934" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:21.603398631 +0000 UTC m=+34.499821956" watchObservedRunningTime="2024-02-09 19:15:21.605407917 +0000 UTC m=+34.501831242"
Feb 9 19:15:23.376681 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 19:15:23.381132 systemd-networkd[1033]: cilium_host: Link UP
Feb 9 19:15:23.382840 systemd-networkd[1033]: cilium_net: Link UP
Feb 9 19:15:23.382851 systemd-networkd[1033]: cilium_net: Gained carrier
Feb 9 19:15:23.383910 systemd-networkd[1033]: cilium_host: Gained carrier
Feb 9 19:15:23.501177 systemd-networkd[1033]: cilium_vxlan: Link UP
Feb 9 19:15:23.501192 systemd-networkd[1033]: cilium_vxlan: Gained carrier
Feb 9 19:15:23.502773 systemd-networkd[1033]: cilium_net: Gained IPv6LL
Feb 9 19:15:24.337013 kernel: NET: Registered PF_ALG protocol family
Feb 9 19:15:24.384024 systemd-networkd[1033]: cilium_host: Gained IPv6LL
Feb 9 19:15:25.187849 systemd-networkd[1033]: lxc_health: Link UP
Feb 9 19:15:25.197585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:15:25.197778 systemd-networkd[1033]: lxc_health: Gained carrier
Feb 9 19:15:25.316934 systemd-networkd[1033]: lxc483f96f072b1: Link UP
Feb 9 19:15:25.322583 kernel: eth0: renamed from tmp7b410
Feb 9 19:15:25.332546 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc483f96f072b1: link becomes ready
Feb 9 19:15:25.330668 systemd-networkd[1033]: lxc483f96f072b1: Gained carrier
Feb 9 19:15:25.343700 systemd-networkd[1033]: cilium_vxlan: Gained IPv6LL
Feb 9 19:15:25.358992 systemd-networkd[1033]: lxce59d3d0e729d: Link UP
Feb 9 19:15:25.366633 kernel: eth0: renamed from tmp29266
Feb 9 19:15:25.372233 systemd-networkd[1033]: lxce59d3d0e729d: Gained carrier
Feb 9 19:15:25.382238 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce59d3d0e729d: link becomes ready
Feb 9 19:15:26.558875 systemd-networkd[1033]: lxc483f96f072b1: Gained IPv6LL
Feb 9 19:15:26.942764 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Feb 9 19:15:27.199100 systemd-networkd[1033]: lxce59d3d0e729d: Gained IPv6LL
Feb 9 19:15:29.889833 env[1135]: time="2024-02-09T19:15:29.889703277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:29.889833 env[1135]: time="2024-02-09T19:15:29.889766847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:29.890519 env[1135]: time="2024-02-09T19:15:29.889785293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:29.890882 env[1135]: time="2024-02-09T19:15:29.890830536Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29266414cccc0f2c470af5cc7913d88ffe8570db822edf255cffa5643e56af84 pid=3257 runtime=io.containerd.runc.v2
Feb 9 19:15:30.015606 env[1135]: time="2024-02-09T19:15:30.014307420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:15:30.015606 env[1135]: time="2024-02-09T19:15:30.014693732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:15:30.015606 env[1135]: time="2024-02-09T19:15:30.014726916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:15:30.015606 env[1135]: time="2024-02-09T19:15:30.015054397Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7b410c0769783475fa91da8c26178a74efa0c5b0ddb5563efc638f728f1fe6ec pid=3296 runtime=io.containerd.runc.v2
Feb 9 19:15:30.026616 env[1135]: time="2024-02-09T19:15:30.025221420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-lzcvb,Uid:302da7ff-465d-44e7-842d-38f251738705,Namespace:kube-system,Attempt:0,} returns sandbox id \"29266414cccc0f2c470af5cc7913d88ffe8570db822edf255cffa5643e56af84\""
Feb 9 19:15:30.046597 env[1135]: time="2024-02-09T19:15:30.045540486Z" level=info msg="CreateContainer within sandbox \"29266414cccc0f2c470af5cc7913d88ffe8570db822edf255cffa5643e56af84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:15:30.090527 env[1135]: time="2024-02-09T19:15:30.090375052Z" level=info msg="CreateContainer within sandbox \"29266414cccc0f2c470af5cc7913d88ffe8570db822edf255cffa5643e56af84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddbc10ce7ef9f07f43547b5bb1cd2ee4a22bee8bdb2c395bfb898c3559176702\""
Feb 9 19:15:30.093425 env[1135]: time="2024-02-09T19:15:30.093386648Z" level=info msg="StartContainer for \"ddbc10ce7ef9f07f43547b5bb1cd2ee4a22bee8bdb2c395bfb898c3559176702\""
Feb 9 19:15:30.107351 env[1135]: time="2024-02-09T19:15:30.107258239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ssp8p,Uid:42ee6fbe-9f35-4f17-baea-485555ffd333,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b410c0769783475fa91da8c26178a74efa0c5b0ddb5563efc638f728f1fe6ec\""
Feb 9 19:15:30.144209 env[1135]: time="2024-02-09T19:15:30.143670554Z" level=info msg="CreateContainer within sandbox \"7b410c0769783475fa91da8c26178a74efa0c5b0ddb5563efc638f728f1fe6ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 19:15:30.176873
env[1135]: time="2024-02-09T19:15:30.176814336Z" level=info msg="CreateContainer within sandbox \"7b410c0769783475fa91da8c26178a74efa0c5b0ddb5563efc638f728f1fe6ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a4b1382887b8491cf8c8dd3d6f5abb46453901b60da0f0558933d3f199cfc65\"" Feb 9 19:15:30.181912 env[1135]: time="2024-02-09T19:15:30.181865832Z" level=info msg="StartContainer for \"4a4b1382887b8491cf8c8dd3d6f5abb46453901b60da0f0558933d3f199cfc65\"" Feb 9 19:15:30.198406 env[1135]: time="2024-02-09T19:15:30.197469320Z" level=info msg="StartContainer for \"ddbc10ce7ef9f07f43547b5bb1cd2ee4a22bee8bdb2c395bfb898c3559176702\" returns successfully" Feb 9 19:15:30.255911 env[1135]: time="2024-02-09T19:15:30.255853038Z" level=info msg="StartContainer for \"4a4b1382887b8491cf8c8dd3d6f5abb46453901b60da0f0558933d3f199cfc65\" returns successfully" Feb 9 19:15:30.674100 kubelet[2077]: I0209 19:15:30.674061 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-lzcvb" podStartSLOduration=31.674021096 pod.CreationTimestamp="2024-02-09 19:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:30.651338595 +0000 UTC m=+43.547761970" watchObservedRunningTime="2024-02-09 19:15:30.674021096 +0000 UTC m=+43.570444421" Feb 9 19:15:30.916166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount355153959.mount: Deactivated successfully. 
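The containerd "starting signal loop" entries above carry the pod sandbox ID (the 64-hex-char name under the runtime v2 task directory) and the shim process PID. A minimal sketch of pulling those two fields out of such a line — the regexes are illustrative helpers written for this log format, not part of containerd itself:

```python
import re

# One "starting signal loop" entry copied from the log above.
line = ('time="2024-02-09T19:15:30.015054397Z" level=info msg="starting signal loop" '
        'namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/'
        '7b410c0769783475fa91da8c26178a74efa0c5b0ddb5563efc638f728f1fe6ec '
        'pid=3296 runtime=io.containerd.runc.v2')

# Sandbox IDs are 64 hex chars in the task path under .../k8s.io/<id>.
sandbox_id = re.search(r'k8s\.io/([0-9a-f]{64})', line).group(1)
shim_pid = int(re.search(r'pid=(\d+)', line).group(1))
```

The same sandbox ID then reappears in the later `RunPodSandbox ... returns sandbox id` and `CreateContainer within sandbox` entries, which is how the shim process can be correlated with its CoreDNS pod.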
Feb 9 19:15:31.269014 kubelet[2077]: I0209 19:15:31.268930 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ssp8p" podStartSLOduration=32.268809711 pod.CreationTimestamp="2024-02-09 19:14:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:30.675410182 +0000 UTC m=+43.571833518" watchObservedRunningTime="2024-02-09 19:15:31.268809711 +0000 UTC m=+44.165233086" Feb 9 19:15:58.846349 systemd[1]: Started sshd@5-172.24.4.148:22-172.24.4.1:60524.service. Feb 9 19:16:00.303877 sshd[3492]: Accepted publickey for core from 172.24.4.1 port 60524 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:00.307638 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:00.321162 systemd[1]: Started session-6.scope. Feb 9 19:16:00.321767 systemd-logind[1123]: New session 6 of user core. Feb 9 19:16:01.088773 sshd[3492]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:01.095666 systemd-logind[1123]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:16:01.097159 systemd[1]: sshd@5-172.24.4.148:22-172.24.4.1:60524.service: Deactivated successfully. Feb 9 19:16:01.103271 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:16:01.105519 systemd-logind[1123]: Removed session 6. Feb 9 19:16:06.097090 systemd[1]: Started sshd@6-172.24.4.148:22-172.24.4.1:37378.service. Feb 9 19:16:07.517654 sshd[3507]: Accepted publickey for core from 172.24.4.1 port 37378 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:07.520091 sshd[3507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:07.530809 systemd[1]: Started session-7.scope. Feb 9 19:16:07.533520 systemd-logind[1123]: New session 7 of user core. 
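The `pod_startup_latency_tracker` entries above report `podStartSLOduration` alongside the raw timestamps. Judging from the numbers, the duration is `watchObservedRunningTime` minus `pod.CreationTimestamp`; a small sketch of that arithmetic, with the timestamps copied from the coredns-787d4945fb-lzcvb entry (the trailing " UTC" token and the `m=+...` monotonic suffix dropped so `strptime`'s `%z` can parse the offset, and fractions truncated to the microseconds `%f` supports):

```python
from datetime import datetime

# Timestamps from the kubelet pod_startup_latency_tracker entry above.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2024-02-09 19:14:59.000000 +0000", fmt)
observed = datetime.strptime("2024-02-09 19:15:30.674021 +0000", fmt)

# podStartSLOduration appears to be observed-running minus creation time.
slo_seconds = (observed - created).total_seconds()
```

The result, 31.674021 s, matches the logged `podStartSLOduration=31.674021096` to the microsecond precision retained here; the `firstStartedPulling`/`lastFinishedPulling` fields are the zero time (`0001-01-01`) because no image pull was needed.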
Feb 9 19:16:08.345726 sshd[3507]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:08.350326 systemd[1]: sshd@6-172.24.4.148:22-172.24.4.1:37378.service: Deactivated successfully. Feb 9 19:16:08.351976 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:16:08.354378 systemd-logind[1123]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:16:08.356748 systemd-logind[1123]: Removed session 7. Feb 9 19:16:13.351941 systemd[1]: Started sshd@7-172.24.4.148:22-172.24.4.1:37394.service. Feb 9 19:16:15.183124 sshd[3521]: Accepted publickey for core from 172.24.4.1 port 37394 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:15.185738 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:15.197680 systemd-logind[1123]: New session 8 of user core. Feb 9 19:16:15.201486 systemd[1]: Started session-8.scope. Feb 9 19:16:16.318846 sshd[3521]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:16.325149 systemd[1]: sshd@7-172.24.4.148:22-172.24.4.1:37394.service: Deactivated successfully. Feb 9 19:16:16.328707 systemd-logind[1123]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:16:16.330130 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:16:16.332904 systemd-logind[1123]: Removed session 8. Feb 9 19:16:21.326346 systemd[1]: Started sshd@8-172.24.4.148:22-172.24.4.1:46364.service. Feb 9 19:16:22.624753 sshd[3535]: Accepted publickey for core from 172.24.4.1 port 46364 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:22.629751 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:22.641698 systemd-logind[1123]: New session 9 of user core. Feb 9 19:16:22.642668 systemd[1]: Started session-9.scope. 
Feb 9 19:16:23.342226 sshd[3535]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:23.348898 systemd[1]: Started sshd@9-172.24.4.148:22-172.24.4.1:46366.service. Feb 9 19:16:23.350210 systemd[1]: sshd@8-172.24.4.148:22-172.24.4.1:46364.service: Deactivated successfully. Feb 9 19:16:23.353107 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:16:23.354021 systemd-logind[1123]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:16:23.363894 systemd-logind[1123]: Removed session 9. Feb 9 19:16:24.772154 sshd[3547]: Accepted publickey for core from 172.24.4.1 port 46366 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:24.774691 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:24.786896 systemd-logind[1123]: New session 10 of user core. Feb 9 19:16:24.789272 systemd[1]: Started session-10.scope. Feb 9 19:16:26.562306 systemd[1]: Started sshd@10-172.24.4.148:22-172.24.4.1:46528.service. Feb 9 19:16:26.562719 sshd[3547]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:26.586185 systemd[1]: sshd@9-172.24.4.148:22-172.24.4.1:46366.service: Deactivated successfully. Feb 9 19:16:26.591730 systemd-logind[1123]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:16:26.592033 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:16:26.596390 systemd-logind[1123]: Removed session 10. Feb 9 19:16:27.850055 sshd[3558]: Accepted publickey for core from 172.24.4.1 port 46528 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:27.852675 sshd[3558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:27.861687 systemd-logind[1123]: New session 11 of user core. Feb 9 19:16:27.863440 systemd[1]: Started session-11.scope. 
Feb 9 19:16:28.860253 sshd[3558]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:28.867372 systemd[1]: sshd@10-172.24.4.148:22-172.24.4.1:46528.service: Deactivated successfully. Feb 9 19:16:28.871774 systemd-logind[1123]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:16:28.873338 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:16:28.876963 systemd-logind[1123]: Removed session 11. Feb 9 19:16:33.868832 systemd[1]: Started sshd@11-172.24.4.148:22-172.24.4.1:46534.service. Feb 9 19:16:34.972311 sshd[3575]: Accepted publickey for core from 172.24.4.1 port 46534 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:34.975084 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:34.985500 systemd-logind[1123]: New session 12 of user core. Feb 9 19:16:34.985764 systemd[1]: Started session-12.scope. Feb 9 19:16:36.014211 sshd[3575]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:36.019498 systemd[1]: sshd@11-172.24.4.148:22-172.24.4.1:46534.service: Deactivated successfully. Feb 9 19:16:36.022005 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:16:36.026375 systemd-logind[1123]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:16:36.031079 systemd-logind[1123]: Removed session 12. Feb 9 19:16:41.021392 systemd[1]: Started sshd@12-172.24.4.148:22-172.24.4.1:54234.service. Feb 9 19:16:42.351321 sshd[3587]: Accepted publickey for core from 172.24.4.1 port 54234 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:42.354533 sshd[3587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:42.366967 systemd-logind[1123]: New session 13 of user core. Feb 9 19:16:42.368050 systemd[1]: Started session-13.scope. 
Feb 9 19:16:43.124717 sshd[3587]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:43.125693 systemd[1]: Started sshd@13-172.24.4.148:22-172.24.4.1:54248.service. Feb 9 19:16:43.134316 systemd[1]: sshd@12-172.24.4.148:22-172.24.4.1:54234.service: Deactivated successfully. Feb 9 19:16:43.140006 systemd-logind[1123]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:16:43.140157 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:16:43.144904 systemd-logind[1123]: Removed session 13. Feb 9 19:16:44.666434 sshd[3598]: Accepted publickey for core from 172.24.4.1 port 54248 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:44.668800 sshd[3598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:44.680197 systemd[1]: Started session-14.scope. Feb 9 19:16:44.680990 systemd-logind[1123]: New session 14 of user core. Feb 9 19:16:46.046522 sshd[3598]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:46.050924 systemd[1]: Started sshd@14-172.24.4.148:22-172.24.4.1:38470.service. Feb 9 19:16:46.057694 systemd[1]: sshd@13-172.24.4.148:22-172.24.4.1:54248.service: Deactivated successfully. Feb 9 19:16:46.060639 systemd-logind[1123]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:16:46.061779 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:16:46.067400 systemd-logind[1123]: Removed session 14. Feb 9 19:16:47.340120 sshd[3609]: Accepted publickey for core from 172.24.4.1 port 38470 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:47.343001 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:47.354639 systemd-logind[1123]: New session 15 of user core. Feb 9 19:16:47.355554 systemd[1]: Started session-15.scope. 
Feb 9 19:16:49.883910 sshd[3609]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:49.884139 systemd[1]: Started sshd@15-172.24.4.148:22-172.24.4.1:38486.service. Feb 9 19:16:49.899102 systemd[1]: sshd@14-172.24.4.148:22-172.24.4.1:38470.service: Deactivated successfully. Feb 9 19:16:49.902995 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:16:49.904276 systemd-logind[1123]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:16:49.912988 systemd-logind[1123]: Removed session 15. Feb 9 19:16:52.113749 sshd[3677]: Accepted publickey for core from 172.24.4.1 port 38486 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:52.116344 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:52.127143 systemd-logind[1123]: New session 16 of user core. Feb 9 19:16:52.128037 systemd[1]: Started session-16.scope. Feb 9 19:16:53.089957 sshd[3677]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:53.095903 systemd[1]: Started sshd@16-172.24.4.148:22-172.24.4.1:38502.service. Feb 9 19:16:53.102266 systemd[1]: sshd@15-172.24.4.148:22-172.24.4.1:38486.service: Deactivated successfully. Feb 9 19:16:53.105955 systemd-logind[1123]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:16:53.106027 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:16:53.112914 systemd-logind[1123]: Removed session 16. Feb 9 19:16:54.246695 sshd[3688]: Accepted publickey for core from 172.24.4.1 port 38502 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:16:54.248400 sshd[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:54.258212 systemd-logind[1123]: New session 17 of user core. Feb 9 19:16:54.258250 systemd[1]: Started session-17.scope. 
Feb 9 19:16:55.089213 sshd[3688]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:55.094972 systemd[1]: sshd@16-172.24.4.148:22-172.24.4.1:38502.service: Deactivated successfully. Feb 9 19:16:55.100056 systemd-logind[1123]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:16:55.100758 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:16:55.105198 systemd-logind[1123]: Removed session 17. Feb 9 19:17:00.094847 systemd[1]: Started sshd@17-172.24.4.148:22-172.24.4.1:42722.service. Feb 9 19:17:01.620226 sshd[3704]: Accepted publickey for core from 172.24.4.1 port 42722 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:01.623092 sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:01.635103 systemd[1]: Started session-18.scope. Feb 9 19:17:01.635640 systemd-logind[1123]: New session 18 of user core. Feb 9 19:17:02.455141 sshd[3704]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:02.460777 systemd[1]: sshd@17-172.24.4.148:22-172.24.4.1:42722.service: Deactivated successfully. Feb 9 19:17:02.463126 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:17:02.463336 systemd-logind[1123]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:17:02.466692 systemd-logind[1123]: Removed session 18. Feb 9 19:17:07.461662 systemd[1]: Started sshd@18-172.24.4.148:22-172.24.4.1:46354.service. Feb 9 19:17:08.870273 sshd[3747]: Accepted publickey for core from 172.24.4.1 port 46354 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:08.872313 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:08.878604 systemd-logind[1123]: New session 19 of user core. Feb 9 19:17:08.880774 systemd[1]: Started session-19.scope. 
Feb 9 19:17:09.562437 sshd[3747]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:09.567960 systemd[1]: sshd@18-172.24.4.148:22-172.24.4.1:46354.service: Deactivated successfully. Feb 9 19:17:09.569774 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:17:09.572716 systemd-logind[1123]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:17:09.575113 systemd-logind[1123]: Removed session 19. Feb 9 19:17:14.569477 systemd[1]: Started sshd@19-172.24.4.148:22-172.24.4.1:34566.service. Feb 9 19:17:16.095631 sshd[3760]: Accepted publickey for core from 172.24.4.1 port 34566 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:16.098971 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:16.109671 systemd-logind[1123]: New session 20 of user core. Feb 9 19:17:16.111072 systemd[1]: Started session-20.scope. Feb 9 19:17:16.853888 sshd[3760]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:16.853226 systemd[1]: Started sshd@20-172.24.4.148:22-172.24.4.1:34580.service. Feb 9 19:17:16.869038 systemd[1]: sshd@19-172.24.4.148:22-172.24.4.1:34566.service: Deactivated successfully. Feb 9 19:17:16.873895 systemd-logind[1123]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:17:16.878255 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:17:16.886021 systemd-logind[1123]: Removed session 20. Feb 9 19:17:18.295918 sshd[3770]: Accepted publickey for core from 172.24.4.1 port 34580 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:18.298494 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:18.310192 systemd[1]: Started session-21.scope. Feb 9 19:17:18.311073 systemd-logind[1123]: New session 21 of user core. 
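Each SSH connection above follows the same lifecycle: `Started session-N.scope`, then on disconnect `Session N logged out`, the scope deactivates, and logind emits `Removed session N`. A sketch that pairs those events up from a condensed excerpt of the lines above — the pairing logic is just an illustration of how the session numbers line up, not a systemd API:

```python
import re

# A condensed excerpt of the systemd/sshd entries above.
excerpt = """\
systemd[1]: Started session-6.scope.
systemd-logind[1123]: Session 6 logged out. Waiting for processes to exit.
systemd-logind[1123]: Removed session 6.
systemd[1]: Started session-7.scope.
systemd-logind[1123]: Session 7 logged out. Waiting for processes to exit.
systemd-logind[1123]: Removed session 7.
"""

started = re.findall(r"Started session-(\d+)\.scope", excerpt)
removed = re.findall(r"Removed session (\d+)\.", excerpt)

# Any session started but never removed would indicate a hung/leaked session.
orphans = set(started) - set(removed)
```

In the full log every session from 6 through 21 is eventually removed, so a check like this would report no orphans for this boot.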
Feb 9 19:17:21.390893 systemd[1]: run-containerd-runc-k8s.io-149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4-runc.1N2ULO.mount: Deactivated successfully. Feb 9 19:17:21.404738 env[1135]: time="2024-02-09T19:17:21.404695211Z" level=info msg="StopContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" with timeout 30 (s)" Feb 9 19:17:21.405755 env[1135]: time="2024-02-09T19:17:21.405714276Z" level=info msg="Stop container \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" with signal terminated" Feb 9 19:17:21.435589 env[1135]: time="2024-02-09T19:17:21.430246027Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:17:21.439649 env[1135]: time="2024-02-09T19:17:21.439615383Z" level=info msg="StopContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" with timeout 1 (s)" Feb 9 19:17:21.440609 env[1135]: time="2024-02-09T19:17:21.440538678Z" level=info msg="Stop container \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" with signal terminated" Feb 9 19:17:21.452307 systemd-networkd[1033]: lxc_health: Link DOWN Feb 9 19:17:21.452314 systemd-networkd[1033]: lxc_health: Lost carrier Feb 9 19:17:21.478636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad-rootfs.mount: Deactivated successfully. 
Feb 9 19:17:21.494185 env[1135]: time="2024-02-09T19:17:21.494112394Z" level=info msg="shim disconnected" id=43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad Feb 9 19:17:21.494458 env[1135]: time="2024-02-09T19:17:21.494438587Z" level=warning msg="cleaning up after shim disconnected" id=43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad namespace=k8s.io Feb 9 19:17:21.494541 env[1135]: time="2024-02-09T19:17:21.494525861Z" level=info msg="cleaning up dead shim" Feb 9 19:17:21.510947 env[1135]: time="2024-02-09T19:17:21.510894842Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n" Feb 9 19:17:21.515488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4-rootfs.mount: Deactivated successfully. Feb 9 19:17:21.516413 env[1135]: time="2024-02-09T19:17:21.516381602Z" level=info msg="StopContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" returns successfully" Feb 9 19:17:21.517639 env[1135]: time="2024-02-09T19:17:21.517603709Z" level=info msg="StopPodSandbox for \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\"" Feb 9 19:17:21.517811 env[1135]: time="2024-02-09T19:17:21.517776483Z" level=info msg="Container to stop \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.526123 env[1135]: time="2024-02-09T19:17:21.526074237Z" level=info msg="shim disconnected" id=149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4 Feb 9 19:17:21.526974 env[1135]: time="2024-02-09T19:17:21.526925085Z" level=warning msg="cleaning up after shim disconnected" id=149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4 namespace=k8s.io Feb 9 19:17:21.527124 env[1135]: time="2024-02-09T19:17:21.527106947Z" 
level=info msg="cleaning up dead shim" Feb 9 19:17:21.544438 env[1135]: time="2024-02-09T19:17:21.544389425Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3863 runtime=io.containerd.runc.v2\n" Feb 9 19:17:21.548214 env[1135]: time="2024-02-09T19:17:21.548170490Z" level=info msg="StopContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" returns successfully" Feb 9 19:17:21.548782 env[1135]: time="2024-02-09T19:17:21.548748456Z" level=info msg="StopPodSandbox for \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\"" Feb 9 19:17:21.549023 env[1135]: time="2024-02-09T19:17:21.548906293Z" level=info msg="Container to stop \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.549209 env[1135]: time="2024-02-09T19:17:21.549187171Z" level=info msg="Container to stop \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.549310 env[1135]: time="2024-02-09T19:17:21.549289903Z" level=info msg="Container to stop \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.549410 env[1135]: time="2024-02-09T19:17:21.549389732Z" level=info msg="Container to stop \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.549506 env[1135]: time="2024-02-09T19:17:21.549485171Z" level=info msg="Container to stop \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:21.563487 env[1135]: time="2024-02-09T19:17:21.563444624Z" level=info msg="shim disconnected" 
id=69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69 Feb 9 19:17:21.563727 env[1135]: time="2024-02-09T19:17:21.563707467Z" level=warning msg="cleaning up after shim disconnected" id=69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69 namespace=k8s.io Feb 9 19:17:21.563825 env[1135]: time="2024-02-09T19:17:21.563809008Z" level=info msg="cleaning up dead shim" Feb 9 19:17:21.580330 env[1135]: time="2024-02-09T19:17:21.580277145Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3898 runtime=io.containerd.runc.v2\n" Feb 9 19:17:21.580890 env[1135]: time="2024-02-09T19:17:21.580860352Z" level=info msg="TearDown network for sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" successfully" Feb 9 19:17:21.580998 env[1135]: time="2024-02-09T19:17:21.580977381Z" level=info msg="StopPodSandbox for \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" returns successfully" Feb 9 19:17:21.590072 env[1135]: time="2024-02-09T19:17:21.590030524Z" level=info msg="shim disconnected" id=914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6 Feb 9 19:17:21.590347 env[1135]: time="2024-02-09T19:17:21.590327873Z" level=warning msg="cleaning up after shim disconnected" id=914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6 namespace=k8s.io Feb 9 19:17:21.590445 env[1135]: time="2024-02-09T19:17:21.590424404Z" level=info msg="cleaning up dead shim" Feb 9 19:17:21.598899 env[1135]: time="2024-02-09T19:17:21.598852772Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3923 runtime=io.containerd.runc.v2\n" Feb 9 19:17:21.599190 env[1135]: time="2024-02-09T19:17:21.599158817Z" level=info msg="TearDown network for sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" successfully" Feb 9 19:17:21.599244 env[1135]: 
time="2024-02-09T19:17:21.599187592Z" level=info msg="StopPodSandbox for \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" returns successfully" Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687497 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-bpf-maps\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687695 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/761def3a-235d-452e-b574-043c0bdbb2da-clustermesh-secrets\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687799 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-run\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687874 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf7cc758-7181-4a33-a26d-da405010726b-cilium-config-path\") pod \"cf7cc758-7181-4a33-a26d-da405010726b\" (UID: \"cf7cc758-7181-4a33-a26d-da405010726b\") " Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687932 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-xtables-lock\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.689711 kubelet[2077]: I0209 19:17:21.687993 2077 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cni-path\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.690871 kubelet[2077]: I0209 19:17:21.689886 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-etc-cni-netd\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.690871 kubelet[2077]: I0209 19:17:21.689973 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zcds\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-kube-api-access-5zcds\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.690871 kubelet[2077]: I0209 19:17:21.690029 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-kernel\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.690871 kubelet[2077]: I0209 19:17:21.690081 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-lib-modules\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") " Feb 9 19:17:21.691169 kubelet[2077]: I0209 19:17:21.690981 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-net\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: 
\"761def3a-235d-452e-b574-043c0bdbb2da\") "
Feb 9 19:17:21.691169 kubelet[2077]: I0209 19:17:21.691077 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-hostproc\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") "
Feb 9 19:17:21.691169 kubelet[2077]: I0209 19:17:21.691141 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwkjz\" (UniqueName: \"kubernetes.io/projected/cf7cc758-7181-4a33-a26d-da405010726b-kube-api-access-lwkjz\") pod \"cf7cc758-7181-4a33-a26d-da405010726b\" (UID: \"cf7cc758-7181-4a33-a26d-da405010726b\") "
Feb 9 19:17:21.691413 kubelet[2077]: I0209 19:17:21.691193 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-cgroup\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") "
Feb 9 19:17:21.691413 kubelet[2077]: I0209 19:17:21.691269 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/761def3a-235d-452e-b574-043c0bdbb2da-cilium-config-path\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") "
Feb 9 19:17:21.691413 kubelet[2077]: I0209 19:17:21.691325 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-hubble-tls\") pod \"761def3a-235d-452e-b574-043c0bdbb2da\" (UID: \"761def3a-235d-452e-b574-043c0bdbb2da\") "
Feb 9 19:17:21.692050 kubelet[2077]: I0209 19:17:21.687492 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.695175 kubelet[2077]: I0209 19:17:21.694815 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.695175 kubelet[2077]: I0209 19:17:21.694905 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.695175 kubelet[2077]: I0209 19:17:21.694953 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.695175 kubelet[2077]: I0209 19:17:21.695001 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-hostproc" (OuterVolumeSpecName: "hostproc") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.700670 kubelet[2077]: I0209 19:17:21.700553 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.700939 kubelet[2077]: I0209 19:17:21.700893 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-kube-api-access-5zcds" (OuterVolumeSpecName: "kube-api-access-5zcds") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "kube-api-access-5zcds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:17:21.701137 kubelet[2077]: I0209 19:17:21.701101 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf7cc758-7181-4a33-a26d-da405010726b-kube-api-access-lwkjz" (OuterVolumeSpecName: "kube-api-access-lwkjz") pod "cf7cc758-7181-4a33-a26d-da405010726b" (UID: "cf7cc758-7181-4a33-a26d-da405010726b"). InnerVolumeSpecName "kube-api-access-lwkjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:17:21.701893 kubelet[2077]: W0209 19:17:21.701724 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/761def3a-235d-452e-b574-043c0bdbb2da/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:17:21.706687 kubelet[2077]: I0209 19:17:21.706624 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/761def3a-235d-452e-b574-043c0bdbb2da-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:17:21.708700 kubelet[2077]: I0209 19:17:21.708637 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761def3a-235d-452e-b574-043c0bdbb2da-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:17:21.708951 kubelet[2077]: I0209 19:17:21.708917 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.709347 kubelet[2077]: W0209 19:17:21.709295 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/cf7cc758-7181-4a33-a26d-da405010726b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:17:21.711605 kubelet[2077]: I0209 19:17:21.711514 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:17:21.711723 kubelet[2077]: I0209 19:17:21.711657 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cni-path" (OuterVolumeSpecName: "cni-path") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.711795 kubelet[2077]: I0209 19:17:21.711707 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.711795 kubelet[2077]: I0209 19:17:21.711765 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "761def3a-235d-452e-b574-043c0bdbb2da" (UID: "761def3a-235d-452e-b574-043c0bdbb2da"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:17:21.714316 kubelet[2077]: I0209 19:17:21.714272 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf7cc758-7181-4a33-a26d-da405010726b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cf7cc758-7181-4a33-a26d-da405010726b" (UID: "cf7cc758-7181-4a33-a26d-da405010726b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:17:21.793829 kubelet[2077]: I0209 19:17:21.793781 2077 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-hubble-tls\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.794094 kubelet[2077]: I0209 19:17:21.794071 2077 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-hostproc\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.794269 kubelet[2077]: I0209 19:17:21.794247 2077 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-lwkjz\" (UniqueName: \"kubernetes.io/projected/cf7cc758-7181-4a33-a26d-da405010726b-kube-api-access-lwkjz\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.794436 kubelet[2077]: I0209 19:17:21.794414 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-cgroup\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.794676 kubelet[2077]: I0209 19:17:21.794652 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/761def3a-235d-452e-b574-043c0bdbb2da-cilium-config-path\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.794858 kubelet[2077]: I0209 19:17:21.794837 2077 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-bpf-maps\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.795015 kubelet[2077]: I0209 19:17:21.794995 2077 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/761def3a-235d-452e-b574-043c0bdbb2da-clustermesh-secrets\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.795177 kubelet[2077]: I0209 19:17:21.795156 2077 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-etc-cni-netd\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.795379 kubelet[2077]: I0209 19:17:21.795321 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cilium-run\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.795648 kubelet[2077]: I0209 19:17:21.795623 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf7cc758-7181-4a33-a26d-da405010726b-cilium-config-path\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.795841 kubelet[2077]: I0209 19:17:21.795818 2077 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-xtables-lock\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.796013 kubelet[2077]: I0209 19:17:21.795985 2077 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-cni-path\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.796185 kubelet[2077]: I0209 19:17:21.796164 2077 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-5zcds\" (UniqueName: \"kubernetes.io/projected/761def3a-235d-452e-b574-043c0bdbb2da-kube-api-access-5zcds\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.796408 kubelet[2077]: I0209 19:17:21.796384 2077 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-kernel\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.796615 kubelet[2077]: I0209 19:17:21.796590 2077 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-lib-modules\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:21.796802 kubelet[2077]: I0209 19:17:21.796780 2077 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/761def3a-235d-452e-b574-043c0bdbb2da-host-proc-sys-net\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\""
Feb 9 19:17:22.006107 kubelet[2077]: I0209 19:17:22.003009 2077 scope.go:115] "RemoveContainer" containerID="43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad"
Feb 9 19:17:22.012033 env[1135]: time="2024-02-09T19:17:22.011938825Z" level=info msg="RemoveContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\""
Feb 9 19:17:22.037330 env[1135]: time="2024-02-09T19:17:22.032441444Z" level=info msg="RemoveContainer for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" returns successfully"
Feb 9 19:17:22.037330 env[1135]: time="2024-02-09T19:17:22.033631800Z" level=error msg="ContainerStatus for \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\": not found"
Feb 9 19:17:22.037766 kubelet[2077]: I0209 19:17:22.033214 2077 scope.go:115] "RemoveContainer" containerID="43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad"
Feb 9 19:17:22.037766 kubelet[2077]: E0209 19:17:22.035211 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\": not found" containerID="43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad"
Feb 9 19:17:22.039466 kubelet[2077]: I0209 19:17:22.039429 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad} err="failed to get container status \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\": rpc error: code = NotFound desc = an error occurred when try to find container \"43e65f1c8ad4335ab747f3affa5a7be638e4776b1e3efb982054e5433a9372ad\": not found"
Feb 9 19:17:22.039717 kubelet[2077]: I0209 19:17:22.039689 2077 scope.go:115] "RemoveContainer" containerID="149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4"
Feb 9 19:17:22.051969 env[1135]: time="2024-02-09T19:17:22.051693923Z" level=info msg="RemoveContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\""
Feb 9 19:17:22.069206 env[1135]: time="2024-02-09T19:17:22.069132573Z" level=info msg="RemoveContainer for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" returns successfully"
Feb 9 19:17:22.070133 kubelet[2077]: I0209 19:17:22.070100 2077 scope.go:115] "RemoveContainer" containerID="b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed"
Feb 9 19:17:22.076659 env[1135]: time="2024-02-09T19:17:22.076028041Z" level=info msg="RemoveContainer for \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\""
Feb 9 19:17:22.081002 env[1135]: time="2024-02-09T19:17:22.080965639Z" level=info msg="RemoveContainer for \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\" returns successfully"
Feb 9 19:17:22.081622 kubelet[2077]: I0209 19:17:22.081540 2077 scope.go:115] "RemoveContainer" containerID="3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7"
Feb 9 19:17:22.083853 env[1135]: time="2024-02-09T19:17:22.083795317Z" level=info msg="RemoveContainer for \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\""
Feb 9 19:17:22.091439 env[1135]: time="2024-02-09T19:17:22.091408233Z" level=info msg="RemoveContainer for \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\" returns successfully"
Feb 9 19:17:22.091799 kubelet[2077]: I0209 19:17:22.091766 2077 scope.go:115] "RemoveContainer" containerID="bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1"
Feb 9 19:17:22.095109 env[1135]: time="2024-02-09T19:17:22.095079452Z" level=info msg="RemoveContainer for \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\""
Feb 9 19:17:22.098547 env[1135]: time="2024-02-09T19:17:22.098508386Z" level=info msg="RemoveContainer for \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\" returns successfully"
Feb 9 19:17:22.099077 kubelet[2077]: I0209 19:17:22.098982 2077 scope.go:115] "RemoveContainer" containerID="d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf"
Feb 9 19:17:22.101984 env[1135]: time="2024-02-09T19:17:22.101938442Z" level=info msg="RemoveContainer for \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\""
Feb 9 19:17:22.105323 env[1135]: time="2024-02-09T19:17:22.105292013Z" level=info msg="RemoveContainer for \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\" returns successfully"
Feb 9 19:17:22.105580 kubelet[2077]: I0209 19:17:22.105492 2077 scope.go:115] "RemoveContainer" containerID="149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4"
Feb 9 19:17:22.105947 env[1135]: time="2024-02-09T19:17:22.105875760Z" level=error msg="ContainerStatus for \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\": not found"
Feb 9 19:17:22.106221 kubelet[2077]: E0209 19:17:22.106099 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\": not found" containerID="149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4"
Feb 9 19:17:22.106221 kubelet[2077]: I0209 19:17:22.106131 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4} err="failed to get container status \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\": rpc error: code = NotFound desc = an error occurred when try to find container \"149199e172d17de488429e74849057276c3179d7e377b6685cce8576e5065af4\": not found"
Feb 9 19:17:22.106221 kubelet[2077]: I0209 19:17:22.106142 2077 scope.go:115] "RemoveContainer" containerID="b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed"
Feb 9 19:17:22.106530 env[1135]: time="2024-02-09T19:17:22.106448186Z" level=error msg="ContainerStatus for \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\": not found"
Feb 9 19:17:22.106834 kubelet[2077]: E0209 19:17:22.106740 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\": not found" containerID="b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed"
Feb 9 19:17:22.106834 kubelet[2077]: I0209 19:17:22.106766 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed} err="failed to get container status \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"b711b0579b6991c3679a9a8542e47ca0b7aca6c8ca71b2a7dba33fad2a9279ed\": not found"
Feb 9 19:17:22.106834 kubelet[2077]: I0209 19:17:22.106775 2077 scope.go:115] "RemoveContainer" containerID="3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7"
Feb 9 19:17:22.107246 env[1135]: time="2024-02-09T19:17:22.107195661Z" level=error msg="ContainerStatus for \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\": not found"
Feb 9 19:17:22.107823 kubelet[2077]: E0209 19:17:22.107673 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\": not found" containerID="3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7"
Feb 9 19:17:22.107823 kubelet[2077]: I0209 19:17:22.107710 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7} err="failed to get container status \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3614110f9a7eae0ca80fac36f6f95c063abf591cde1a60a7b931c90106bb6fc7\": not found"
Feb 9 19:17:22.107823 kubelet[2077]: I0209 19:17:22.107729 2077 scope.go:115] "RemoveContainer" containerID="bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1"
Feb 9 19:17:22.108212 env[1135]: time="2024-02-09T19:17:22.108071356Z" level=error msg="ContainerStatus for \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\": not found"
Feb 9 19:17:22.108452 kubelet[2077]: E0209 19:17:22.108353 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\": not found" containerID="bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1"
Feb 9 19:17:22.108452 kubelet[2077]: I0209 19:17:22.108379 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1} err="failed to get container status \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc57af03573a0916bf73d56073b2eb5bd1707be9069dbdca1e9c86055c4b15a1\": not found"
Feb 9 19:17:22.108452 kubelet[2077]: I0209 19:17:22.108389 2077 scope.go:115] "RemoveContainer" containerID="d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf"
Feb 9 19:17:22.108983 env[1135]: time="2024-02-09T19:17:22.108872261Z" level=error msg="ContainerStatus for \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\": not found"
Feb 9 19:17:22.109222 kubelet[2077]: E0209 19:17:22.109131 2077 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\": not found" containerID="d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf"
Feb 9 19:17:22.109222 kubelet[2077]: I0209 19:17:22.109197 2077 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf} err="failed to get container status \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1c0289ecd746258bd2db526f68ddf901a3a8c125fedb5494aeba16606af3cdf\": not found"
Feb 9 19:17:22.383722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6-rootfs.mount: Deactivated successfully.
Feb 9 19:17:22.384055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6-shm.mount: Deactivated successfully.
Feb 9 19:17:22.384317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69-rootfs.mount: Deactivated successfully.
Feb 9 19:17:22.384587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69-shm.mount: Deactivated successfully.
Feb 9 19:17:22.384848 systemd[1]: var-lib-kubelet-pods-761def3a\x2d235d\x2d452e\x2db574\x2d043c0bdbb2da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zcds.mount: Deactivated successfully.
Feb 9 19:17:22.385126 systemd[1]: var-lib-kubelet-pods-cf7cc758\x2d7181\x2d4a33\x2da26d\x2dda405010726b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlwkjz.mount: Deactivated successfully.
Feb 9 19:17:22.385352 systemd[1]: var-lib-kubelet-pods-761def3a\x2d235d\x2d452e\x2db574\x2d043c0bdbb2da-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:17:22.385642 systemd[1]: var-lib-kubelet-pods-761def3a\x2d235d\x2d452e\x2db574\x2d043c0bdbb2da-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:17:22.465400 kubelet[2077]: E0209 19:17:22.465346 2077 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:17:23.375410 kubelet[2077]: I0209 19:17:23.375340 2077 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=761def3a-235d-452e-b574-043c0bdbb2da path="/var/lib/kubelet/pods/761def3a-235d-452e-b574-043c0bdbb2da/volumes"
Feb 9 19:17:23.377810 kubelet[2077]: I0209 19:17:23.377780 2077 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cf7cc758-7181-4a33-a26d-da405010726b path="/var/lib/kubelet/pods/cf7cc758-7181-4a33-a26d-da405010726b/volumes"
Feb 9 19:17:23.441133 systemd[1]: Started sshd@21-172.24.4.148:22-172.24.4.1:34592.service.
Feb 9 19:17:23.446106 sshd[3770]: pam_unix(sshd:session): session closed for user core
Feb 9 19:17:23.453440 systemd[1]: sshd@20-172.24.4.148:22-172.24.4.1:34580.service: Deactivated successfully.
Feb 9 19:17:23.455619 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 19:17:23.464997 systemd-logind[1123]: Session 21 logged out. Waiting for processes to exit.
Feb 9 19:17:23.469889 systemd-logind[1123]: Removed session 21.
Feb 9 19:17:24.841330 sshd[3941]: Accepted publickey for core from 172.24.4.1 port 34592 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:17:24.844515 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:17:24.856960 systemd-logind[1123]: New session 22 of user core.
Feb 9 19:17:24.857703 systemd[1]: Started session-22.scope.
Feb 9 19:17:26.458842 kubelet[2077]: I0209 19:17:26.458809 2077 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:17:26.460551 kubelet[2077]: E0209 19:17:26.460513 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="mount-cgroup"
Feb 9 19:17:26.460736 kubelet[2077]: E0209 19:17:26.460721 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="mount-bpf-fs"
Feb 9 19:17:26.460833 kubelet[2077]: E0209 19:17:26.460821 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="clean-cilium-state"
Feb 9 19:17:26.460921 kubelet[2077]: E0209 19:17:26.460910 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cf7cc758-7181-4a33-a26d-da405010726b" containerName="cilium-operator"
Feb 9 19:17:26.461022 kubelet[2077]: E0209 19:17:26.461011 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="apply-sysctl-overwrites"
Feb 9 19:17:26.461126 kubelet[2077]: E0209 19:17:26.461115 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="cilium-agent"
Feb 9 19:17:26.461269 kubelet[2077]: I0209 19:17:26.461255 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="cf7cc758-7181-4a33-a26d-da405010726b" containerName="cilium-operator"
Feb 9 19:17:26.461375 kubelet[2077]: I0209 19:17:26.461363 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="761def3a-235d-452e-b574-043c0bdbb2da" containerName="cilium-agent"
Feb 9 19:17:26.531495 kubelet[2077]: I0209 19:17:26.531465 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-clustermesh-secrets\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.532990 kubelet[2077]: I0209 19:17:26.532961 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-kernel\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533055 kubelet[2077]: I0209 19:17:26.533043 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-bpf-maps\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533108 kubelet[2077]: I0209 19:17:26.533073 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-etc-cni-netd\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533108 kubelet[2077]: I0209 19:17:26.533104 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-lib-modules\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533173 kubelet[2077]: I0209 19:17:26.533134 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-xtables-lock\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533173 kubelet[2077]: I0209 19:17:26.533162 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-cgroup\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533242 kubelet[2077]: I0209 19:17:26.533190 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-hostproc\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533242 kubelet[2077]: I0209 19:17:26.533217 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-config-path\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533327 kubelet[2077]: I0209 19:17:26.533244 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-hubble-tls\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533327 kubelet[2077]: I0209 19:17:26.533273 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6shg\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-kube-api-access-z6shg\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533327 kubelet[2077]: I0209 19:17:26.533298 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cni-path\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533327 kubelet[2077]: I0209 19:17:26.533328 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-ipsec-secrets\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533456 kubelet[2077]: I0209 19:17:26.533356 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-net\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.533456 kubelet[2077]: I0209 19:17:26.533390 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-run\") pod \"cilium-65d4j\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " pod="kube-system/cilium-65d4j"
Feb 9 19:17:26.699453 sshd[3941]: pam_unix(sshd:session): session closed for user core
Feb 9 19:17:26.701850 systemd[1]: Started sshd@22-172.24.4.148:22-172.24.4.1:36466.service.
Feb 9 19:17:26.713425 systemd[1]: sshd@21-172.24.4.148:22-172.24.4.1:34592.service: Deactivated successfully.
Feb 9 19:17:26.722126 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 19:17:26.726218 systemd-logind[1123]: Session 22 logged out. Waiting for processes to exit.
Feb 9 19:17:26.728790 systemd-logind[1123]: Removed session 22.
Feb 9 19:17:26.770122 env[1135]: time="2024-02-09T19:17:26.770002095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65d4j,Uid:e78c746d-35c4-4afa-a106-ad0cff0c1397,Namespace:kube-system,Attempt:0,}"
Feb 9 19:17:26.795628 env[1135]: time="2024-02-09T19:17:26.792011123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:17:26.795628 env[1135]: time="2024-02-09T19:17:26.792074201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:17:26.795628 env[1135]: time="2024-02-09T19:17:26.792098226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:17:26.795628 env[1135]: time="2024-02-09T19:17:26.792462121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31 pid=3967 runtime=io.containerd.runc.v2
Feb 9 19:17:26.855413 env[1135]: time="2024-02-09T19:17:26.855344123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-65d4j,Uid:e78c746d-35c4-4afa-a106-ad0cff0c1397,Namespace:kube-system,Attempt:0,} returns sandbox id \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\""
Feb 9 19:17:26.860516 env[1135]: time="2024-02-09T19:17:26.860480646Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:17:26.873120 env[1135]: time="2024-02-09T19:17:26.873070092Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\""
Feb 9 19:17:26.873811 env[1135]: time="2024-02-09T19:17:26.873783694Z" level=info msg="StartContainer for \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\""
Feb 9 19:17:26.928318 env[1135]: time="2024-02-09T19:17:26.928141378Z" level=info msg="StartContainer for \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\" returns successfully"
Feb 9 19:17:26.969337 env[1135]: time="2024-02-09T19:17:26.969276245Z" level=info msg="shim disconnected" id=ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba
Feb 9 19:17:26.969337 env[1135]: time="2024-02-09T19:17:26.969326870Z" level=warning msg="cleaning up after shim disconnected" id=ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba namespace=k8s.io Feb 9
19:17:26.969337 env[1135]: time="2024-02-09T19:17:26.969338081Z" level=info msg="cleaning up dead shim" Feb 9 19:17:26.977788 env[1135]: time="2024-02-09T19:17:26.977643378Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4052 runtime=io.containerd.runc.v2\n" Feb 9 19:17:27.058976 env[1135]: time="2024-02-09T19:17:27.058913334Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:17:27.091415 env[1135]: time="2024-02-09T19:17:27.091279883Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\"" Feb 9 19:17:27.095907 env[1135]: time="2024-02-09T19:17:27.095847557Z" level=info msg="StartContainer for \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\"" Feb 9 19:17:27.175494 env[1135]: time="2024-02-09T19:17:27.175450458Z" level=info msg="StartContainer for \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\" returns successfully" Feb 9 19:17:27.198077 env[1135]: time="2024-02-09T19:17:27.198021933Z" level=info msg="shim disconnected" id=d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb Feb 9 19:17:27.198077 env[1135]: time="2024-02-09T19:17:27.198071065Z" level=warning msg="cleaning up after shim disconnected" id=d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb namespace=k8s.io Feb 9 19:17:27.198077 env[1135]: time="2024-02-09T19:17:27.198082226Z" level=info msg="cleaning up dead shim" Feb 9 19:17:27.206287 env[1135]: time="2024-02-09T19:17:27.206232652Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:27Z\" level=info msg=\"starting signal loop\" 
namespace=k8s.io pid=4117 runtime=io.containerd.runc.v2\n" Feb 9 19:17:27.466354 kubelet[2077]: E0209 19:17:27.466306 2077 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:17:27.995333 sshd[3956]: Accepted publickey for core from 172.24.4.1 port 36466 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:27.998136 sshd[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:28.008755 systemd-logind[1123]: New session 23 of user core. Feb 9 19:17:28.010504 systemd[1]: Started session-23.scope. Feb 9 19:17:28.056696 env[1135]: time="2024-02-09T19:17:28.056396329Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:17:28.084934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount517884037.mount: Deactivated successfully. 
Feb 9 19:17:28.104780 env[1135]: time="2024-02-09T19:17:28.104713412Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\"" Feb 9 19:17:28.108753 env[1135]: time="2024-02-09T19:17:28.108721093Z" level=info msg="StartContainer for \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\"" Feb 9 19:17:28.194662 env[1135]: time="2024-02-09T19:17:28.194603123Z" level=info msg="StartContainer for \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\" returns successfully" Feb 9 19:17:28.223994 env[1135]: time="2024-02-09T19:17:28.223919883Z" level=info msg="shim disconnected" id=6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c Feb 9 19:17:28.223994 env[1135]: time="2024-02-09T19:17:28.223990705Z" level=warning msg="cleaning up after shim disconnected" id=6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c namespace=k8s.io Feb 9 19:17:28.224202 env[1135]: time="2024-02-09T19:17:28.224002998Z" level=info msg="cleaning up dead shim" Feb 9 19:17:28.232528 env[1135]: time="2024-02-09T19:17:28.232471292Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4176 runtime=io.containerd.runc.v2\n" Feb 9 19:17:28.644720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c-rootfs.mount: Deactivated successfully. Feb 9 19:17:28.841092 sshd[3956]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:28.848115 systemd[1]: Started sshd@23-172.24.4.148:22-172.24.4.1:36468.service. Feb 9 19:17:28.851239 systemd[1]: sshd@22-172.24.4.148:22-172.24.4.1:36466.service: Deactivated successfully. Feb 9 19:17:28.861619 systemd[1]: session-23.scope: Deactivated successfully. 
Feb 9 19:17:28.863138 systemd-logind[1123]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:17:28.866014 systemd-logind[1123]: Removed session 23. Feb 9 19:17:29.061097 env[1135]: time="2024-02-09T19:17:29.060856448Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:17:29.101602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980592507.mount: Deactivated successfully. Feb 9 19:17:29.112873 env[1135]: time="2024-02-09T19:17:29.112698177Z" level=info msg="CreateContainer within sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\"" Feb 9 19:17:29.125348 env[1135]: time="2024-02-09T19:17:29.122480129Z" level=info msg="StartContainer for \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\"" Feb 9 19:17:29.189589 env[1135]: time="2024-02-09T19:17:29.189527860Z" level=info msg="StartContainer for \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\" returns successfully" Feb 9 19:17:29.212959 env[1135]: time="2024-02-09T19:17:29.212910963Z" level=info msg="shim disconnected" id=7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e Feb 9 19:17:29.212959 env[1135]: time="2024-02-09T19:17:29.212959040Z" level=warning msg="cleaning up after shim disconnected" id=7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e namespace=k8s.io Feb 9 19:17:29.213182 env[1135]: time="2024-02-09T19:17:29.212971143Z" level=info msg="cleaning up dead shim" Feb 9 19:17:29.221001 env[1135]: time="2024-02-09T19:17:29.220965512Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4239 runtime=io.containerd.runc.v2\n" Feb 9 
19:17:29.647981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e-rootfs.mount: Deactivated successfully. Feb 9 19:17:30.072845 env[1135]: time="2024-02-09T19:17:30.072553799Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\"" Feb 9 19:17:30.075880 env[1135]: time="2024-02-09T19:17:30.073089828Z" level=info msg="Container to stop \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:30.076203 env[1135]: time="2024-02-09T19:17:30.075953416Z" level=info msg="Container to stop \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:30.076321 env[1135]: time="2024-02-09T19:17:30.076079857Z" level=info msg="Container to stop \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:30.076401 env[1135]: time="2024-02-09T19:17:30.076307453Z" level=info msg="Container to stop \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:17:30.080703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31-shm.mount: Deactivated successfully. Feb 9 19:17:30.157607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31-rootfs.mount: Deactivated successfully. 
Feb 9 19:17:30.159624 kubelet[2077]: I0209 19:17:30.157825 2077 setters.go:548] "Node became not ready" node="ci-3510-3-2-c-8f3f3a83f5.novalocal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:17:30.154849768 +0000 UTC m=+163.051273093 LastTransitionTime:2024-02-09 19:17:30.154849768 +0000 UTC m=+163.051273093 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:17:30.167978 env[1135]: time="2024-02-09T19:17:30.167803067Z" level=info msg="shim disconnected" id=11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31 Feb 9 19:17:30.167978 env[1135]: time="2024-02-09T19:17:30.167866543Z" level=warning msg="cleaning up after shim disconnected" id=11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31 namespace=k8s.io Feb 9 19:17:30.167978 env[1135]: time="2024-02-09T19:17:30.167878525Z" level=info msg="cleaning up dead shim" Feb 9 19:17:30.184714 env[1135]: time="2024-02-09T19:17:30.184661265Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4271 runtime=io.containerd.runc.v2\n" Feb 9 19:17:30.185012 env[1135]: time="2024-02-09T19:17:30.184984726Z" level=info msg="TearDown network for sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" successfully" Feb 9 19:17:30.185069 env[1135]: time="2024-02-09T19:17:30.185012747Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" returns successfully" Feb 9 19:17:30.289266 kubelet[2077]: I0209 19:17:30.289180 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-etc-cni-netd\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 
19:17:30.289537 kubelet[2077]: I0209 19:17:30.289311 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-net\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289537 kubelet[2077]: I0209 19:17:30.289409 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-run\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289537 kubelet[2077]: I0209 19:17:30.289521 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-clustermesh-secrets\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289824 kubelet[2077]: I0209 19:17:30.289720 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-ipsec-secrets\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289824 kubelet[2077]: I0209 19:17:30.289781 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cni-path\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289960 kubelet[2077]: I0209 19:17:30.289875 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-kernel\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.289960 kubelet[2077]: I0209 19:17:30.289931 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-lib-modules\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290173 kubelet[2077]: I0209 19:17:30.290021 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-cgroup\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290173 kubelet[2077]: I0209 19:17:30.290083 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-hubble-tls\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290173 kubelet[2077]: I0209 19:17:30.290133 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-bpf-maps\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290369 kubelet[2077]: I0209 19:17:30.290183 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-xtables-lock\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290369 kubelet[2077]: I0209 19:17:30.290290 2077 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-config-path\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290369 kubelet[2077]: I0209 19:17:30.290352 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6shg\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-kube-api-access-z6shg\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290602 kubelet[2077]: I0209 19:17:30.290403 2077 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-hostproc\") pod \"e78c746d-35c4-4afa-a106-ad0cff0c1397\" (UID: \"e78c746d-35c4-4afa-a106-ad0cff0c1397\") " Feb 9 19:17:30.290602 kubelet[2077]: I0209 19:17:30.290521 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-hostproc" (OuterVolumeSpecName: "hostproc") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.290760 kubelet[2077]: I0209 19:17:30.290638 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.290760 kubelet[2077]: I0209 19:17:30.290683 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.290760 kubelet[2077]: I0209 19:17:30.290721 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.291658 kubelet[2077]: I0209 19:17:30.291024 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.299835 systemd[1]: var-lib-kubelet-pods-e78c746d\x2d35c4\x2d4afa\x2da106\x2dad0cff0c1397-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:17:30.301642 kubelet[2077]: I0209 19:17:30.301533 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cni-path" (OuterVolumeSpecName: "cni-path") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.306790 systemd[1]: var-lib-kubelet-pods-e78c746d\x2d35c4\x2d4afa\x2da106\x2dad0cff0c1397-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:17:30.309524 kubelet[2077]: I0209 19:17:30.301894 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.309817 kubelet[2077]: I0209 19:17:30.301936 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.309984 kubelet[2077]: W0209 19:17:30.302213 2077 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e78c746d-35c4-4afa-a106-ad0cff0c1397/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:17:30.313683 kubelet[2077]: I0209 19:17:30.308369 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.313899 kubelet[2077]: I0209 19:17:30.308469 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:17:30.314052 kubelet[2077]: I0209 19:17:30.309290 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:17:30.314203 kubelet[2077]: I0209 19:17:30.309493 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:17:30.316208 kubelet[2077]: I0209 19:17:30.316159 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:17:30.317323 kubelet[2077]: I0209 19:17:30.317245 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:30.321878 kubelet[2077]: I0209 19:17:30.321683 2077 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-kube-api-access-z6shg" (OuterVolumeSpecName: "kube-api-access-z6shg") pod "e78c746d-35c4-4afa-a106-ad0cff0c1397" (UID: "e78c746d-35c4-4afa-a106-ad0cff0c1397"). InnerVolumeSpecName "kube-api-access-z6shg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:17:30.345773 sshd[4195]: Accepted publickey for core from 172.24.4.1 port 36468 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:17:30.355144 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:30.373637 systemd[1]: Started session-24.scope. Feb 9 19:17:30.380502 systemd-logind[1123]: New session 24 of user core. 
Feb 9 19:17:30.391329 kubelet[2077]: I0209 19:17:30.391261 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-run\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391329 kubelet[2077]: I0209 19:17:30.391307 2077 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-etc-cni-netd\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391329 kubelet[2077]: I0209 19:17:30.391325 2077 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-net\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391329 kubelet[2077]: I0209 19:17:30.391340 2077 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-clustermesh-secrets\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391356 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-ipsec-secrets\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391371 2077 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-host-proc-sys-kernel\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391385 2077 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-lib-modules\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391399 2077 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cni-path\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391412 2077 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-hubble-tls\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391426 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-cgroup\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.391850 kubelet[2077]: I0209 19:17:30.391440 2077 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-bpf-maps\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.392418 kubelet[2077]: I0209 19:17:30.391453 2077 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-xtables-lock\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.392418 kubelet[2077]: I0209 19:17:30.391467 2077 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e78c746d-35c4-4afa-a106-ad0cff0c1397-cilium-config-path\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.392418 kubelet[2077]: I0209 19:17:30.391533 2077 reconciler_common.go:295] "Volume detached for volume 
\"kube-api-access-z6shg\" (UniqueName: \"kubernetes.io/projected/e78c746d-35c4-4afa-a106-ad0cff0c1397-kube-api-access-z6shg\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.392418 kubelet[2077]: I0209 19:17:30.391550 2077 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e78c746d-35c4-4afa-a106-ad0cff0c1397-hostproc\") on node \"ci-3510-3-2-c-8f3f3a83f5.novalocal\" DevicePath \"\"" Feb 9 19:17:30.647392 systemd[1]: var-lib-kubelet-pods-e78c746d\x2d35c4\x2d4afa\x2da106\x2dad0cff0c1397-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz6shg.mount: Deactivated successfully. Feb 9 19:17:30.647839 systemd[1]: var-lib-kubelet-pods-e78c746d\x2d35c4\x2d4afa\x2da106\x2dad0cff0c1397-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:17:31.077832 kubelet[2077]: I0209 19:17:31.077390 2077 scope.go:115] "RemoveContainer" containerID="7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e" Feb 9 19:17:31.081058 env[1135]: time="2024-02-09T19:17:31.080376025Z" level=info msg="RemoveContainer for \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\"" Feb 9 19:17:31.090268 env[1135]: time="2024-02-09T19:17:31.090160166Z" level=info msg="RemoveContainer for \"7d3061eab58f03f48502338560b493be3b7da4f5731da12b37fdffe8d3e5023e\" returns successfully" Feb 9 19:17:31.092495 kubelet[2077]: I0209 19:17:31.092436 2077 scope.go:115] "RemoveContainer" containerID="6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c" Feb 9 19:17:31.100833 env[1135]: time="2024-02-09T19:17:31.097182482Z" level=info msg="RemoveContainer for \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\"" Feb 9 19:17:31.151868 env[1135]: time="2024-02-09T19:17:31.151818131Z" level=info msg="RemoveContainer for \"6d40e7c5c57df32efdfe5fadbe28d19d597cf6691f20e5a887a17a847713ec5c\" returns successfully" Feb 9 19:17:31.152152 kubelet[2077]: 
I0209 19:17:31.152125 2077 scope.go:115] "RemoveContainer" containerID="d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb" Feb 9 19:17:31.153407 env[1135]: time="2024-02-09T19:17:31.153370055Z" level=info msg="RemoveContainer for \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\"" Feb 9 19:17:31.158220 env[1135]: time="2024-02-09T19:17:31.158178988Z" level=info msg="RemoveContainer for \"d81df79618c1219a6785224ca441215fa6a63d386319a35444d4862aa91381fb\" returns successfully" Feb 9 19:17:31.158461 kubelet[2077]: I0209 19:17:31.158436 2077 scope.go:115] "RemoveContainer" containerID="ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba" Feb 9 19:17:31.163189 env[1135]: time="2024-02-09T19:17:31.163155648Z" level=info msg="RemoveContainer for \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\"" Feb 9 19:17:31.163342 kubelet[2077]: I0209 19:17:31.163220 2077 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:17:31.163342 kubelet[2077]: E0209 19:17:31.163307 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e78c746d-35c4-4afa-a106-ad0cff0c1397" containerName="mount-cgroup" Feb 9 19:17:31.163342 kubelet[2077]: E0209 19:17:31.163320 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e78c746d-35c4-4afa-a106-ad0cff0c1397" containerName="apply-sysctl-overwrites" Feb 9 19:17:31.163342 kubelet[2077]: E0209 19:17:31.163330 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e78c746d-35c4-4afa-a106-ad0cff0c1397" containerName="mount-bpf-fs" Feb 9 19:17:31.163342 kubelet[2077]: E0209 19:17:31.163338 2077 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e78c746d-35c4-4afa-a106-ad0cff0c1397" containerName="clean-cilium-state" Feb 9 19:17:31.163787 kubelet[2077]: I0209 19:17:31.163385 2077 memory_manager.go:346] "RemoveStaleState removing state" podUID="e78c746d-35c4-4afa-a106-ad0cff0c1397" containerName="clean-cilium-state" Feb 9 
19:17:31.172140 env[1135]: time="2024-02-09T19:17:31.172105415Z" level=info msg="RemoveContainer for \"ae3643c2f566efe3bfb255883db7044e1c38bb4e1e384d9a70a3968a11b346ba\" returns successfully" Feb 9 19:17:31.296523 kubelet[2077]: I0209 19:17:31.296476 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-bpf-maps\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296523 kubelet[2077]: I0209 19:17:31.296523 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-xtables-lock\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296732 kubelet[2077]: I0209 19:17:31.296587 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ebbf956a-6582-493c-a064-2038ae344597-clustermesh-secrets\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296732 kubelet[2077]: I0209 19:17:31.296618 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-cni-path\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296732 kubelet[2077]: I0209 19:17:31.296670 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-host-proc-sys-net\") pod \"cilium-5fvkj\" (UID: 
\"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296732 kubelet[2077]: I0209 19:17:31.296726 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-lib-modules\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296878 kubelet[2077]: I0209 19:17:31.296757 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ebbf956a-6582-493c-a064-2038ae344597-cilium-config-path\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296878 kubelet[2077]: I0209 19:17:31.296784 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjlp\" (UniqueName: \"kubernetes.io/projected/ebbf956a-6582-493c-a064-2038ae344597-kube-api-access-4mjlp\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296878 kubelet[2077]: I0209 19:17:31.296829 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ebbf956a-6582-493c-a064-2038ae344597-cilium-ipsec-secrets\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296878 kubelet[2077]: I0209 19:17:31.296860 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-hostproc\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296996 kubelet[2077]: 
I0209 19:17:31.296904 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ebbf956a-6582-493c-a064-2038ae344597-hubble-tls\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296996 kubelet[2077]: I0209 19:17:31.296939 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-host-proc-sys-kernel\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.296996 kubelet[2077]: I0209 19:17:31.296983 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-cilium-cgroup\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.297154 kubelet[2077]: I0209 19:17:31.297013 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-cilium-run\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.297154 kubelet[2077]: I0209 19:17:31.297066 2077 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ebbf956a-6582-493c-a064-2038ae344597-etc-cni-netd\") pod \"cilium-5fvkj\" (UID: \"ebbf956a-6582-493c-a064-2038ae344597\") " pod="kube-system/cilium-5fvkj" Feb 9 19:17:31.371780 env[1135]: time="2024-02-09T19:17:31.371628443Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\"" 
Feb 9 19:17:31.372189 env[1135]: time="2024-02-09T19:17:31.372113720Z" level=info msg="TearDown network for sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" successfully" Feb 9 19:17:31.372350 env[1135]: time="2024-02-09T19:17:31.372315187Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" returns successfully" Feb 9 19:17:31.374949 kubelet[2077]: I0209 19:17:31.374918 2077 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e78c746d-35c4-4afa-a106-ad0cff0c1397 path="/var/lib/kubelet/pods/e78c746d-35c4-4afa-a106-ad0cff0c1397/volumes" Feb 9 19:17:31.469864 env[1135]: time="2024-02-09T19:17:31.469819834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fvkj,Uid:ebbf956a-6582-493c-a064-2038ae344597,Namespace:kube-system,Attempt:0,}" Feb 9 19:17:31.484171 env[1135]: time="2024-02-09T19:17:31.484052280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:17:31.484171 env[1135]: time="2024-02-09T19:17:31.484125844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:17:31.484457 env[1135]: time="2024-02-09T19:17:31.484140301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:17:31.485747 env[1135]: time="2024-02-09T19:17:31.484870935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2 pid=4311 runtime=io.containerd.runc.v2 Feb 9 19:17:31.526351 env[1135]: time="2024-02-09T19:17:31.526301696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fvkj,Uid:ebbf956a-6582-493c-a064-2038ae344597,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\"" Feb 9 19:17:31.530633 env[1135]: time="2024-02-09T19:17:31.530593876Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:17:31.543724 env[1135]: time="2024-02-09T19:17:31.543685416Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bd7640851d158c9493da00a6fbfb6dcc20e072d1bac768777396742a47a6de5\"" Feb 9 19:17:31.545951 env[1135]: time="2024-02-09T19:17:31.544736877Z" level=info msg="StartContainer for \"0bd7640851d158c9493da00a6fbfb6dcc20e072d1bac768777396742a47a6de5\"" Feb 9 19:17:31.597056 env[1135]: time="2024-02-09T19:17:31.597011043Z" level=info msg="StartContainer for \"0bd7640851d158c9493da00a6fbfb6dcc20e072d1bac768777396742a47a6de5\" returns successfully" Feb 9 19:17:31.629827 env[1135]: time="2024-02-09T19:17:31.629783391Z" level=info msg="shim disconnected" id=0bd7640851d158c9493da00a6fbfb6dcc20e072d1bac768777396742a47a6de5 Feb 9 19:17:31.630006 env[1135]: time="2024-02-09T19:17:31.629830486Z" level=warning msg="cleaning up after shim disconnected" id=0bd7640851d158c9493da00a6fbfb6dcc20e072d1bac768777396742a47a6de5 namespace=k8s.io Feb 9 
19:17:31.630006 env[1135]: time="2024-02-09T19:17:31.629842659Z" level=info msg="cleaning up dead shim" Feb 9 19:17:31.638059 env[1135]: time="2024-02-09T19:17:31.638011118Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4395 runtime=io.containerd.runc.v2\n" Feb 9 19:17:32.091020 env[1135]: time="2024-02-09T19:17:32.090513824Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:17:32.133613 env[1135]: time="2024-02-09T19:17:32.130198585Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b\"" Feb 9 19:17:32.131920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1200544529.mount: Deactivated successfully. 
Feb 9 19:17:32.135033 env[1135]: time="2024-02-09T19:17:32.134969853Z" level=info msg="StartContainer for \"d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b\"" Feb 9 19:17:32.217521 env[1135]: time="2024-02-09T19:17:32.217484622Z" level=info msg="StartContainer for \"d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b\" returns successfully" Feb 9 19:17:32.239087 env[1135]: time="2024-02-09T19:17:32.239042502Z" level=info msg="shim disconnected" id=d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b Feb 9 19:17:32.239397 env[1135]: time="2024-02-09T19:17:32.239363889Z" level=warning msg="cleaning up after shim disconnected" id=d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b namespace=k8s.io Feb 9 19:17:32.239488 env[1135]: time="2024-02-09T19:17:32.239471275Z" level=info msg="cleaning up dead shim" Feb 9 19:17:32.248182 env[1135]: time="2024-02-09T19:17:32.248151105Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4459 runtime=io.containerd.runc.v2\n" Feb 9 19:17:32.468466 kubelet[2077]: E0209 19:17:32.468414 2077 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:17:32.647391 systemd[1]: run-containerd-runc-k8s.io-d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b-runc.vy8nOF.mount: Deactivated successfully. Feb 9 19:17:32.647752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d56b30d4c3269ad222d3fb91170a7304f9d02a6b7c68b1fd6f65fa886b212c9b-rootfs.mount: Deactivated successfully. 
Feb 9 19:17:33.096946 env[1135]: time="2024-02-09T19:17:33.096873235Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:17:33.137425 env[1135]: time="2024-02-09T19:17:33.137097330Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5\"" Feb 9 19:17:33.139412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279642167.mount: Deactivated successfully. Feb 9 19:17:33.144121 env[1135]: time="2024-02-09T19:17:33.144055899Z" level=info msg="StartContainer for \"a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5\"" Feb 9 19:17:33.229969 env[1135]: time="2024-02-09T19:17:33.229928053Z" level=info msg="StartContainer for \"a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5\" returns successfully" Feb 9 19:17:33.256655 env[1135]: time="2024-02-09T19:17:33.256538725Z" level=info msg="shim disconnected" id=a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5 Feb 9 19:17:33.256849 env[1135]: time="2024-02-09T19:17:33.256657672Z" level=warning msg="cleaning up after shim disconnected" id=a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5 namespace=k8s.io Feb 9 19:17:33.256849 env[1135]: time="2024-02-09T19:17:33.256672520Z" level=info msg="cleaning up dead shim" Feb 9 19:17:33.266431 env[1135]: time="2024-02-09T19:17:33.266373462Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4518 runtime=io.containerd.runc.v2\n" Feb 9 19:17:33.647644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6a2a8953a4ae8f3b3174ffa3e44466ca806d3f9242792ab20044374d41a69c5-rootfs.mount: Deactivated 
successfully. Feb 9 19:17:34.135690 env[1135]: time="2024-02-09T19:17:34.129079341Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:17:34.160817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213181447.mount: Deactivated successfully. Feb 9 19:17:34.177345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613219706.mount: Deactivated successfully. Feb 9 19:17:34.186100 env[1135]: time="2024-02-09T19:17:34.186060205Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7\"" Feb 9 19:17:34.187027 env[1135]: time="2024-02-09T19:17:34.186988493Z" level=info msg="StartContainer for \"9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7\"" Feb 9 19:17:34.250210 env[1135]: time="2024-02-09T19:17:34.250171478Z" level=info msg="StartContainer for \"9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7\" returns successfully" Feb 9 19:17:34.277326 env[1135]: time="2024-02-09T19:17:34.277276743Z" level=info msg="shim disconnected" id=9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7 Feb 9 19:17:34.277606 env[1135]: time="2024-02-09T19:17:34.277584365Z" level=warning msg="cleaning up after shim disconnected" id=9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7 namespace=k8s.io Feb 9 19:17:34.277695 env[1135]: time="2024-02-09T19:17:34.277679138Z" level=info msg="cleaning up dead shim" Feb 9 19:17:34.285876 env[1135]: time="2024-02-09T19:17:34.285841133Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:17:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4574 runtime=io.containerd.runc.v2\n" Feb 9 19:17:34.647672 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-9ea8b4ece3587466de789cba66b1b528fe1935588e601d0070bd8329af097aa7-rootfs.mount: Deactivated successfully. Feb 9 19:17:35.137626 env[1135]: time="2024-02-09T19:17:35.137509484Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:17:35.179846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180886695.mount: Deactivated successfully. Feb 9 19:17:35.188050 env[1135]: time="2024-02-09T19:17:35.187998873Z" level=info msg="CreateContainer within sandbox \"0b9617208d42ec0f6e4eb00d5ddf52d763f8eb0d8b1d2762a3ec2e88a048b1b2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692\"" Feb 9 19:17:35.188782 env[1135]: time="2024-02-09T19:17:35.188713240Z" level=info msg="StartContainer for \"ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692\"" Feb 9 19:17:35.254195 env[1135]: time="2024-02-09T19:17:35.254152436Z" level=info msg="StartContainer for \"ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692\" returns successfully" Feb 9 19:17:36.154304 kubelet[2077]: I0209 19:17:36.154196 2077 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5fvkj" podStartSLOduration=5.154087296 pod.CreationTimestamp="2024-02-09 19:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:17:36.147543542 +0000 UTC m=+169.043966877" watchObservedRunningTime="2024-02-09 19:17:36.154087296 +0000 UTC m=+169.050510641" Feb 9 19:17:36.194610 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:17:36.241594 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 9 19:17:37.255889 systemd[1]: 
run-containerd-runc-k8s.io-ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692-runc.4pQfp7.mount: Deactivated successfully. Feb 9 19:17:39.484623 systemd[1]: run-containerd-runc-k8s.io-ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692-runc.4vWkrv.mount: Deactivated successfully. Feb 9 19:17:39.501075 systemd-networkd[1033]: lxc_health: Link UP Feb 9 19:17:39.507734 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:17:39.507592 systemd-networkd[1033]: lxc_health: Gained carrier Feb 9 19:17:41.150821 systemd-networkd[1033]: lxc_health: Gained IPv6LL Feb 9 19:17:41.834644 systemd[1]: run-containerd-runc-k8s.io-ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692-runc.7ejGVW.mount: Deactivated successfully. Feb 9 19:17:44.038484 systemd[1]: run-containerd-runc-k8s.io-ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692-runc.uVz59Z.mount: Deactivated successfully. Feb 9 19:17:46.335876 systemd[1]: run-containerd-runc-k8s.io-ef6fabf1623a336a5c5a2c0f097d013464fff977d8da691ae9fdd1a8177e4692-runc.JErGTp.mount: Deactivated successfully. Feb 9 19:17:46.710500 sshd[4195]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:46.716877 systemd[1]: sshd@23-172.24.4.148:22-172.24.4.1:36468.service: Deactivated successfully. Feb 9 19:17:46.719164 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:17:46.719353 systemd-logind[1123]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:17:46.722895 systemd-logind[1123]: Removed session 24. 
Feb 9 19:17:47.267831 env[1135]: time="2024-02-09T19:17:47.267734680Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\"" Feb 9 19:17:47.268724 env[1135]: time="2024-02-09T19:17:47.267915642Z" level=info msg="TearDown network for sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" successfully" Feb 9 19:17:47.268724 env[1135]: time="2024-02-09T19:17:47.267992403Z" level=info msg="StopPodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" returns successfully" Feb 9 19:17:47.268975 env[1135]: time="2024-02-09T19:17:47.268933431Z" level=info msg="RemovePodSandbox for \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\"" Feb 9 19:17:47.269053 env[1135]: time="2024-02-09T19:17:47.268992079Z" level=info msg="Forcibly stopping sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\"" Feb 9 19:17:47.269186 env[1135]: time="2024-02-09T19:17:47.269133510Z" level=info msg="TearDown network for sandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" successfully" Feb 9 19:17:47.280076 env[1135]: time="2024-02-09T19:17:47.279902520Z" level=info msg="RemovePodSandbox \"11e2b18ed708118aab456c5c799c2f890ec1b5b3d380250de4cb68358b451b31\" returns successfully" Feb 9 19:17:47.280966 env[1135]: time="2024-02-09T19:17:47.280903879Z" level=info msg="StopPodSandbox for \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\"" Feb 9 19:17:47.281146 env[1135]: time="2024-02-09T19:17:47.281056699Z" level=info msg="TearDown network for sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" successfully" Feb 9 19:17:47.281260 env[1135]: time="2024-02-09T19:17:47.281136306Z" level=info msg="StopPodSandbox for \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" returns successfully" Feb 9 19:17:47.284807 env[1135]: time="2024-02-09T19:17:47.281923802Z" level=info 
msg="RemovePodSandbox for \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\"" Feb 9 19:17:47.284807 env[1135]: time="2024-02-09T19:17:47.282001675Z" level=info msg="Forcibly stopping sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\"" Feb 9 19:17:47.284807 env[1135]: time="2024-02-09T19:17:47.282176075Z" level=info msg="TearDown network for sandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" successfully" Feb 9 19:17:47.287693 env[1135]: time="2024-02-09T19:17:47.287634400Z" level=info msg="RemovePodSandbox \"914a54d49f666f7d3a1bfdabd89009fe57e9c83a9fad3a2ea081a63ae5b53fc6\" returns successfully" Feb 9 19:17:47.288617 env[1135]: time="2024-02-09T19:17:47.288495052Z" level=info msg="StopPodSandbox for \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\"" Feb 9 19:17:47.288763 env[1135]: time="2024-02-09T19:17:47.288683037Z" level=info msg="TearDown network for sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" successfully" Feb 9 19:17:47.288870 env[1135]: time="2024-02-09T19:17:47.288751563Z" level=info msg="StopPodSandbox for \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" returns successfully" Feb 9 19:17:47.289428 env[1135]: time="2024-02-09T19:17:47.289355322Z" level=info msg="RemovePodSandbox for \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\"" Feb 9 19:17:47.289553 env[1135]: time="2024-02-09T19:17:47.289422936Z" level=info msg="Forcibly stopping sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\"" Feb 9 19:17:47.289699 env[1135]: time="2024-02-09T19:17:47.289653800Z" level=info msg="TearDown network for sandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" successfully" Feb 9 19:17:47.294879 env[1135]: time="2024-02-09T19:17:47.294786818Z" level=info msg="RemovePodSandbox \"69a23001951013e559ed778885406253e6a007bca94780b63bd413bba55bbd69\" returns successfully"