Feb 9 19:20:15.029913 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:20:15.029971 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:20:15.029993 kernel: BIOS-provided physical RAM map:
Feb 9 19:20:15.030006 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 9 19:20:15.030019 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 9 19:20:15.030031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 9 19:20:15.030047 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 9 19:20:15.030060 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 9 19:20:15.030076 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 9 19:20:15.030088 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 9 19:20:15.030101 kernel: NX (Execute Disable) protection: active
Feb 9 19:20:15.030113 kernel: SMBIOS 2.8 present.
Feb 9 19:20:15.030125 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 9 19:20:15.030138 kernel: Hypervisor detected: KVM
Feb 9 19:20:15.030153 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:20:15.030170 kernel: kvm-clock: cpu 0, msr 71faa001, primary cpu clock
Feb 9 19:20:15.030183 kernel: kvm-clock: using sched offset of 4941263657 cycles
Feb 9 19:20:15.030198 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:20:15.030212 kernel: tsc: Detected 1996.249 MHz processor
Feb 9 19:20:15.030226 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:20:15.030240 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:20:15.030254 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 9 19:20:15.030268 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:20:15.030285 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:20:15.030299 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 9 19:20:15.030313 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:20:15.030327 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:20:15.030341 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:20:15.030355 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 9 19:20:15.030369 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:20:15.030382 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:20:15.030396 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 9 19:20:15.030413 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 9 19:20:15.030427 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 9 19:20:15.030441 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 9 19:20:15.030454 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 9 19:20:15.030468 kernel: No NUMA configuration found
Feb 9 19:20:15.030481 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 9 19:20:15.030495 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 9 19:20:15.030509 kernel: Zone ranges:
Feb 9 19:20:15.030531 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:20:15.030571 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 9 19:20:15.030585 kernel: Normal empty
Feb 9 19:20:15.030600 kernel: Movable zone start for each node
Feb 9 19:20:15.030614 kernel: Early memory node ranges
Feb 9 19:20:15.030628 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 9 19:20:15.030646 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 9 19:20:15.030661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 9 19:20:15.030675 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:20:15.030689 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 9 19:20:15.030703 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 9 19:20:15.030717 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 9 19:20:15.030731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:20:15.030746 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:20:15.030760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:20:15.030777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:20:15.030791 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:20:15.030806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:20:15.030820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:20:15.030834 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:20:15.030848 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:20:15.030863 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 9 19:20:15.030877 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:20:15.030891 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:20:15.030906 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:20:15.030924 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:20:15.030938 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:20:15.030952 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:20:15.030966 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 9 19:20:15.030980 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 9 19:20:15.030995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 9 19:20:15.031009 kernel: Policy zone: DMA32
Feb 9 19:20:15.031026 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:20:15.031045 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:20:15.031059 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:20:15.031074 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:20:15.031089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:20:15.031104 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 9 19:20:15.031119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:20:15.031133 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:20:15.031147 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:20:15.031164 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:20:15.031179 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:20:15.031194 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:20:15.031209 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:20:15.031223 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:20:15.031238 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:20:15.031252 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:20:15.031266 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 9 19:20:15.031281 kernel: Console: colour VGA+ 80x25
Feb 9 19:20:15.031296 kernel: printk: console [tty0] enabled
Feb 9 19:20:15.031306 kernel: printk: console [ttyS0] enabled
Feb 9 19:20:15.031315 kernel: ACPI: Core revision 20210730
Feb 9 19:20:15.031325 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:20:15.031335 kernel: x2apic enabled
Feb 9 19:20:15.031344 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:20:15.031354 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:20:15.031362 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:20:15.031370 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 9 19:20:15.031377 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 9 19:20:15.031387 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 9 19:20:15.031394 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:20:15.031402 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:20:15.031410 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:20:15.031417 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:20:15.031425 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:20:15.031433 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 9 19:20:15.031440 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:20:15.031448 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:20:15.031457 kernel: LSM: Security Framework initializing
Feb 9 19:20:15.031465 kernel: SELinux: Initializing.
Feb 9 19:20:15.031472 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:20:15.031480 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 9 19:20:15.031488 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 9 19:20:15.031496 kernel: Performance Events: AMD PMU driver.
Feb 9 19:20:15.031503 kernel: ... version: 0
Feb 9 19:20:15.031511 kernel: ... bit width: 48
Feb 9 19:20:15.031518 kernel: ... generic registers: 4
Feb 9 19:20:15.031534 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:20:15.033578 kernel: ... max period: 00007fffffffffff
Feb 9 19:20:15.033590 kernel: ... fixed-purpose events: 0
Feb 9 19:20:15.033599 kernel: ... event mask: 000000000000000f
Feb 9 19:20:15.033607 kernel: signal: max sigframe size: 1440
Feb 9 19:20:15.033615 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:20:15.033623 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:20:15.033631 kernel: x86: Booting SMP configuration:
Feb 9 19:20:15.033641 kernel: .... node #0, CPUs: #1
Feb 9 19:20:15.033649 kernel: kvm-clock: cpu 1, msr 71faa041, secondary cpu clock
Feb 9 19:20:15.033657 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 9 19:20:15.033664 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:20:15.033672 kernel: smpboot: Max logical packages: 2
Feb 9 19:20:15.033680 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 9 19:20:15.033688 kernel: devtmpfs: initialized
Feb 9 19:20:15.033696 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:20:15.033705 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:20:15.033714 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:20:15.033722 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:20:15.033731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:20:15.033739 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:20:15.033747 kernel: audit: type=2000 audit(1707506414.226:1): state=initialized audit_enabled=0 res=1
Feb 9 19:20:15.033754 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:20:15.033762 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:20:15.033770 kernel: cpuidle: using governor menu
Feb 9 19:20:15.033778 kernel: ACPI: bus type PCI registered
Feb 9 19:20:15.033788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:20:15.033796 kernel: dca service started, version 1.12.1
Feb 9 19:20:15.033804 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:20:15.033812 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:20:15.033820 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:20:15.033828 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:20:15.033836 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:20:15.033844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:20:15.033851 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:20:15.033861 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:20:15.033869 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:20:15.033877 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:20:15.033885 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:20:15.033893 kernel: ACPI: Interpreter enabled
Feb 9 19:20:15.033901 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:20:15.033909 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:20:15.033917 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:20:15.033925 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:20:15.033944 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:20:15.034095 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:20:15.034181 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 9 19:20:15.034193 kernel: acpiphp: Slot [3] registered
Feb 9 19:20:15.034202 kernel: acpiphp: Slot [4] registered
Feb 9 19:20:15.034210 kernel: acpiphp: Slot [5] registered
Feb 9 19:20:15.034217 kernel: acpiphp: Slot [6] registered
Feb 9 19:20:15.034228 kernel: acpiphp: Slot [7] registered
Feb 9 19:20:15.034236 kernel: acpiphp: Slot [8] registered
Feb 9 19:20:15.034244 kernel: acpiphp: Slot [9] registered
Feb 9 19:20:15.034252 kernel: acpiphp: Slot [10] registered
Feb 9 19:20:15.034260 kernel: acpiphp: Slot [11] registered
Feb 9 19:20:15.034268 kernel: acpiphp: Slot [12] registered
Feb 9 19:20:15.034276 kernel: acpiphp: Slot [13] registered
Feb 9 19:20:15.034284 kernel: acpiphp: Slot [14] registered
Feb 9 19:20:15.034291 kernel: acpiphp: Slot [15] registered
Feb 9 19:20:15.034299 kernel: acpiphp: Slot [16] registered
Feb 9 19:20:15.034309 kernel: acpiphp: Slot [17] registered
Feb 9 19:20:15.034317 kernel: acpiphp: Slot [18] registered
Feb 9 19:20:15.034325 kernel: acpiphp: Slot [19] registered
Feb 9 19:20:15.034333 kernel: acpiphp: Slot [20] registered
Feb 9 19:20:15.034340 kernel: acpiphp: Slot [21] registered
Feb 9 19:20:15.034348 kernel: acpiphp: Slot [22] registered
Feb 9 19:20:15.034356 kernel: acpiphp: Slot [23] registered
Feb 9 19:20:15.034364 kernel: acpiphp: Slot [24] registered
Feb 9 19:20:15.034371 kernel: acpiphp: Slot [25] registered
Feb 9 19:20:15.034381 kernel: acpiphp: Slot [26] registered
Feb 9 19:20:15.034389 kernel: acpiphp: Slot [27] registered
Feb 9 19:20:15.034396 kernel: acpiphp: Slot [28] registered
Feb 9 19:20:15.034404 kernel: acpiphp: Slot [29] registered
Feb 9 19:20:15.034412 kernel: acpiphp: Slot [30] registered
Feb 9 19:20:15.034420 kernel: acpiphp: Slot [31] registered
Feb 9 19:20:15.034427 kernel: PCI host bridge to bus 0000:00
Feb 9 19:20:15.034522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:20:15.034617 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:20:15.034695 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:20:15.034776 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 9 19:20:15.034854 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 9 19:20:15.034927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:20:15.035026 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:20:15.035119 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:20:15.035214 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:20:15.035297 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 9 19:20:15.035380 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:20:15.035462 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:20:15.035562 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:20:15.035646 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:20:15.035745 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:20:15.035835 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 9 19:20:15.035923 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 9 19:20:15.036039 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 9 19:20:15.036125 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 9 19:20:15.036209 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 9 19:20:15.036298 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 9 19:20:15.036385 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 9 19:20:15.036467 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:20:15.036575 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:20:15.036663 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 9 19:20:15.036746 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 9 19:20:15.036829 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 9 19:20:15.036911 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 9 19:20:15.037015 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:20:15.037099 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:20:15.037182 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 9 19:20:15.037263 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 9 19:20:15.037352 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 9 19:20:15.037437 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 9 19:20:15.037528 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 9 19:20:15.044732 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:20:15.044824 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 9 19:20:15.044910 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 9 19:20:15.044922 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:20:15.044931 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:20:15.044939 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:20:15.044948 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:20:15.044956 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:20:15.044974 kernel: iommu: Default domain type: Translated
Feb 9 19:20:15.044982 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:20:15.045080 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:20:15.045164 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:20:15.045245 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:20:15.045257 kernel: vgaarb: loaded
Feb 9 19:20:15.045266 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:20:15.045274 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:20:15.045282 kernel: PTP clock support registered
Feb 9 19:20:15.045293 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:20:15.045301 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:20:15.045309 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 9 19:20:15.045317 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 9 19:20:15.045325 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:20:15.045333 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:20:15.045341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:20:15.045348 kernel: pnp: PnP ACPI init
Feb 9 19:20:15.045447 kernel: pnp 00:03: [dma 2]
Feb 9 19:20:15.045464 kernel: pnp: PnP ACPI: found 5 devices
Feb 9 19:20:15.045472 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:20:15.045480 kernel: NET: Registered PF_INET protocol family
Feb 9 19:20:15.045488 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:20:15.045497 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 9 19:20:15.045505 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:20:15.045513 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:20:15.045521 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 9 19:20:15.045531 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 9 19:20:15.045553 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:20:15.045562 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 9 19:20:15.045570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:20:15.045578 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:20:15.045665 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:20:15.045741 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:20:15.045815 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:20:15.045886 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 9 19:20:15.045976 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 9 19:20:15.046067 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:20:15.046158 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:20:15.046246 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:20:15.046258 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:20:15.046266 kernel: Initialise system trusted keyrings
Feb 9 19:20:15.046274 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 9 19:20:15.046285 kernel: Key type asymmetric registered
Feb 9 19:20:15.046293 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:20:15.046301 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:20:15.046309 kernel: io scheduler mq-deadline registered
Feb 9 19:20:15.046317 kernel: io scheduler kyber registered
Feb 9 19:20:15.046325 kernel: io scheduler bfq registered
Feb 9 19:20:15.046333 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:20:15.046342 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 9 19:20:15.046350 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:20:15.046358 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 9 19:20:15.046368 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:20:15.046376 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:20:15.046384 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:20:15.046392 kernel: random: crng init done
Feb 9 19:20:15.046400 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:20:15.046408 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:20:15.046416 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:20:15.046509 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 9 19:20:15.046525 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:20:15.046637 kernel: rtc_cmos 00:04: registered as rtc0
Feb 9 19:20:15.046715 kernel: rtc_cmos 00:04: setting system clock to 2024-02-09T19:20:14 UTC (1707506414)
Feb 9 19:20:15.046790 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 9 19:20:15.046801 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:20:15.046810 kernel: Segment Routing with IPv6
Feb 9 19:20:15.046818 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:20:15.046826 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:20:15.046834 kernel: Key type dns_resolver registered
Feb 9 19:20:15.046847 kernel: IPI shorthand broadcast: enabled
Feb 9 19:20:15.046855 kernel: sched_clock: Marking stable (699232945, 118802075)->(842015250, -23980230)
Feb 9 19:20:15.046863 kernel: registered taskstats version 1
Feb 9 19:20:15.046871 kernel: Loading compiled-in X.509 certificates
Feb 9 19:20:15.046879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:20:15.046888 kernel: Key type .fscrypt registered
Feb 9 19:20:15.046895 kernel: Key type fscrypt-provisioning registered
Feb 9 19:20:15.046904 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:20:15.046915 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:20:15.046923 kernel: ima: No architecture policies found
Feb 9 19:20:15.046931 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:20:15.046939 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:20:15.046948 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:20:15.046956 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:20:15.046964 kernel: Run /init as init process
Feb 9 19:20:15.046972 kernel: with arguments:
Feb 9 19:20:15.046980 kernel: /init
Feb 9 19:20:15.046991 kernel: with environment:
Feb 9 19:20:15.046998 kernel: HOME=/
Feb 9 19:20:15.047006 kernel: TERM=linux
Feb 9 19:20:15.047014 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:20:15.047025 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:20:15.047035 systemd[1]: Detected virtualization kvm.
Feb 9 19:20:15.047044 systemd[1]: Detected architecture x86-64.
Feb 9 19:20:15.047053 systemd[1]: Running in initrd.
Feb 9 19:20:15.047064 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:20:15.047073 systemd[1]: Hostname set to .
Feb 9 19:20:15.047082 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:20:15.047090 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:20:15.047099 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:20:15.047107 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:20:15.047116 systemd[1]: Reached target paths.target.
Feb 9 19:20:15.047124 systemd[1]: Reached target slices.target.
Feb 9 19:20:15.047135 systemd[1]: Reached target swap.target.
Feb 9 19:20:15.047143 systemd[1]: Reached target timers.target.
Feb 9 19:20:15.047152 systemd[1]: Listening on iscsid.socket.
Feb 9 19:20:15.047161 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:20:15.047169 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:20:15.047178 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:20:15.047186 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:20:15.047196 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:20:15.047205 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:20:15.047213 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:20:15.047222 systemd[1]: Reached target sockets.target.
Feb 9 19:20:15.047231 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:20:15.047249 systemd[1]: Finished network-cleanup.service.
Feb 9 19:20:15.047261 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:20:15.047272 systemd[1]: Starting systemd-journald.service...
Feb 9 19:20:15.047280 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:20:15.047289 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:20:15.047298 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:20:15.047307 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:20:15.047315 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:20:15.047328 systemd-journald[185]: Journal started
Feb 9 19:20:15.047389 systemd-journald[185]: Runtime Journal (/run/log/journal/2209532305a44e44980dc1e945505345) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:20:15.008989 systemd-modules-load[186]: Inserted module 'overlay'
Feb 9 19:20:15.071014 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:20:15.071039 kernel: Bridge firewalling registered
Feb 9 19:20:15.071058 systemd[1]: Started systemd-journald.service.
Feb 9 19:20:15.071072 kernel: audit: type=1130 audit(1707506415.065:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.055755 systemd-resolved[187]: Positive Trust Anchors:
Feb 9 19:20:15.075218 kernel: audit: type=1130 audit(1707506415.070:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.055765 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:20:15.055800 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:20:15.086398 kernel: audit: type=1130 audit(1707506415.076:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.086429 kernel: SCSI subsystem initialized
Feb 9 19:20:15.086441 kernel: audit: type=1130 audit(1707506415.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.057760 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 9 19:20:15.058700 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 9 19:20:15.071793 systemd[1]: Started systemd-resolved.service.
Feb 9 19:20:15.076855 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:20:15.102129 kernel: audit: type=1130 audit(1707506415.097:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.102155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:20:15.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:15.080288 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:20:15.087719 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:20:15.089736 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:20:15.097476 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:20:15.108665 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:20:15.108691 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:20:15.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.110317 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:20:15.115731 kernel: audit: type=1130 audit(1707506415.110:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.111628 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:20:15.120712 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 9 19:20:15.121826 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:20:15.126681 kernel: audit: type=1130 audit(1707506415.121:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.123052 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:20:15.133738 dracut-cmdline[202]: dracut-dracut-053 Feb 9 19:20:15.135898 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:20:15.137050 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:20:15.142938 kernel: audit: type=1130 audit(1707506415.136:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.212633 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:20:15.225599 kernel: iscsi: registered transport (tcp) Feb 9 19:20:15.249685 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:20:15.249770 kernel: QLogic iSCSI HBA Driver Feb 9 19:20:15.303292 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:20:15.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.306681 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:20:15.310584 kernel: audit: type=1130 audit(1707506415.303:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:20:15.390606 kernel: raid6: sse2x4 gen() 10564 MB/s Feb 9 19:20:15.407636 kernel: raid6: sse2x4 xor() 7151 MB/s Feb 9 19:20:15.424623 kernel: raid6: sse2x2 gen() 13918 MB/s Feb 9 19:20:15.441676 kernel: raid6: sse2x2 xor() 8544 MB/s Feb 9 19:20:15.458617 kernel: raid6: sse2x1 gen() 11323 MB/s Feb 9 19:20:15.476450 kernel: raid6: sse2x1 xor() 6693 MB/s Feb 9 19:20:15.476576 kernel: raid6: using algorithm sse2x2 gen() 13918 MB/s Feb 9 19:20:15.476607 kernel: raid6: .... xor() 8544 MB/s, rmw enabled Feb 9 19:20:15.477299 kernel: raid6: using ssse3x2 recovery algorithm Feb 9 19:20:15.493288 kernel: xor: measuring software checksum speed Feb 9 19:20:15.493352 kernel: prefetch64-sse : 17234 MB/sec Feb 9 19:20:15.495741 kernel: generic_sse : 16751 MB/sec Feb 9 19:20:15.495791 kernel: xor: using function: prefetch64-sse (17234 MB/sec) Feb 9 19:20:15.611005 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:20:15.627282 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:20:15.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.629000 audit: BPF prog-id=7 op=LOAD Feb 9 19:20:15.629000 audit: BPF prog-id=8 op=LOAD Feb 9 19:20:15.631758 systemd[1]: Starting systemd-udevd.service... Feb 9 19:20:15.645784 systemd-udevd[385]: Using default interface naming scheme 'v252'. Feb 9 19:20:15.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.657718 systemd[1]: Started systemd-udevd.service. Feb 9 19:20:15.662039 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:20:15.677657 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Feb 9 19:20:15.725485 systemd[1]: Finished dracut-pre-trigger.service. 
Feb 9 19:20:15.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.728416 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:20:15.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:15.783860 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:20:15.841569 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Feb 9 19:20:15.850898 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:20:15.850940 kernel: GPT:17805311 != 41943039 Feb 9 19:20:15.850952 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:20:15.850962 kernel: GPT:17805311 != 41943039 Feb 9 19:20:15.850972 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:20:15.850982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:20:15.873596 kernel: libata version 3.00 loaded. Feb 9 19:20:15.878571 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:20:15.882174 kernel: scsi host0: ata_piix Feb 9 19:20:15.882398 kernel: scsi host1: ata_piix Feb 9 19:20:15.882509 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Feb 9 19:20:15.884883 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Feb 9 19:20:15.905181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:20:15.942009 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (441) Feb 9 19:20:15.944585 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:20:15.945137 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Feb 9 19:20:15.955740 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:20:15.959957 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:20:15.961524 systemd[1]: Starting disk-uuid.service... Feb 9 19:20:15.975685 disk-uuid[461]: Primary Header is updated. Feb 9 19:20:15.975685 disk-uuid[461]: Secondary Entries is updated. Feb 9 19:20:15.975685 disk-uuid[461]: Secondary Header is updated. Feb 9 19:20:15.983569 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:20:17.001591 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:20:17.002245 disk-uuid[463]: The operation has completed successfully. Feb 9 19:20:17.088434 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:20:17.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.088687 systemd[1]: Finished disk-uuid.service. Feb 9 19:20:17.094217 systemd[1]: Starting verity-setup.service... Feb 9 19:20:17.127633 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Feb 9 19:20:17.213015 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:20:17.214225 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:20:17.221403 systemd[1]: Finished verity-setup.service. Feb 9 19:20:17.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.375610 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:20:17.376163 systemd[1]: Mounted sysusr-usr.mount. 
Feb 9 19:20:17.376766 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:20:17.377475 systemd[1]: Starting ignition-setup.service... Feb 9 19:20:17.380621 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:20:17.396416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:20:17.396465 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:20:17.396477 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:20:17.423916 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:20:17.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.440174 systemd[1]: Finished ignition-setup.service. Feb 9 19:20:17.441456 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:20:17.507569 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:20:17.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.510000 audit: BPF prog-id=9 op=LOAD Feb 9 19:20:17.513307 systemd[1]: Starting systemd-networkd.service... Feb 9 19:20:17.553788 systemd-networkd[633]: lo: Link UP Feb 9 19:20:17.553801 systemd-networkd[633]: lo: Gained carrier Feb 9 19:20:17.554504 systemd-networkd[633]: Enumeration completed Feb 9 19:20:17.554928 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:20:17.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.556583 systemd[1]: Started systemd-networkd.service. 
Feb 9 19:20:17.556710 systemd-networkd[633]: eth0: Link UP Feb 9 19:20:17.556715 systemd-networkd[633]: eth0: Gained carrier Feb 9 19:20:17.557142 systemd[1]: Reached target network.target. Feb 9 19:20:17.558429 systemd[1]: Starting iscsiuio.service... Feb 9 19:20:17.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.565930 systemd[1]: Started iscsiuio.service. Feb 9 19:20:17.567669 systemd[1]: Starting iscsid.service... Feb 9 19:20:17.570943 iscsid[642]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:20:17.570943 iscsid[642]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:20:17.570943 iscsid[642]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:20:17.570943 iscsid[642]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:20:17.570943 iscsid[642]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:20:17.570943 iscsid[642]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:20:17.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.572383 systemd[1]: Started iscsid.service.
Feb 9 19:20:17.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.573637 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.140/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 9 19:20:17.577691 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:20:17.585050 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:20:17.585558 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:20:17.586380 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:20:17.588063 systemd[1]: Reached target remote-fs.target. Feb 9 19:20:17.589830 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:20:17.599690 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:20:17.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.708015 ignition[565]: Ignition 2.14.0 Feb 9 19:20:17.708026 ignition[565]: Stage: fetch-offline Feb 9 19:20:17.708094 ignition[565]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:17.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:17.710698 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:20:17.708117 ignition[565]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:17.712305 systemd[1]: Starting ignition-fetch.service... 
Feb 9 19:20:17.709073 ignition[565]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:17.712636 systemd-resolved[187]: Detected conflict on linux IN A 172.24.4.140 Feb 9 19:20:17.709167 ignition[565]: parsed url from cmdline: "" Feb 9 19:20:17.712659 systemd-resolved[187]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Feb 9 19:20:17.709171 ignition[565]: no config URL provided Feb 9 19:20:17.709176 ignition[565]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:20:17.709184 ignition[565]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:20:17.709192 ignition[565]: failed to fetch config: resource requires networking Feb 9 19:20:17.709288 ignition[565]: Ignition finished successfully Feb 9 19:20:17.729340 ignition[656]: Ignition 2.14.0 Feb 9 19:20:17.729367 ignition[656]: Stage: fetch Feb 9 19:20:17.729644 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:17.729686 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:17.731924 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:17.732164 ignition[656]: parsed url from cmdline: "" Feb 9 19:20:17.732174 ignition[656]: no config URL provided Feb 9 19:20:17.732188 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:20:17.732208 ignition[656]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:20:17.740297 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 9 19:20:17.740330 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Feb 9 19:20:17.740337 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 9 19:20:17.949847 ignition[656]: GET result: OK Feb 9 19:20:17.950102 ignition[656]: parsing config with SHA512: 3e4352bb04f13a0ecb9172b4aca010b1a28bd553890ceadb2afa275ee103757e42117215a52b76f1189c56622fc2bac748c5c2b5eeb784b00c7f7a83af943102 Feb 9 19:20:18.060240 unknown[656]: fetched base config from "system" Feb 9 19:20:18.060271 unknown[656]: fetched base config from "system" Feb 9 19:20:18.060287 unknown[656]: fetched user config from "openstack" Feb 9 19:20:18.061897 ignition[656]: fetch: fetch complete Feb 9 19:20:18.061911 ignition[656]: fetch: fetch passed Feb 9 19:20:18.062025 ignition[656]: Ignition finished successfully Feb 9 19:20:18.067449 systemd[1]: Finished ignition-fetch.service. Feb 9 19:20:18.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.071017 systemd[1]: Starting ignition-kargs.service... Feb 9 19:20:18.093404 ignition[662]: Ignition 2.14.0 Feb 9 19:20:18.093433 ignition[662]: Stage: kargs Feb 9 19:20:18.093758 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:18.093800 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:18.096008 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:18.099392 ignition[662]: kargs: kargs passed Feb 9 19:20:18.109771 ignition[662]: Ignition finished successfully Feb 9 19:20:18.111524 systemd[1]: Finished ignition-kargs.service. Feb 9 19:20:18.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:20:18.114337 systemd[1]: Starting ignition-disks.service... Feb 9 19:20:18.128877 ignition[668]: Ignition 2.14.0 Feb 9 19:20:18.128898 ignition[668]: Stage: disks Feb 9 19:20:18.129082 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:18.129121 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:18.130836 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:18.132346 ignition[668]: disks: disks passed Feb 9 19:20:18.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.134037 systemd[1]: Finished ignition-disks.service. Feb 9 19:20:18.133119 ignition[668]: Ignition finished successfully Feb 9 19:20:18.134612 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:20:18.135127 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:20:18.136060 systemd[1]: Reached target local-fs.target. Feb 9 19:20:18.136894 systemd[1]: Reached target sysinit.target. Feb 9 19:20:18.137759 systemd[1]: Reached target basic.target. Feb 9 19:20:18.139442 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:20:18.157977 systemd-fsck[676]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks Feb 9 19:20:18.166902 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:20:18.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.168353 systemd[1]: Mounting sysroot.mount... Feb 9 19:20:18.186585 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 9 19:20:18.187132 systemd[1]: Mounted sysroot.mount. Feb 9 19:20:18.187725 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:20:18.190619 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:20:18.191409 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:20:18.192137 systemd[1]: Starting flatcar-openstack-hostname.service... Feb 9 19:20:18.194312 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:20:18.194341 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:20:18.202239 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:20:18.204073 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:20:18.216284 initrd-setup-root[687]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:20:18.228450 initrd-setup-root[695]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:20:18.238680 initrd-setup-root[703]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:20:18.241328 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:20:18.247609 initrd-setup-root[712]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:20:18.252575 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (706) Feb 9 19:20:18.259217 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:20:18.259248 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:20:18.259261 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:20:18.277007 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:20:18.342343 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:20:18.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:20:18.345324 systemd[1]: Starting ignition-mount.service... Feb 9 19:20:18.351647 systemd[1]: Starting sysroot-boot.service... Feb 9 19:20:18.381329 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:20:18.382246 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:20:18.404854 ignition[751]: INFO : Ignition 2.14.0 Feb 9 19:20:18.405690 ignition[751]: INFO : Stage: mount Feb 9 19:20:18.406337 ignition[751]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:18.407100 ignition[751]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:18.409071 ignition[751]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:18.413131 ignition[751]: INFO : mount: mount passed Feb 9 19:20:18.415893 ignition[751]: INFO : Ignition finished successfully Feb 9 19:20:18.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.417260 systemd[1]: Finished ignition-mount.service. Feb 9 19:20:18.422909 systemd[1]: Finished sysroot-boot.service. Feb 9 19:20:18.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:20:18.424004 coreos-metadata[682]: Feb 09 19:20:18.423 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 9 19:20:18.445060 coreos-metadata[682]: Feb 09 19:20:18.444 INFO Fetch successful Feb 9 19:20:18.445749 coreos-metadata[682]: Feb 09 19:20:18.445 INFO wrote hostname ci-3510-3-2-c-a855e53d7e.novalocal to /sysroot/etc/hostname Feb 9 19:20:18.450213 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 9 19:20:18.450335 systemd[1]: Finished flatcar-openstack-hostname.service. Feb 9 19:20:18.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:18.452427 systemd[1]: Starting ignition-files.service... Feb 9 19:20:18.459908 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:20:18.469600 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760) Feb 9 19:20:18.476062 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:20:18.476085 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:20:18.476096 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:20:18.486660 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:20:18.506104 ignition[779]: INFO : Ignition 2.14.0 Feb 9 19:20:18.506104 ignition[779]: INFO : Stage: files Feb 9 19:20:18.507249 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:20:18.507249 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 9 19:20:18.508936 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 9 19:20:18.513578 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:20:18.515503 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:20:18.515503 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:20:18.523018 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:20:18.523880 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:20:18.525968 unknown[779]: wrote ssh authorized keys file for user: core Feb 9 19:20:18.527778 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:20:18.527778 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:20:18.527778 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:20:18.575244 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:20:18.877257 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:20:18.878273 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] 
writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:20:18.878273 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:20:18.938961 systemd-networkd[633]: eth0: Gained IPv6LL Feb 9 19:20:19.422284 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:20:19.890037 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:20:19.890037 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:20:19.896815 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:20:19.896815 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:20:20.382840 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:20:21.087663 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:20:21.100247 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:20:21.100247 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:20:21.100247 ignition[779]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:20:21.100247 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:20:21.100247 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:20:21.238199 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:20:22.113624 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:20:22.113624 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:20:22.113624 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:20:22.121494 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:20:22.224407 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:20:24.407953 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:20:24.411959 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:20:24.411959 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:20:24.411959 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET 
https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:20:24.522362 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 19:20:25.444664 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:20:25.446351 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:20:25.447181 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:20:25.448061 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 9 19:20:25.959594 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 9 19:20:26.429643 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 9 19:20:26.430651 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:20:26.431709 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:20:26.432559 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:20:26.433454 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:20:26.434288 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:20:26.435131 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:20:26.435131 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:20:26.436804 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:20:26.436804 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:20:26.436804 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:20:26.436804 ignition[779]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(12): [started] processing unit "prepare-helm.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(12): [finished] processing unit "prepare-helm.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(14): [started] processing unit "coreos-metadata.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(14): op(15): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(14): [finished] processing unit "coreos-metadata.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:20:26.439989 ignition[779]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:20:26.456024 ignition[779]: INFO : files: files passed
Feb 9 19:20:26.456024 ignition[779]: INFO : Ignition finished successfully
Feb 9 19:20:26.494793 kernel: kauditd_printk_skb: 27 callbacks suppressed
Feb 9 19:20:26.494848 kernel: audit: type=1130 audit(1707506426.458:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.494881 kernel: audit: type=1130 audit(1707506426.477:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.494922 kernel: audit: type=1131 audit(1707506426.477:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.455517 systemd[1]: Finished ignition-files.service.
Feb 9 19:20:26.502700 kernel: audit: type=1130 audit(1707506426.494:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.464039 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:20:26.465803 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:20:26.504335 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:20:26.467407 systemd[1]: Starting ignition-quench.service...
Feb 9 19:20:26.476465 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:20:26.476707 systemd[1]: Finished ignition-quench.service.
Feb 9 19:20:26.493892 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:20:26.495224 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:20:26.500010 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:20:26.529226 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:20:26.551134 kernel: audit: type=1130 audit(1707506426.533:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.551189 kernel: audit: type=1131 audit(1707506426.533:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.529331 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:20:26.533679 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:20:26.551441 systemd[1]: Reached target initrd.target.
Feb 9 19:20:26.553039 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:20:26.553801 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:20:26.568014 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:20:26.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.572561 kernel: audit: type=1130 audit(1707506426.568:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.573217 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:20:26.583332 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:20:26.584360 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:20:26.585424 systemd[1]: Stopped target timers.target.
Feb 9 19:20:26.586399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:20:26.587072 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:20:26.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.588232 systemd[1]: Stopped target initrd.target.
Feb 9 19:20:26.595907 kernel: audit: type=1131 audit(1707506426.587:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.597242 systemd[1]: Stopped target basic.target.
Feb 9 19:20:26.598967 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:20:26.600791 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:20:26.602600 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:20:26.604390 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:20:26.606160 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:20:26.607974 systemd[1]: Stopped target sysinit.target.
Feb 9 19:20:26.609706 systemd[1]: Stopped target local-fs.target.
Feb 9 19:20:26.611406 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:20:26.613172 systemd[1]: Stopped target swap.target.
Feb 9 19:20:26.614812 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:20:26.615110 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:20:26.620052 kernel: audit: type=1131 audit(1707506426.616:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.616942 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:20:26.621262 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:20:26.626415 kernel: audit: type=1131 audit(1707506426.622:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.621517 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:20:26.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.623116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:20:26.623393 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:20:26.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.627891 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:20:26.628159 systemd[1]: Stopped ignition-files.service.
Feb 9 19:20:26.631837 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:20:26.635696 iscsid[642]: iscsid shutting down.
Feb 9 19:20:26.638937 systemd[1]: Stopping iscsid.service...
Feb 9 19:20:26.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.639986 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:20:26.640284 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:20:26.643200 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:20:26.646148 ignition[817]: INFO : Ignition 2.14.0
Feb 9 19:20:26.646148 ignition[817]: INFO : Stage: umount
Feb 9 19:20:26.646148 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:20:26.646148 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 9 19:20:26.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.653776 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:20:26.660862 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 9 19:20:26.660862 ignition[817]: INFO : umount: umount passed
Feb 9 19:20:26.660862 ignition[817]: INFO : Ignition finished successfully
Feb 9 19:20:26.654204 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:20:26.655402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:20:26.655631 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:20:26.659054 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:20:26.659149 systemd[1]: Stopped iscsid.service.
Feb 9 19:20:26.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.666747 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:20:26.667358 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:20:26.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.668940 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:20:26.669045 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:20:26.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.670577 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:20:26.670616 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:20:26.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.672228 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:20:26.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.672266 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:20:26.674032 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:20:26.674075 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:20:26.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.675674 systemd[1]: Stopped target paths.target.
Feb 9 19:20:26.676672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:20:26.682593 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:20:26.683085 systemd[1]: Stopped target slices.target.
Feb 9 19:20:26.683478 systemd[1]: Stopped target sockets.target.
Feb 9 19:20:26.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.683979 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:20:26.684017 systemd[1]: Closed iscsid.socket.
Feb 9 19:20:26.685450 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:20:26.685491 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:20:26.687039 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:20:26.694174 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:20:26.695187 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:20:26.695374 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:20:26.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.697103 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:20:26.697259 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:20:26.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.699264 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:20:26.699431 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:20:26.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.702641 systemd[1]: Stopped target network.target.
Feb 9 19:20:26.704081 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:20:26.704160 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:20:26.705593 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:20:26.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.705676 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:20:26.707652 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:20:26.708705 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:20:26.710598 systemd-networkd[633]: eth0: DHCPv6 lease lost
Feb 9 19:20:26.711629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:20:26.711750 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:20:26.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.713991 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:20:26.714028 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:20:26.715000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:20:26.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.716336 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:20:26.716803 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:20:26.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.726000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:20:26.716862 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:20:26.717373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:20:26.717442 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:20:26.718060 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:20:26.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.718099 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:20:26.718668 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:20:26.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.720165 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:20:26.720582 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:20:26.720674 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:20:26.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.728256 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:20:26.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.728428 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:20:26.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:26.730722 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:20:26.730823 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:20:26.731885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:20:26.731917 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:20:26.732661 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:20:26.732689 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:20:26.733699 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:20:26.733742 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:20:26.734646 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:20:26.734691 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:20:26.735533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:20:26.735604 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:20:26.737218 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:20:26.744528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:20:26.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:26.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:26.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:20:26.744613 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:20:26.745798 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:20:26.745887 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:20:26.746510 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:20:26.748197 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:20:26.767942 systemd[1]: Switching root. Feb 9 19:20:26.788592 systemd-journald[185]: Journal stopped Feb 9 19:20:31.092042 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Feb 9 19:20:31.092102 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:20:31.092117 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:20:31.092129 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:20:31.092140 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:20:31.092154 kernel: SELinux: policy capability open_perms=1 Feb 9 19:20:31.092171 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:20:31.092181 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:20:31.092191 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:20:31.092201 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:20:31.092212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:20:31.092223 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:20:31.092235 systemd[1]: Successfully loaded SELinux policy in 92.024ms. Feb 9 19:20:31.092253 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.067ms. Feb 9 19:20:31.092269 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:20:31.092281 systemd[1]: Detected virtualization kvm. Feb 9 19:20:31.092292 systemd[1]: Detected architecture x86-64. Feb 9 19:20:31.092303 systemd[1]: Detected first boot. Feb 9 19:20:31.092315 systemd[1]: Hostname set to . Feb 9 19:20:31.092329 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:20:31.092344 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:20:31.092355 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:20:31.092369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 19:20:31.092381 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:20:31.092394 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:20:31.092406 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:20:31.092417 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:20:31.092428 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:20:31.092442 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:20:31.092453 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:20:31.092465 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:20:31.092476 systemd[1]: Created slice system-getty.slice. Feb 9 19:20:31.092487 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:20:31.092502 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:20:31.092526 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:20:31.095598 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:20:31.095645 systemd[1]: Created slice user.slice. Feb 9 19:20:31.095666 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:20:31.095679 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:20:31.095691 systemd[1]: Set up automount boot.automount. Feb 9 19:20:31.095705 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:20:31.095718 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:20:31.095730 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:20:31.095745 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:20:31.095758 systemd[1]: Reached target integritysetup.target. 
Feb 9 19:20:31.095770 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:20:31.095783 systemd[1]: Reached target remote-fs.target. Feb 9 19:20:31.095795 systemd[1]: Reached target slices.target. Feb 9 19:20:31.095808 systemd[1]: Reached target swap.target. Feb 9 19:20:31.095820 systemd[1]: Reached target torcx.target. Feb 9 19:20:31.095832 systemd[1]: Reached target veritysetup.target. Feb 9 19:20:31.095845 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:20:31.095857 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:20:31.095870 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:20:31.095882 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:20:31.095894 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:20:31.095906 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:20:31.095918 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:20:31.095930 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:20:31.095942 systemd[1]: Mounting media.mount... Feb 9 19:20:31.095955 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:20:31.095967 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:20:31.095981 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:20:31.095994 systemd[1]: Mounting tmp.mount... Feb 9 19:20:31.096006 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:20:31.096018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:20:31.096030 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:20:31.096043 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:20:31.096054 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:20:31.096067 systemd[1]: Starting modprobe@drm.service... Feb 9 19:20:31.096079 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:20:31.096092 systemd[1]: Starting modprobe@fuse.service... 
Feb 9 19:20:31.096105 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:20:31.096117 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:20:31.096130 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:20:31.096142 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:20:31.096154 kernel: fuse: init (API version 7.34)
Feb 9 19:20:31.096166 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:20:31.096178 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:20:31.096190 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:20:31.096204 systemd[1]: Starting systemd-journald.service...
Feb 9 19:20:31.096216 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:20:31.096228 kernel: loop: module loaded
Feb 9 19:20:31.096240 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:20:31.096252 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:20:31.096264 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:20:31.096276 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:20:31.096288 systemd[1]: Stopped verity-setup.service.
Feb 9 19:20:31.096300 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:20:31.096314 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:20:31.096326 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:20:31.096338 systemd[1]: Mounted media.mount.
Feb 9 19:20:31.096350 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:20:31.096362 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:20:31.096375 systemd[1]: Mounted tmp.mount.
Feb 9 19:20:31.096387 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:20:31.096399 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:20:31.096411 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:20:31.096426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:20:31.096438 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:20:31.096451 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:20:31.096482 systemd-journald[936]: Journal started
Feb 9 19:20:31.096533 systemd-journald[936]: Runtime Journal (/run/log/journal/2209532305a44e44980dc1e945505345) is 4.9M, max 39.5M, 34.5M free.
Feb 9 19:20:27.069000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:20:27.151000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:20:27.151000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:20:27.151000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:20:27.151000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:20:27.151000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:20:27.151000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:20:27.310000 audit[850]: AVC avc: denied { associate } for pid=850 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:20:27.310000 audit[850]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=833 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:20:27.310000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:20:27.313000 audit[850]: AVC avc: denied { associate } for pid=850 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:20:27.313000 audit[850]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=833 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:20:27.313000 audit: CWD cwd="/"
Feb 9 19:20:27.313000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:27.313000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:27.313000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:20:30.871000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:20:30.871000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:20:30.871000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:20:30.871000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:20:30.871000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:20:30.871000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:20:30.872000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:20:30.873000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:20:30.873000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 19:20:30.874000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:20:30.874000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:20:30.874000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 19:20:30.874000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 19:20:30.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.099626 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:20:31.099648 systemd[1]: Started systemd-journald.service.
Feb 9 19:20:30.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:30.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:30.883000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 19:20:31.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.041000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:20:31.041000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:20:31.041000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:20:31.041000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 19:20:31.043000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 19:20:31.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.090000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:20:31.090000 audit[936]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe3d6b7840 a2=4000 a3=7ffe3d6b78dc items=0 ppid=1 pid=936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:20:31.090000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:20:31.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:27.308097 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:20:30.868565 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:20:27.309160 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:20:31.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:30.868580 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Feb 9 19:20:27.309182 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:20:30.875200 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:20:27.309214 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:20:31.101301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:20:27.309227 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:20:27.309258 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:20:27.309272 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:20:27.309472 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:20:27.309512 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:20:27.309526 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:20:27.310509 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:20:27.310567 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:20:27.310588 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:20:27.310605 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:20:27.310626 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:20:27.310642 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:20:30.442096 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:20:30.442398 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:20:30.442523 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:20:30.442788 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:20:30.442856 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:20:30.442930 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-09T19:20:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:20:31.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.105326 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:20:31.106456 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:20:31.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.107123 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:20:31.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.108122 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:20:31.108864 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:20:31.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.109137 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:20:31.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.109902 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:20:31.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.110680 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:20:31.111431 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:20:31.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.112409 systemd[1]: Reached target network-pre.target.
Feb 9 19:20:31.114163 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:20:31.115776 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:20:31.119062 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:20:31.121435 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:20:31.125742 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:20:31.126591 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:20:31.127857 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:20:31.128404 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:20:31.130453 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:20:31.132931 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:20:31.137013 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:20:31.138991 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:20:31.143042 systemd-journald[936]: Time spent on flushing to /var/log/journal/2209532305a44e44980dc1e945505345 is 34.765ms for 1144 entries.
Feb 9 19:20:31.143042 systemd-journald[936]: System Journal (/var/log/journal/2209532305a44e44980dc1e945505345) is 8.0M, max 584.8M, 576.8M free.
Feb 9 19:20:31.195416 systemd-journald[936]: Received client request to flush runtime journal.
Feb 9 19:20:31.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.158376 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:20:31.159140 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:20:31.172930 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:20:31.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.196933 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:20:31.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.198936 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:20:31.200105 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:20:31.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.201899 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:20:31.215882 udevadm[961]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:20:31.934674 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:20:31.947627 kernel: kauditd_printk_skb: 106 callbacks suppressed
Feb 9 19:20:31.947817 kernel: audit: type=1130 audit(1707506431.935:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:31.951126 kernel: audit: type=1334 audit(1707506431.943:146): prog-id=24 op=LOAD
Feb 9 19:20:31.943000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:20:31.949265 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:20:31.947000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:20:31.947000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:20:31.947000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:20:31.958677 kernel: audit: type=1334 audit(1707506431.947:147): prog-id=25 op=LOAD
Feb 9 19:20:31.958783 kernel: audit: type=1334 audit(1707506431.947:148): prog-id=7 op=UNLOAD
Feb 9 19:20:31.958825 kernel: audit: type=1334 audit(1707506431.947:149): prog-id=8 op=UNLOAD
Feb 9 19:20:31.994416 systemd-udevd[962]: Using default interface naming scheme 'v252'.
Feb 9 19:20:32.028097 systemd[1]: Started systemd-udevd.service.
Feb 9 19:20:32.039530 kernel: audit: type=1130 audit(1707506432.033:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.039716 kernel: audit: type=1334 audit(1707506432.038:151): prog-id=26 op=LOAD
Feb 9 19:20:32.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.038000 audit: BPF prog-id=26 op=LOAD
Feb 9 19:20:32.041447 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:20:32.066671 kernel: audit: type=1334 audit(1707506432.058:152): prog-id=27 op=LOAD
Feb 9 19:20:32.066844 kernel: audit: type=1334 audit(1707506432.060:153): prog-id=28 op=LOAD
Feb 9 19:20:32.066892 kernel: audit: type=1334 audit(1707506432.061:154): prog-id=29 op=LOAD
Feb 9 19:20:32.058000 audit: BPF prog-id=27 op=LOAD
Feb 9 19:20:32.060000 audit: BPF prog-id=28 op=LOAD
Feb 9 19:20:32.061000 audit: BPF prog-id=29 op=LOAD
Feb 9 19:20:32.063724 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:20:32.092971 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:20:32.144612 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:20:32.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.173060 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:20:32.191566 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 9 19:20:32.186000 audit[973]: AVC avc: denied { confidentiality } for pid=973 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:20:32.186000 audit[973]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=561ee497c5f0 a1=32194 a2=7f4c0edcebc5 a3=5 items=108 ppid=962 pid=973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:20:32.186000 audit: CWD cwd="/"
Feb 9 19:20:32.186000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=1 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=2 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=3 name=(null) inode=14252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=4 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=5 name=(null) inode=14253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=6 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=7 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=8 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=9 name=(null) inode=14255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=10 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=11 name=(null) inode=14256 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=12 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=13 name=(null) inode=14257 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=14 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=15 name=(null) inode=14258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=16 name=(null) inode=14254 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=17 name=(null) inode=14259 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=18 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=19 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=20 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=21 name=(null) inode=14261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=22 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=23 name=(null) inode=14262 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=24 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=25 name=(null) inode=14263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=26 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=27 name=(null) inode=14264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=28 name=(null) inode=14260 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=29 name=(null) inode=14265 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=30 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=31 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=32 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=33 name=(null) inode=14267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=34 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=35 name=(null) inode=14268 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=36 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=37 name=(null) inode=14269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=38 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=39 name=(null) inode=14270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=40 name=(null) inode=14266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=41 name=(null) inode=14271 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=42 name=(null) inode=14251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=43 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=44 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=45 name=(null) inode=14273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=46 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=47 name=(null) inode=14274 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=48 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=49 name=(null) inode=14275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=50 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=51 name=(null) inode=14276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=52 name=(null) inode=14272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=53 name=(null) inode=14277 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=55 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=56 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=57 name=(null) inode=14279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=58 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=59 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=60 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=61 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=62 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=63 name=(null) inode=14282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=64 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=65 name=(null) inode=14283 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=66 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=67 name=(null) inode=14284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=68 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=69 name=(null) inode=14285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=70 name=(null) inode=14281 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=71 name=(null) inode=14286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=72 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=73 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=74 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=75 name=(null) inode=14288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=76 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=77 name=(null) inode=14289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=78 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=79 name=(null) inode=14290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=80 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=81 name=(null) inode=14291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=82 name=(null) inode=14287 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=83 name=(null) inode=14292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=84 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=85 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=86 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=87 name=(null) inode=14294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=88 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=89 name=(null) inode=14295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=90 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=91 name=(null) inode=14296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=92 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=93 name=(null) inode=14297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=94 name=(null) inode=14293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=95 name=(null) inode=14298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=96 name=(null) inode=14278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=97 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=98 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=99 name=(null) inode=14300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=100 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=101 name=(null) inode=14301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=102 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=103 name=(null) inode=14302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=104 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=105 name=(null) inode=14303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=106 name=(null) inode=14299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PATH item=107 name=(null) inode=14304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:20:32.186000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:20:32.227624 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 9 19:20:32.233710 systemd-networkd[978]: lo: Link UP
Feb 9 19:20:32.233719 systemd-networkd[978]: lo: Gained carrier
Feb 9 19:20:32.234306 systemd-networkd[978]: Enumeration completed
Feb 9 19:20:32.234429 systemd[1]: Started systemd-networkd.service.
Feb 9 19:20:32.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.234456 systemd-networkd[978]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:20:32.236270 systemd-networkd[978]: eth0: Link UP
Feb 9 19:20:32.236281 systemd-networkd[978]: eth0: Gained carrier
Feb 9 19:20:32.241571 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 9 19:20:32.251569 kernel: ACPI: button: Power Button [PWRF]
Feb 9 19:20:32.251699 systemd-networkd[978]: eth0: DHCPv4 address 172.24.4.140/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb 9 19:20:32.268577 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:20:32.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.310145 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:20:32.312158 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:20:32.336777 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:20:32.364064 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:20:32.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.365639 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:20:32.369911 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:20:32.374242 lvm[992]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:20:32.397873 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:20:32.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.399280 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:20:32.400430 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:20:32.400491 systemd[1]: Reached target local-fs.target.
Feb 9 19:20:32.401620 systemd[1]: Reached target machines.target.
Feb 9 19:20:32.405455 systemd[1]: Starting ldconfig.service...
Feb 9 19:20:32.408056 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:20:32.408156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:20:32.412375 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:20:32.416400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:20:32.425763 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:20:32.426515 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:20:32.426585 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:20:32.427984 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:20:32.428775 systemd[1]: boot.automount: Got automount request for /boot, triggered by 994 (bootctl)
Feb 9 19:20:32.430247 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:20:32.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:32.456657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:20:32.747883 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:20:33.104363 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:20:33.395678 systemd-tmpfiles[997]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:20:33.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.401393 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:20:33.402929 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:20:33.554617 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:20:33.554617 systemd-fsck[1003]: /dev/vda1: 789 files, 115339/258078 clusters
Feb 9 19:20:33.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.557055 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:20:33.562657 systemd[1]: Mounting boot.mount...
Feb 9 19:20:33.596970 systemd[1]: Mounted boot.mount.
Feb 9 19:20:33.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.629790 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:20:33.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.743236 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:20:33.745058 systemd[1]: Starting audit-rules.service...
Feb 9 19:20:33.746489 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:20:33.750000 audit: BPF prog-id=30 op=LOAD
Feb 9 19:20:33.753000 audit: BPF prog-id=31 op=LOAD
Feb 9 19:20:33.749249 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:20:33.751711 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:20:33.755906 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:20:33.760715 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:20:33.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.769086 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:20:33.769821 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:20:33.780000 audit[1011]: SYSTEM_BOOT pid=1011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.782330 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:20:33.786736 systemd-networkd[978]: eth0: Gained IPv6LL
Feb 9 19:20:33.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.818636 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:20:33.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:20:33.875347 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:20:33.876028 systemd[1]: Reached target time-set.target.
Feb 9 19:20:33.888897 systemd-resolved[1009]: Positive Trust Anchors:
Feb 9 19:20:33.889249 systemd-resolved[1009]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:20:33.889356 systemd-resolved[1009]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:20:33.890000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:20:33.890000 audit[1027]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcbb1bfb20 a2=420 a3=0 items=0 ppid=1006 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:20:33.890000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:20:33.891025 augenrules[1027]: No rules
Feb 9 19:20:33.891508 systemd[1]: Finished audit-rules.service.
Feb 9 19:20:33.899875 systemd-resolved[1009]: Using system hostname 'ci-3510-3-2-c-a855e53d7e.novalocal'.
Feb 9 19:20:33.901717 systemd[1]: Started systemd-resolved.service.
Feb 9 19:20:33.902480 systemd[1]: Reached target network.target.
Feb 9 19:20:33.902992 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:20:33.932361 systemd-timesyncd[1010]: Contacted time server 95.81.173.74:123 (0.flatcar.pool.ntp.org).
Feb 9 19:20:33.932480 systemd-timesyncd[1010]: Initial clock synchronization to Fri 2024-02-09 19:20:34.272280 UTC.
Feb 9 19:20:34.062958 ldconfig[993]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:20:34.080548 systemd[1]: Finished ldconfig.service.
Feb 9 19:20:34.085245 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:20:34.102988 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:20:34.104384 systemd[1]: Reached target sysinit.target.
Feb 9 19:20:34.105771 systemd[1]: Started motdgen.path.
Feb 9 19:20:34.107054 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:20:34.108910 systemd[1]: Started logrotate.timer.
Feb 9 19:20:34.110176 systemd[1]: Started mdadm.timer.
Feb 9 19:20:34.111278 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:20:34.112432 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:20:34.112520 systemd[1]: Reached target paths.target.
Feb 9 19:20:34.113718 systemd[1]: Reached target timers.target.
Feb 9 19:20:34.116766 systemd[1]: Listening on dbus.socket.
Feb 9 19:20:34.120620 systemd[1]: Starting docker.socket...
Feb 9 19:20:34.128557 systemd[1]: Listening on sshd.socket.
Feb 9 19:20:34.130259 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:20:34.131389 systemd[1]: Listening on docker.socket.
Feb 9 19:20:34.132858 systemd[1]: Reached target sockets.target.
Feb 9 19:20:34.133978 systemd[1]: Reached target basic.target.
Feb 9 19:20:34.135192 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:20:34.135295 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:20:34.138068 systemd[1]: Starting containerd.service...
Feb 9 19:20:34.141996 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 9 19:20:34.145904 systemd[1]: Starting dbus.service...
Feb 9 19:20:34.154026 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:20:34.170417 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:20:34.171893 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:20:34.175034 systemd[1]: Starting motdgen.service...
Feb 9 19:20:34.180786 jq[1040]: false
Feb 9 19:20:34.181565 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:20:34.187133 systemd[1]: Starting prepare-critools.service...
Feb 9 19:20:34.194910 systemd[1]: Starting prepare-helm.service...
Feb 9 19:20:34.200798 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:20:34.205887 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:20:34.208538 extend-filesystems[1041]: Found vda
Feb 9 19:20:34.211334 extend-filesystems[1041]: Found vda1
Feb 9 19:20:34.211966 extend-filesystems[1041]: Found vda2
Feb 9 19:20:34.212512 extend-filesystems[1041]: Found vda3
Feb 9 19:20:34.212736 systemd[1]: Starting systemd-logind.service...
Feb 9 19:20:34.216827 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:20:34.216893 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:20:34.217416 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:20:34.218170 systemd[1]: Starting update-engine.service...
Feb 9 19:20:34.219725 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:20:34.222114 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:20:34.222288 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:20:34.227127 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:20:34.227815 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:20:34.239402 jq[1053]: true
Feb 9 19:20:34.246573 tar[1055]: ./
Feb 9 19:20:34.246573 tar[1055]: ./macvlan
Feb 9 19:20:34.251448 tar[1056]: crictl
Feb 9 19:20:34.253102 extend-filesystems[1041]: Found usr
Feb 9 19:20:34.253102 extend-filesystems[1041]: Found vda4
Feb 9 19:20:34.253102 extend-filesystems[1041]: Found vda6
Feb 9 19:20:34.253102 extend-filesystems[1041]: Found vda7
Feb 9 19:20:34.253102 extend-filesystems[1041]: Found vda9
Feb 9 19:20:34.253102 extend-filesystems[1041]: Checking size of /dev/vda9
Feb 9 19:20:34.288774 tar[1057]: linux-amd64/helm
Feb 9 19:20:34.289048 jq[1068]: true
Feb 9 19:20:34.257079 dbus-daemon[1037]: [system] SELinux support is enabled
Feb 9 19:20:34.257267 systemd[1]: Started dbus.service.
Feb 9 19:20:34.268248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:20:34.268277 systemd[1]: Reached target system-config.target.
Feb 9 19:20:34.269413 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:20:34.269433 systemd[1]: Reached target user-config.target.
Feb 9 19:20:34.295720 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:20:34.295911 systemd[1]: Finished motdgen.service. Feb 9 19:20:34.323040 extend-filesystems[1041]: Resized partition /dev/vda9 Feb 9 19:20:34.341261 extend-filesystems[1094]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:20:34.399601 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 9 19:20:34.412952 env[1063]: time="2024-02-09T19:20:34.412839957Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:20:34.439167 update_engine[1052]: I0209 19:20:34.437894 1052 main.cc:92] Flatcar Update Engine starting Feb 9 19:20:34.485800 update_engine[1052]: I0209 19:20:34.444846 1052 update_check_scheduler.cc:74] Next update check in 7m48s Feb 9 19:20:34.444785 systemd[1]: Started update-engine.service. Feb 9 19:20:34.447833 systemd[1]: Started locksmithd.service. Feb 9 19:20:34.483067 systemd-logind[1051]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:20:34.483095 systemd-logind[1051]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:20:34.486601 systemd-logind[1051]: New seat seat0. Feb 9 19:20:34.493148 systemd[1]: Started systemd-logind.service. Feb 9 19:20:34.497483 env[1063]: time="2024-02-09T19:20:34.497416965Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:20:34.500414 bash[1095]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:20:34.501073 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 9 19:20:34.506624 kernel: EXT4-fs (vda9): resized filesystem to 4635643
Feb 9 19:20:35.141839 coreos-metadata[1036]: Feb 09 19:20:34.514 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Feb 9 19:20:35.141839 coreos-metadata[1036]: Feb 09 19:20:34.717 INFO Fetch successful
Feb 9 19:20:35.141839 coreos-metadata[1036]: Feb 09 19:20:34.717 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 9 19:20:35.141839 coreos-metadata[1036]: Feb 09 19:20:34.731 INFO Fetch successful
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:34.513290849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:34.515194887Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:34.515230305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:35.141187923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:35.141265598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:35.141322855Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:20:35.146646 env[1063]: time="2024-02-09T19:20:35.141352730Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.151023 env[1063]: time="2024-02-09T19:20:35.147474099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.151023 env[1063]: time="2024-02-09T19:20:35.148289863Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:20:35.151023 env[1063]: time="2024-02-09T19:20:35.148655982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:20:35.151023 env[1063]: time="2024-02-09T19:20:35.148706432Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:20:35.151338 extend-filesystems[1094]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 9 19:20:35.151338 extend-filesystems[1094]: old_desc_blocks = 1, new_desc_blocks = 3
Feb 9 19:20:35.151338 extend-filesystems[1094]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long.
Feb 9 19:20:35.148653 unknown[1036]: wrote ssh authorized keys file for user: core Feb 9 19:20:35.172757 env[1063]: time="2024-02-09T19:20:35.151485375Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:20:35.172757 env[1063]: time="2024-02-09T19:20:35.151559144Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:20:35.172950 extend-filesystems[1041]: Resized filesystem in /dev/vda9 Feb 9 19:20:35.150615 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:20:35.187459 sshd_keygen[1071]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.174963302Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175066011Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175102277Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175208622Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175370301Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175425490Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175463024Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175501046Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175536688Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175626874Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175663327Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175696736Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.175940912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:20:35.191062 env[1063]: time="2024-02-09T19:20:35.176138348Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:20:35.151037 systemd[1]: Finished extend-filesystems.service. Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177193467Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177261094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177299802Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177429091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177616999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177663552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177695319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177727718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177763923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177798401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177830926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.177872616Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.178212756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.178257284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 19:20:35.193812 env[1063]: time="2024-02-09T19:20:35.178290795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:20:35.188649 systemd[1]: Started containerd.service. Feb 9 19:20:35.204414 env[1063]: time="2024-02-09T19:20:35.178325212Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:20:35.204414 env[1063]: time="2024-02-09T19:20:35.178364128Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:20:35.204414 env[1063]: time="2024-02-09T19:20:35.178471003Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:20:35.204414 env[1063]: time="2024-02-09T19:20:35.178527740Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:20:35.204414 env[1063]: time="2024-02-09T19:20:35.178653423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.179162996Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.179322358Z" level=info msg="Connect containerd service" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.179386836Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186390255Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186523192Z" level=info msg="Start subscribing containerd event" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186649125Z" level=info msg="Start recovering state" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186775973Z" level=info msg="Start event monitor" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186817756Z" level=info msg="Start snapshots syncer" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186841438Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.186861452Z" level=info msg="Start streaming server" Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.188257262Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.188355720Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:20:35.204794 env[1063]: time="2024-02-09T19:20:35.188465319Z" level=info msg="containerd successfully booted in 0.800737s" Feb 9 19:20:35.211905 update-ssh-keys[1103]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:20:35.205771 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 19:20:35.227104 tar[1055]: ./static Feb 9 19:20:35.254855 systemd[1]: Finished sshd-keygen.service. Feb 9 19:20:35.257102 systemd[1]: Starting issuegen.service... Feb 9 19:20:35.263227 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:20:35.263387 systemd[1]: Finished issuegen.service. Feb 9 19:20:35.265312 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:20:35.273124 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:20:35.277698 systemd[1]: Created slice system-sshd.slice. Feb 9 19:20:35.279488 systemd[1]: Started getty@tty1.service. Feb 9 19:20:35.281302 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:20:35.283644 systemd[1]: Reached target getty.target. Feb 9 19:20:35.285617 systemd[1]: Started sshd@0-172.24.4.140:22-172.24.4.1:35688.service. Feb 9 19:20:35.299330 tar[1055]: ./vlan Feb 9 19:20:35.372836 tar[1055]: ./portmap Feb 9 19:20:35.447525 tar[1055]: ./host-local Feb 9 19:20:35.517497 tar[1055]: ./vrf Feb 9 19:20:35.593440 tar[1055]: ./bridge Feb 9 19:20:35.671910 tar[1055]: ./tuning Feb 9 19:20:35.713987 tar[1055]: ./firewall Feb 9 19:20:35.772795 tar[1055]: ./host-device Feb 9 19:20:35.772850 systemd[1]: Finished prepare-critools.service. Feb 9 19:20:35.825455 tar[1055]: ./sbr Feb 9 19:20:35.867869 tar[1055]: ./loopback Feb 9 19:20:35.907354 tar[1055]: ./dhcp Feb 9 19:20:35.919361 tar[1057]: linux-amd64/LICENSE Feb 9 19:20:35.919361 tar[1057]: linux-amd64/README.md Feb 9 19:20:35.925189 systemd[1]: Finished prepare-helm.service. 
Feb 9 19:20:36.007064 tar[1055]: ./ptp Feb 9 19:20:36.037201 locksmithd[1100]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:20:36.048837 tar[1055]: ./ipvlan Feb 9 19:20:36.087419 tar[1055]: ./bandwidth Feb 9 19:20:36.184141 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:20:36.186159 systemd[1]: Reached target multi-user.target. Feb 9 19:20:36.190618 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:20:36.202342 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:20:36.202747 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:20:36.207263 systemd[1]: Startup finished in 954ms (kernel) + 12.189s (initrd) + 9.253s (userspace) = 22.397s. Feb 9 19:20:36.595324 sshd[1120]: Accepted publickey for core from 172.24.4.1 port 35688 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:20:36.600271 sshd[1120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:36.626260 systemd[1]: Created slice user-500.slice. Feb 9 19:20:36.628869 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:20:36.636512 systemd-logind[1051]: New session 1 of user core. Feb 9 19:20:36.651495 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:20:36.655356 systemd[1]: Starting user@500.service... Feb 9 19:20:36.663101 (systemd)[1131]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:36.795882 systemd[1131]: Queued start job for default target default.target. Feb 9 19:20:36.797170 systemd[1131]: Reached target paths.target. Feb 9 19:20:36.797281 systemd[1131]: Reached target sockets.target. Feb 9 19:20:36.797394 systemd[1131]: Reached target timers.target. Feb 9 19:20:36.797499 systemd[1131]: Reached target basic.target. Feb 9 19:20:36.797690 systemd[1]: Started user@500.service. Feb 9 19:20:36.798702 systemd[1]: Started session-1.scope. 
Feb 9 19:20:36.799517 systemd[1131]: Reached target default.target. Feb 9 19:20:36.799923 systemd[1131]: Startup finished in 123ms. Feb 9 19:20:37.282438 systemd[1]: Started sshd@1-172.24.4.140:22-172.24.4.1:35704.service. Feb 9 19:20:39.664775 sshd[1140]: Accepted publickey for core from 172.24.4.1 port 35704 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:20:39.667936 sshd[1140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:39.679523 systemd-logind[1051]: New session 2 of user core. Feb 9 19:20:39.681083 systemd[1]: Started session-2.scope. Feb 9 19:20:40.385762 systemd[1]: Started sshd@2-172.24.4.140:22-172.24.4.1:35720.service. Feb 9 19:20:40.387966 sshd[1140]: pam_unix(sshd:session): session closed for user core Feb 9 19:20:40.393913 systemd[1]: sshd@1-172.24.4.140:22-172.24.4.1:35704.service: Deactivated successfully. Feb 9 19:20:40.395527 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:20:40.397939 systemd-logind[1051]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:20:40.403036 systemd-logind[1051]: Removed session 2. Feb 9 19:20:42.209416 sshd[1145]: Accepted publickey for core from 172.24.4.1 port 35720 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:20:42.212979 sshd[1145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:42.225347 systemd[1]: Started session-3.scope. Feb 9 19:20:42.226254 systemd-logind[1051]: New session 3 of user core. Feb 9 19:20:42.868943 sshd[1145]: pam_unix(sshd:session): session closed for user core Feb 9 19:20:42.875357 systemd[1]: Started sshd@3-172.24.4.140:22-172.24.4.1:35732.service. Feb 9 19:20:42.879231 systemd[1]: sshd@2-172.24.4.140:22-172.24.4.1:35720.service: Deactivated successfully. Feb 9 19:20:42.880819 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:20:42.884150 systemd-logind[1051]: Session 3 logged out. Waiting for processes to exit. 
Feb 9 19:20:42.887415 systemd-logind[1051]: Removed session 3. Feb 9 19:20:44.335093 sshd[1151]: Accepted publickey for core from 172.24.4.1 port 35732 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:20:44.338724 sshd[1151]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:44.350119 systemd-logind[1051]: New session 4 of user core. Feb 9 19:20:44.351533 systemd[1]: Started session-4.scope. Feb 9 19:20:45.144767 sshd[1151]: pam_unix(sshd:session): session closed for user core Feb 9 19:20:45.154393 systemd[1]: Started sshd@4-172.24.4.140:22-172.24.4.1:53846.service. Feb 9 19:20:45.156866 systemd[1]: sshd@3-172.24.4.140:22-172.24.4.1:35732.service: Deactivated successfully. Feb 9 19:20:45.158885 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:20:45.162185 systemd-logind[1051]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:20:45.164950 systemd-logind[1051]: Removed session 4. Feb 9 19:20:46.703372 sshd[1157]: Accepted publickey for core from 172.24.4.1 port 53846 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:20:46.706490 sshd[1157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:20:46.717413 systemd-logind[1051]: New session 5 of user core. Feb 9 19:20:46.719213 systemd[1]: Started session-5.scope. Feb 9 19:20:47.207303 sudo[1161]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:20:47.207875 sudo[1161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:20:47.909443 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:20:47.923032 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:20:47.923905 systemd[1]: Reached target network-online.target. Feb 9 19:20:47.926959 systemd[1]: Starting docker.service... 
Feb 9 19:20:48.006325 env[1177]: time="2024-02-09T19:20:48.006248628Z" level=info msg="Starting up"
Feb 9 19:20:48.009490 env[1177]: time="2024-02-09T19:20:48.009451370Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:20:48.009490 env[1177]: time="2024-02-09T19:20:48.009474453Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:20:48.009702 env[1177]: time="2024-02-09T19:20:48.009502034Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:20:48.009702 env[1177]: time="2024-02-09T19:20:48.009517423Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:20:48.012328 env[1177]: time="2024-02-09T19:20:48.011878436Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 9 19:20:48.012328 env[1177]: time="2024-02-09T19:20:48.011898796Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 9 19:20:48.012328 env[1177]: time="2024-02-09T19:20:48.011913933Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 9 19:20:48.012328 env[1177]: time="2024-02-09T19:20:48.011923574Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 9 19:20:48.024777 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2924260430-merged.mount: Deactivated successfully.
Feb 9 19:20:48.139841 env[1177]: time="2024-02-09T19:20:48.139718086Z" level=info msg="Loading containers: start."
Feb 9 19:20:48.367035 kernel: Initializing XFRM netlink socket
Feb 9 19:20:48.458494 env[1177]: time="2024-02-09T19:20:48.458397883Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 9 19:20:48.553669 systemd-networkd[978]: docker0: Link UP
Feb 9 19:20:48.571991 env[1177]: time="2024-02-09T19:20:48.571924528Z" level=info msg="Loading containers: done."
Feb 9 19:20:48.594448 env[1177]: time="2024-02-09T19:20:48.594335451Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 9 19:20:48.594931 env[1177]: time="2024-02-09T19:20:48.594877702Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Feb 9 19:20:48.595182 env[1177]: time="2024-02-09T19:20:48.595137820Z" level=info msg="Daemon has completed initialization"
Feb 9 19:20:48.628650 systemd[1]: Started docker.service.
Feb 9 19:20:48.643084 env[1177]: time="2024-02-09T19:20:48.642981302Z" level=info msg="API listen on /run/docker.sock"
Feb 9 19:20:48.696183 systemd[1]: Reloading.
Feb 9 19:20:48.831080 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-09T19:20:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:20:48.832612 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-09T19:20:48Z" level=info msg="torcx already run"
Feb 9 19:20:48.904195 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:20:48.904218 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:20:48.926623 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:20:49.034526 systemd[1]: Started kubelet.service. Feb 9 19:20:49.158818 kubelet[1360]: E0209 19:20:49.158591 1360 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:20:49.161608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:20:49.161750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:20:50.332778 env[1063]: time="2024-02-09T19:20:50.332680217Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:20:51.190240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573504489.mount: Deactivated successfully. Feb 9 19:20:54.247779 env[1063]: time="2024-02-09T19:20:54.247514281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:20:54.252956 env[1063]: time="2024-02-09T19:20:54.252892169Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:20:54.259639 env[1063]: time="2024-02-09T19:20:54.259595052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:20:54.264399 env[1063]: time="2024-02-09T19:20:54.264346020Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:54.266486 env[1063]: time="2024-02-09T19:20:54.266440610Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\""
Feb 9 19:20:54.294576 env[1063]: time="2024-02-09T19:20:54.294479143Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\""
Feb 9 19:20:57.819240 env[1063]: time="2024-02-09T19:20:57.819071343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:57.823724 env[1063]: time="2024-02-09T19:20:57.823649998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:57.829489 env[1063]: time="2024-02-09T19:20:57.829421432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:57.834413 env[1063]: time="2024-02-09T19:20:57.834337161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:57.837168 env[1063]: time="2024-02-09T19:20:57.837083392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\""
Feb 9 19:20:57.862319 env[1063]: time="2024-02-09T19:20:57.862233591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\""
Feb 9 19:20:59.412662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:20:59.412896 systemd[1]: Stopped kubelet.service.
Feb 9 19:20:59.414506 systemd[1]: Started kubelet.service.
Feb 9 19:20:59.528346 kubelet[1390]: E0209 19:20:59.528281 1390 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:20:59.532044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:20:59.532193 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:20:59.751185 env[1063]: time="2024-02-09T19:20:59.750011905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:59.754438 env[1063]: time="2024-02-09T19:20:59.754381310Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:59.757472 env[1063]: time="2024-02-09T19:20:59.757400882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:59.763752 env[1063]: time="2024-02-09T19:20:59.763698724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:20:59.766076 env[1063]: time="2024-02-09T19:20:59.766015291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\""
Feb 9 19:20:59.787708 env[1063]: time="2024-02-09T19:20:59.787600911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 19:21:01.813894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3344348954.mount: Deactivated successfully.
Feb 9 19:21:02.785129 env[1063]: time="2024-02-09T19:21:02.785017934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:02.811604 env[1063]: time="2024-02-09T19:21:02.811456986Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:02.815119 env[1063]: time="2024-02-09T19:21:02.815040929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:02.818831 env[1063]: time="2024-02-09T19:21:02.818756471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:02.820161 env[1063]: time="2024-02-09T19:21:02.820096555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 19:21:02.848153 env[1063]: time="2024-02-09T19:21:02.848086046Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 9 19:21:03.451897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346140705.mount: Deactivated successfully.
Feb 9 19:21:03.464747 env[1063]: time="2024-02-09T19:21:03.464644472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:03.468085 env[1063]: time="2024-02-09T19:21:03.467982797Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:03.473081 env[1063]: time="2024-02-09T19:21:03.473012716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:03.477640 env[1063]: time="2024-02-09T19:21:03.477487245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:03.478965 env[1063]: time="2024-02-09T19:21:03.478897974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 9 19:21:03.500644 env[1063]: time="2024-02-09T19:21:03.500517249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\""
Feb 9 19:21:04.613594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476463794.mount: Deactivated successfully.
Feb 9 19:21:09.783103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 9 19:21:09.783361 systemd[1]: Stopped kubelet.service.
Feb 9 19:21:09.784984 systemd[1]: Started kubelet.service.
Feb 9 19:21:09.948148 kubelet[1412]: E0209 19:21:09.948099 1412 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:21:09.950301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:21:09.950435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:21:11.373163 env[1063]: time="2024-02-09T19:21:11.373029940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:11.377104 env[1063]: time="2024-02-09T19:21:11.377057894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:11.381319 env[1063]: time="2024-02-09T19:21:11.381276851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:11.384531 env[1063]: time="2024-02-09T19:21:11.384491366Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:11.386327 env[1063]: time="2024-02-09T19:21:11.386275652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\""
Feb 9 19:21:11.410445 env[1063]: time="2024-02-09T19:21:11.410394953Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\""
Feb 9 19:21:12.120019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182619611.mount: Deactivated successfully.
Feb 9 19:21:13.521021 env[1063]: time="2024-02-09T19:21:13.520934477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:13.524100 env[1063]: time="2024-02-09T19:21:13.524034481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:13.526289 env[1063]: time="2024-02-09T19:21:13.526237055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:13.528290 env[1063]: time="2024-02-09T19:21:13.528248809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:21:13.530201 env[1063]: time="2024-02-09T19:21:13.530142172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\""
Feb 9 19:21:18.648517 systemd[1]: Stopped kubelet.service.
Feb 9 19:21:18.669077 systemd[1]: Reloading.
Feb 9 19:21:18.775586 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2024-02-09T19:21:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:21:18.775618 /usr/lib/systemd/system-generators/torcx-generator[1502]: time="2024-02-09T19:21:18Z" level=info msg="torcx already run"
Feb 9 19:21:18.855604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:21:18.855626 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:21:18.879871 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:21:18.990758 systemd[1]: Started kubelet.service.
Feb 9 19:21:19.074376 kubelet[1546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:21:19.074744 kubelet[1546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:21:19.074905 kubelet[1546]: I0209 19:21:19.074878 1546 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:21:19.076238 kubelet[1546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:21:19.076298 kubelet[1546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:21:19.785894 kubelet[1546]: I0209 19:21:19.785849 1546 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:21:19.785894 kubelet[1546]: I0209 19:21:19.785885 1546 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:21:19.786203 kubelet[1546]: I0209 19:21:19.786177 1546 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:21:19.790526 kubelet[1546]: E0209 19:21:19.790507 1546 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.790697 kubelet[1546]: I0209 19:21:19.790683 1546 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:21:19.794466 kubelet[1546]: I0209 19:21:19.794438 1546 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:21:19.794823 kubelet[1546]: I0209 19:21:19.794807 1546 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:21:19.794936 kubelet[1546]: I0209 19:21:19.794911 1546 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:21:19.795036 kubelet[1546]: I0209 19:21:19.794953 1546 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:21:19.795036 kubelet[1546]: I0209 19:21:19.794971 1546 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:21:19.795132 kubelet[1546]: I0209 19:21:19.795115 1546 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:21:19.798695 kubelet[1546]: I0209 19:21:19.798676 1546 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:21:19.798775 kubelet[1546]: I0209 19:21:19.798707 1546 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:21:19.798775 kubelet[1546]: I0209 19:21:19.798739 1546 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:21:19.798775 kubelet[1546]: I0209 19:21:19.798764 1546 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:21:19.800676 kubelet[1546]: I0209 19:21:19.800654 1546 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:21:19.800973 kubelet[1546]: W0209 19:21:19.800949 1546 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:21:19.801468 kubelet[1546]: W0209 19:21:19.801404 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.801528 kubelet[1546]: E0209 19:21:19.801487 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.802024 kubelet[1546]: W0209 19:21:19.801977 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-a855e53d7e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.802079 kubelet[1546]: E0209 19:21:19.802036 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-a855e53d7e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.803056 kubelet[1546]: I0209 19:21:19.803030 1546 server.go:1186] "Started kubelet"
Feb 9 19:21:19.806401 kubelet[1546]: E0209 19:21:19.806383 1546 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:21:19.806516 kubelet[1546]: E0209 19:21:19.806506 1546 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:21:19.806763 kubelet[1546]: E0209 19:21:19.806669 1546 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818bc55e4f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 803000054, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 803000054, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.140:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.140:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:21:19.809503 kubelet[1546]: I0209 19:21:19.809168 1546 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:21:19.811427 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:21:19.811492 kubelet[1546]: I0209 19:21:19.810089 1546 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:21:19.811698 kubelet[1546]: I0209 19:21:19.811684 1546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:21:19.819304 kubelet[1546]: E0209 19:21:19.819256 1546 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.24.4.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-a855e53d7e.novalocal?timeout=10s": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.819530 kubelet[1546]: W0209 19:21:19.819499 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.819643 kubelet[1546]: E0209 19:21:19.819632 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.822656 kubelet[1546]: I0209 19:21:19.822607 1546 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:21:19.823609 kubelet[1546]: I0209 19:21:19.823586 1546 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:21:19.842437 update_engine[1052]: I0209 19:21:19.841662 1052 update_attempter.cc:509] Updating boot flags...
Feb 9 19:21:19.883719 kubelet[1546]: I0209 19:21:19.883678 1546 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:21:19.884001 kubelet[1546]: I0209 19:21:19.883989 1546 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:21:19.884132 kubelet[1546]: I0209 19:21:19.884120 1546 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:21:19.891858 kubelet[1546]: I0209 19:21:19.891695 1546 policy_none.go:49] "None policy: Start"
Feb 9 19:21:19.894167 kubelet[1546]: I0209 19:21:19.894137 1546 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:21:19.894167 kubelet[1546]: I0209 19:21:19.894170 1546 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:21:19.919828 kubelet[1546]: I0209 19:21:19.919786 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:19.920253 kubelet[1546]: E0209 19:21:19.920228 1546 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.140:6443/api/v1/nodes\": dial tcp 172.24.4.140:6443: connect: connection refused" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:19.928431 systemd[1]: Created slice kubepods.slice.
Feb 9 19:21:19.930631 kubelet[1546]: I0209 19:21:19.930605 1546 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:21:19.937235 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 19:21:19.954457 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 19:21:19.978821 kubelet[1546]: I0209 19:21:19.977830 1546 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:21:19.978821 kubelet[1546]: I0209 19:21:19.978035 1546 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:21:19.982533 kubelet[1546]: E0209 19:21:19.982503 1546 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-c-a855e53d7e.novalocal\" not found"
Feb 9 19:21:19.994136 kubelet[1546]: I0209 19:21:19.994104 1546 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:21:19.994136 kubelet[1546]: I0209 19:21:19.994126 1546 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:21:19.994317 kubelet[1546]: I0209 19:21:19.994165 1546 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:21:19.994317 kubelet[1546]: E0209 19:21:19.994207 1546 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:21:19.994929 kubelet[1546]: W0209 19:21:19.994889 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:19.995010 kubelet[1546]: E0209 19:21:19.994956 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.020616 kubelet[1546]: E0209 19:21:20.020458 1546 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.24.4.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-a855e53d7e.novalocal?timeout=10s": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.097191 kubelet[1546]: I0209 19:21:20.094529 1546 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:20.106099 kubelet[1546]: I0209 19:21:20.105471 1546 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:20.109186 kubelet[1546]: I0209 19:21:20.109158 1546 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:20.110933 kubelet[1546]: I0209 19:21:20.110905 1546 status_manager.go:698] "Failed to get status for pod" podUID=df645136375404b2b4f46e9813624b71 pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get \"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused"
Feb 9 19:21:20.122854 systemd[1]: Created slice kubepods-burstable-poddf645136375404b2b4f46e9813624b71.slice.
Feb 9 19:21:20.124884 kubelet[1546]: I0209 19:21:20.124850 1546 status_manager.go:698] "Failed to get status for pod" podUID=b651cbfab2a9c8aafbfe6dc37c18fa27 pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get \"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused"
Feb 9 19:21:20.126919 kubelet[1546]: I0209 19:21:20.126863 1546 status_manager.go:698] "Failed to get status for pod" podUID=925d41abc328b52cd6ceca0947d1cd4f pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get \"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused"
Feb 9 19:21:20.127105 kubelet[1546]: I0209 19:21:20.127060 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.129334 kubelet[1546]: E0209 19:21:20.129012 1546 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.140:6443/api/v1/nodes\": dial tcp 172.24.4.140:6443: connect: connection refused" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.140998 systemd[1]: Created slice kubepods-burstable-podb651cbfab2a9c8aafbfe6dc37c18fa27.slice.
Feb 9 19:21:20.148265 systemd[1]: Created slice kubepods-burstable-pod925d41abc328b52cd6ceca0947d1cd4f.slice.
Feb 9 19:21:20.224949 kubelet[1546]: I0209 19:21:20.224849 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225208 kubelet[1546]: I0209 19:21:20.225023 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225208 kubelet[1546]: I0209 19:21:20.225203 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225358 kubelet[1546]: I0209 19:21:20.225317 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225437 kubelet[1546]: I0209 19:21:20.225396 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225512 kubelet[1546]: I0209 19:21:20.225494 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225673 kubelet[1546]: I0209 19:21:20.225621 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/925d41abc328b52cd6ceca0947d1cd4f-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"925d41abc328b52cd6ceca0947d1cd4f\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225772 kubelet[1546]: I0209 19:21:20.225757 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.225845 kubelet[1546]: I0209 19:21:20.225832 1546 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.423678 kubelet[1546]: E0209 19:21:20.421814 1546 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.24.4.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-a855e53d7e.novalocal?timeout=10s": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.438169 env[1063]: time="2024-02-09T19:21:20.438013514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:df645136375404b2b4f46e9813624b71,Namespace:kube-system,Attempt:0,}"
Feb 9 19:21:20.451504 env[1063]: time="2024-02-09T19:21:20.451356337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:b651cbfab2a9c8aafbfe6dc37c18fa27,Namespace:kube-system,Attempt:0,}"
Feb 9 19:21:20.453333 env[1063]: time="2024-02-09T19:21:20.453202862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:925d41abc328b52cd6ceca0947d1cd4f,Namespace:kube-system,Attempt:0,}"
Feb 9 19:21:20.533299 kubelet[1546]: I0209 19:21:20.533176 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.534072 kubelet[1546]: E0209 19:21:20.534011 1546 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.140:6443/api/v1/nodes\": dial tcp 172.24.4.140:6443: connect: connection refused" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:20.855029 kubelet[1546]: W0209 19:21:20.854894 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.855029 kubelet[1546]: E0209 19:21:20.854987 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.892978 kubelet[1546]: W0209 19:21:20.892773 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-a855e53d7e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.892978 kubelet[1546]: E0209 19:21:20.892924 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-c-a855e53d7e.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.959470 kubelet[1546]: W0209 19:21:20.959304 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:20.959470 kubelet[1546]: E0209 19:21:20.959428 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused
Feb 9 19:21:21.066823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374705137.mount: Deactivated successfully.
Feb 9 19:21:21.078630 env[1063]: time="2024-02-09T19:21:21.078420918Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.082945 env[1063]: time="2024-02-09T19:21:21.082854440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.090712 env[1063]: time="2024-02-09T19:21:21.090637150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.094288 env[1063]: time="2024-02-09T19:21:21.094201835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.098637 env[1063]: time="2024-02-09T19:21:21.098520005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.108058 env[1063]: time="2024-02-09T19:21:21.106814424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.111483 env[1063]: time="2024-02-09T19:21:21.111424735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.116855 env[1063]: time="2024-02-09T19:21:21.116779672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 
19:21:21.119100 env[1063]: time="2024-02-09T19:21:21.119011231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.120999 env[1063]: time="2024-02-09T19:21:21.120921854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.125393 env[1063]: time="2024-02-09T19:21:21.125319875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.135327 kubelet[1546]: W0209 19:21:21.135129 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:21.135327 kubelet[1546]: E0209 19:21:21.135257 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:21.182511 env[1063]: time="2024-02-09T19:21:21.182414993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:21.223718 kubelet[1546]: E0209 19:21:21.223440 1546 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get 
"https://172.24.4.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-a855e53d7e.novalocal?timeout=10s": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:21.231056 env[1063]: time="2024-02-09T19:21:21.230895810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:21:21.231369 env[1063]: time="2024-02-09T19:21:21.231005699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:21:21.231369 env[1063]: time="2024-02-09T19:21:21.231040217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:21:21.232104 env[1063]: time="2024-02-09T19:21:21.231937828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22eae8517079118d45fb44259e5bb107483cb6e7e2b3317d0028afc6f9c5a63d pid=1637 runtime=io.containerd.runc.v2 Feb 9 19:21:21.245901 env[1063]: time="2024-02-09T19:21:21.245793078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:21:21.246267 env[1063]: time="2024-02-09T19:21:21.246197343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:21:21.246486 env[1063]: time="2024-02-09T19:21:21.246422111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:21:21.247486 env[1063]: time="2024-02-09T19:21:21.247429561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/961871ebb9637e3872837574ba798791e5b5677a400d0341688953f51d76d98a pid=1659 runtime=io.containerd.runc.v2 Feb 9 19:21:21.251959 systemd[1]: Started cri-containerd-22eae8517079118d45fb44259e5bb107483cb6e7e2b3317d0028afc6f9c5a63d.scope. Feb 9 19:21:21.272924 env[1063]: time="2024-02-09T19:21:21.272815160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:21:21.272924 env[1063]: time="2024-02-09T19:21:21.272875637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:21:21.273262 env[1063]: time="2024-02-09T19:21:21.272899030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:21:21.273607 env[1063]: time="2024-02-09T19:21:21.273555385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ec20ab370beaf881ef50ee345db49b14b44bb0ab3513d533b480234550f5ff3 pid=1658 runtime=io.containerd.runc.v2 Feb 9 19:21:21.289596 systemd[1]: Started cri-containerd-961871ebb9637e3872837574ba798791e5b5677a400d0341688953f51d76d98a.scope. Feb 9 19:21:21.298779 systemd[1]: Started cri-containerd-2ec20ab370beaf881ef50ee345db49b14b44bb0ab3513d533b480234550f5ff3.scope. 
Feb 9 19:21:21.346917 kubelet[1546]: I0209 19:21:21.346380 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:21.346917 kubelet[1546]: E0209 19:21:21.346868 1546 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.140:6443/api/v1/nodes\": dial tcp 172.24.4.140:6443: connect: connection refused" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:21.368178 env[1063]: time="2024-02-09T19:21:21.367314843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:df645136375404b2b4f46e9813624b71,Namespace:kube-system,Attempt:0,} returns sandbox id \"22eae8517079118d45fb44259e5bb107483cb6e7e2b3317d0028afc6f9c5a63d\"" Feb 9 19:21:21.378266 env[1063]: time="2024-02-09T19:21:21.377527671Z" level=info msg="CreateContainer within sandbox \"22eae8517079118d45fb44259e5bb107483cb6e7e2b3317d0028afc6f9c5a63d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:21:21.379262 env[1063]: time="2024-02-09T19:21:21.379218247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:925d41abc328b52cd6ceca0947d1cd4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ec20ab370beaf881ef50ee345db49b14b44bb0ab3513d533b480234550f5ff3\"" Feb 9 19:21:21.382393 env[1063]: time="2024-02-09T19:21:21.382353200Z" level=info msg="CreateContainer within sandbox \"2ec20ab370beaf881ef50ee345db49b14b44bb0ab3513d533b480234550f5ff3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:21:21.391088 env[1063]: time="2024-02-09T19:21:21.391043654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal,Uid:b651cbfab2a9c8aafbfe6dc37c18fa27,Namespace:kube-system,Attempt:0,} returns sandbox id \"961871ebb9637e3872837574ba798791e5b5677a400d0341688953f51d76d98a\"" Feb 9 
19:21:21.393890 env[1063]: time="2024-02-09T19:21:21.393856608Z" level=info msg="CreateContainer within sandbox \"961871ebb9637e3872837574ba798791e5b5677a400d0341688953f51d76d98a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:21:21.715628 env[1063]: time="2024-02-09T19:21:21.714166949Z" level=info msg="CreateContainer within sandbox \"961871ebb9637e3872837574ba798791e5b5677a400d0341688953f51d76d98a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e0efa6cfc8bb312b8ab6c077a01e3dc6cc165627d4af34871c9674e8ef50536\"" Feb 9 19:21:21.717372 env[1063]: time="2024-02-09T19:21:21.717256990Z" level=info msg="StartContainer for \"4e0efa6cfc8bb312b8ab6c077a01e3dc6cc165627d4af34871c9674e8ef50536\"" Feb 9 19:21:21.725635 env[1063]: time="2024-02-09T19:21:21.725477791Z" level=info msg="CreateContainer within sandbox \"22eae8517079118d45fb44259e5bb107483cb6e7e2b3317d0028afc6f9c5a63d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"adc868aab126828f1a9c572b40a59f5220c618feda95c21a2972957d7513ac37\"" Feb 9 19:21:21.726643 env[1063]: time="2024-02-09T19:21:21.726534072Z" level=info msg="CreateContainer within sandbox \"2ec20ab370beaf881ef50ee345db49b14b44bb0ab3513d533b480234550f5ff3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f7b5b223ed67b90671e774b04b6610542092fb86883dcceb460e154a744fb02\"" Feb 9 19:21:21.726945 env[1063]: time="2024-02-09T19:21:21.726872106Z" level=info msg="StartContainer for \"adc868aab126828f1a9c572b40a59f5220c618feda95c21a2972957d7513ac37\"" Feb 9 19:21:21.730148 env[1063]: time="2024-02-09T19:21:21.730067828Z" level=info msg="StartContainer for \"6f7b5b223ed67b90671e774b04b6610542092fb86883dcceb460e154a744fb02\"" Feb 9 19:21:21.773466 systemd[1]: Started cri-containerd-4e0efa6cfc8bb312b8ab6c077a01e3dc6cc165627d4af34871c9674e8ef50536.scope. 
Feb 9 19:21:21.790516 systemd[1]: Started cri-containerd-6f7b5b223ed67b90671e774b04b6610542092fb86883dcceb460e154a744fb02.scope. Feb 9 19:21:21.797729 systemd[1]: Started cri-containerd-adc868aab126828f1a9c572b40a59f5220c618feda95c21a2972957d7513ac37.scope. Feb 9 19:21:21.863040 env[1063]: time="2024-02-09T19:21:21.862942651Z" level=info msg="StartContainer for \"6f7b5b223ed67b90671e774b04b6610542092fb86883dcceb460e154a744fb02\" returns successfully" Feb 9 19:21:21.868318 kubelet[1546]: E0209 19:21:21.868272 1546 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:21.875446 env[1063]: time="2024-02-09T19:21:21.875385936Z" level=info msg="StartContainer for \"4e0efa6cfc8bb312b8ab6c077a01e3dc6cc165627d4af34871c9674e8ef50536\" returns successfully" Feb 9 19:21:21.908839 env[1063]: time="2024-02-09T19:21:21.908776699Z" level=info msg="StartContainer for \"adc868aab126828f1a9c572b40a59f5220c618feda95c21a2972957d7513ac37\" returns successfully" Feb 9 19:21:22.015516 kubelet[1546]: I0209 19:21:22.015244 1546 status_manager.go:698] "Failed to get status for pod" podUID=925d41abc328b52cd6ceca0947d1cd4f pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get \"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused" Feb 9 19:21:22.019723 kubelet[1546]: I0209 19:21:22.019693 1546 status_manager.go:698] "Failed to get status for pod" podUID=df645136375404b2b4f46e9813624b71 pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get 
\"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused" Feb 9 19:21:22.023338 kubelet[1546]: I0209 19:21:22.023304 1546 status_manager.go:698] "Failed to get status for pod" podUID=b651cbfab2a9c8aafbfe6dc37c18fa27 pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal" err="Get \"https://172.24.4.140:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\": dial tcp 172.24.4.140:6443: connect: connection refused" Feb 9 19:21:22.477850 kubelet[1546]: W0209 19:21:22.477773 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:22.477850 kubelet[1546]: E0209 19:21:22.477846 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:22.824903 kubelet[1546]: E0209 19:21:22.824841 1546 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.24.4.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-c-a855e53d7e.novalocal?timeout=10s": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:22.948988 kubelet[1546]: I0209 19:21:22.948956 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:22.949287 kubelet[1546]: E0209 19:21:22.949265 1546 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.140:6443/api/v1/nodes\": dial tcp 172.24.4.140:6443: connect: connection 
refused" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:22.970699 kubelet[1546]: W0209 19:21:22.970650 1546 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:22.970699 kubelet[1546]: E0209 19:21:22.970704 1546 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.140:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.140:6443: connect: connection refused Feb 9 19:21:25.801455 kubelet[1546]: I0209 19:21:25.801079 1546 apiserver.go:52] "Watching apiserver" Feb 9 19:21:25.924157 kubelet[1546]: I0209 19:21:25.924115 1546 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:21:25.968390 kubelet[1546]: I0209 19:21:25.968341 1546 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:21:26.037097 kubelet[1546]: E0209 19:21:26.037043 1546 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-c-a855e53d7e.novalocal\" not found" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:26.154609 kubelet[1546]: I0209 19:21:26.153895 1546 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:26.172008 kubelet[1546]: I0209 19:21:26.171940 1546 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-c-a855e53d7e.novalocal" Feb 9 19:21:26.282819 kubelet[1546]: E0209 19:21:26.282636 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818bc55e4f6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 803000054, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 803000054, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.347339 kubelet[1546]: E0209 19:21:26.347123 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818bc8b1bb5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 806487477, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 806487477, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.405313 kubelet[1546]: E0209 19:21:26.404954 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2bc31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877241905, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877241905, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.464635 kubelet[1546]: E0209 19:21:26.464419 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2e32f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877251887, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877251887, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.523697 kubelet[1546]: E0209 19:21:26.523447 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2efc2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877255106, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877255106, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.595353 kubelet[1546]: E0209 19:21:26.595134 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2bc31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877241905, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 919723077, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.657978 kubelet[1546]: E0209 19:21:26.657645 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2e32f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877251887, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 919728841, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.721449 kubelet[1546]: E0209 19:21:26.721240 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2efc2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877255106, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 919733802, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:26.780697 kubelet[1546]: E0209 19:21:26.780452 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c79b55cf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 992100303, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 992100303, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:27.074193 kubelet[1546]: E0209 19:21:27.073994 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2bc31", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877241905, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 20, 105343819, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 19:21:27.478073 kubelet[1546]: E0209 19:21:27.477801 1546 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-c-a855e53d7e.novalocal.17b24818c0c2e32f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-c-a855e53d7e.novalocal", UID:"ci-3510-3-2-c-a855e53d7e.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510-3-2-c-a855e53d7e.novalocal status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-c-a855e53d7e.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 21, 19, 877251887, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 21, 20, 105353250, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 19:21:29.101580 systemd[1]: Reloading. 
Feb 9 19:21:29.232089 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2024-02-09T19:21:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:21:29.232600 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2024-02-09T19:21:29Z" level=info msg="torcx already run"
Feb 9 19:21:29.319467 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:21:29.319726 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:21:29.342653 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:21:29.494073 kubelet[1546]: I0209 19:21:29.493815 1546 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:21:29.497410 systemd[1]: Stopping kubelet.service...
Feb 9 19:21:29.513301 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:21:29.514246 systemd[1]: Stopped kubelet.service.
Feb 9 19:21:29.514307 systemd[1]: kubelet.service: Consumed 1.410s CPU time.
Feb 9 19:21:29.516747 systemd[1]: Started kubelet.service.
Feb 9 19:21:29.651696 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:21:29.652124 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:21:29.652284 kubelet[1932]: I0209 19:21:29.652253 1932 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:21:29.653860 kubelet[1932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:21:29.653969 kubelet[1932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:21:29.660289 kubelet[1932]: I0209 19:21:29.660229 1932 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:21:29.660289 kubelet[1932]: I0209 19:21:29.660269 1932 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:21:29.660815 kubelet[1932]: I0209 19:21:29.660785 1932 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:21:29.662453 kubelet[1932]: I0209 19:21:29.662422 1932 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:21:29.668090 kubelet[1932]: I0209 19:21:29.667638 1932 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:21:29.685890 kubelet[1932]: I0209 19:21:29.685855 1932 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:21:29.686156 kubelet[1932]: I0209 19:21:29.686129 1932 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:21:29.686290 kubelet[1932]: I0209 19:21:29.686269 1932 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:21:29.686827 kubelet[1932]: I0209 19:21:29.686306 1932 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:21:29.686827 kubelet[1932]: I0209 19:21:29.686324 1932 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:21:29.686827 kubelet[1932]: I0209 19:21:29.686709 1932 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:21:29.697595 kubelet[1932]: I0209 19:21:29.696894 1932 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:21:29.697595 kubelet[1932]: I0209 19:21:29.696928 1932 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:21:29.697595 kubelet[1932]: I0209 19:21:29.696958 1932 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:21:29.697595 kubelet[1932]: I0209 19:21:29.696981 1932 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:21:29.698434 sudo[1945]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 9 19:21:29.698701 sudo[1945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 9 19:21:29.702355 kubelet[1932]: I0209 19:21:29.702330 1932 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:21:29.703081 kubelet[1932]: I0209 19:21:29.703059 1932 server.go:1186] "Started kubelet"
Feb 9 19:21:29.705594 kubelet[1932]: I0209 19:21:29.705530 1932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:21:29.709529 kubelet[1932]: I0209 19:21:29.709499 1932 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:21:29.710327 kubelet[1932]: I0209 19:21:29.710313 1932 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:21:29.713287 kubelet[1932]: I0209 19:21:29.713265 1932 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:21:29.715451 kubelet[1932]: I0209 19:21:29.715419 1932 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:21:29.729800 kubelet[1932]: E0209 19:21:29.729770 1932 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:21:29.730033 kubelet[1932]: E0209 19:21:29.730013 1932 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:21:29.734193 kubelet[1932]: I0209 19:21:29.734174 1932 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:21:29.818410 kubelet[1932]: I0209 19:21:29.818386 1932 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:29.825174 kubelet[1932]: I0209 19:21:29.825144 1932 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:21:29.825343 kubelet[1932]: I0209 19:21:29.825331 1932 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:21:29.825415 kubelet[1932]: I0209 19:21:29.825406 1932 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:21:29.825519 kubelet[1932]: E0209 19:21:29.825509 1932 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:21:29.867180 kubelet[1932]: I0209 19:21:29.867149 1932 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:29.867443 kubelet[1932]: I0209 19:21:29.867414 1932 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:29.873899 kubelet[1932]: I0209 19:21:29.873876 1932 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:21:29.874069 kubelet[1932]: I0209 19:21:29.874056 1932 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:21:29.874156 kubelet[1932]: I0209 19:21:29.874146 1932 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:21:29.874377 kubelet[1932]: I0209 19:21:29.874363 1932 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:21:29.874460 kubelet[1932]: I0209 19:21:29.874449 1932 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 9 19:21:29.874567 kubelet[1932]: I0209 19:21:29.874525 1932 policy_none.go:49] "None policy: Start"
Feb 9 19:21:29.875323 kubelet[1932]: I0209 19:21:29.875310 1932 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:21:29.875417 kubelet[1932]: I0209 19:21:29.875406 1932 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:21:29.875712 kubelet[1932]: I0209 19:21:29.875699 1932 state_mem.go:75] "Updated machine memory state"
Feb 9 19:21:29.883024 kubelet[1932]: I0209 19:21:29.883002 1932 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:21:29.883391 kubelet[1932]: I0209 19:21:29.883376 1932 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:21:29.926916 kubelet[1932]: I0209 19:21:29.926871 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:29.927210 kubelet[1932]: I0209 19:21:29.927198 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:29.927336 kubelet[1932]: I0209 19:21:29.927323 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:29.943067 kubelet[1932]: E0209 19:21:29.943027 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:29.950877 kubelet[1932]: E0209 19:21:29.950810 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018074 kubelet[1932]: I0209 19:21:30.018038 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018332 kubelet[1932]: I0209 19:21:30.018319 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018456 kubelet[1932]: I0209 19:21:30.018444 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018613 kubelet[1932]: I0209 19:21:30.018577 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018820 kubelet[1932]: I0209 19:21:30.018809 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.018928 kubelet[1932]: I0209 19:21:30.018918 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/925d41abc328b52cd6ceca0947d1cd4f-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"925d41abc328b52cd6ceca0947d1cd4f\") " pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.019040 kubelet[1932]: I0209 19:21:30.019029 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.019143 kubelet[1932]: I0209 19:21:30.019133 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df645136375404b2b4f46e9813624b71-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"df645136375404b2b4f46e9813624b71\") " pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.019252 kubelet[1932]: I0209 19:21:30.019242 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b651cbfab2a9c8aafbfe6dc37c18fa27-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" (UID: \"b651cbfab2a9c8aafbfe6dc37c18fa27\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:30.485093 sudo[1945]: pam_unix(sudo:session): session closed for user root
Feb 9 19:21:30.720959 kubelet[1932]: I0209 19:21:30.720904 1932 apiserver.go:52] "Watching apiserver"
Feb 9 19:21:30.816075 kubelet[1932]: I0209 19:21:30.815988 1932 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:21:30.826333 kubelet[1932]: I0209 19:21:30.826299 1932 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:21:31.112644 kubelet[1932]: E0209 19:21:31.109864 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:31.304558 kubelet[1932]: E0209 19:21:31.304487 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:31.509873 kubelet[1932]: E0209 19:21:31.509607 1932 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal"
Feb 9 19:21:31.742490 kubelet[1932]: I0209 19:21:31.742412 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-c-a855e53d7e.novalocal" podStartSLOduration=4.740785532 pod.CreationTimestamp="2024-02-09 19:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:21:31.740641617 +0000 UTC m=+2.211723427" watchObservedRunningTime="2024-02-09 19:21:31.740785532 +0000 UTC m=+2.211867292"
Feb 9 19:21:32.301215 sudo[1161]: pam_unix(sudo:session): session closed for user root
Feb 9 19:21:32.510679 kubelet[1932]: I0209 19:21:32.510594 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-c-a855e53d7e.novalocal" podStartSLOduration=2.5104797359999997 pod.CreationTimestamp="2024-02-09 19:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:21:32.107783736 +0000 UTC m=+2.578865476" watchObservedRunningTime="2024-02-09 19:21:32.510479736 +0000 UTC m=+2.981561516"
Feb 9 19:21:32.607113 sshd[1157]: pam_unix(sshd:session): session closed for user core
Feb 9 19:21:32.613774 systemd-logind[1051]: Session 5 logged out. Waiting for processes to exit.
Feb 9 19:21:32.615359 systemd[1]: sshd@4-172.24.4.140:22-172.24.4.1:53846.service: Deactivated successfully.
Feb 9 19:21:32.617218 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 19:21:32.617614 systemd[1]: session-5.scope: Consumed 7.069s CPU time.
Feb 9 19:21:32.619168 systemd-logind[1051]: Removed session 5.
Feb 9 19:21:32.917595 kubelet[1932]: I0209 19:21:32.916836 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-c-a855e53d7e.novalocal" podStartSLOduration=4.916753482 pod.CreationTimestamp="2024-02-09 19:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:21:32.511372131 +0000 UTC m=+2.982453921" watchObservedRunningTime="2024-02-09 19:21:32.916753482 +0000 UTC m=+3.387835263"
Feb 9 19:21:42.655683 kubelet[1932]: I0209 19:21:42.655573 1932 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:21:42.656290 env[1063]: time="2024-02-09T19:21:42.655991105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:21:42.657078 kubelet[1932]: I0209 19:21:42.657058 1932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:21:42.709681 kubelet[1932]: I0209 19:21:42.709648 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:42.716213 systemd[1]: Created slice kubepods-burstable-pod6eef21c9_6e16_4f08_b109_cd948d0b83be.slice.
Feb 9 19:21:42.718101 kubelet[1932]: I0209 19:21:42.718066 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:42.725106 systemd[1]: Created slice kubepods-besteffort-podfdcda863_8c67_4f2e_82ce_da058ee7b91b.slice.
Feb 9 19:21:42.736401 kubelet[1932]: W0209 19:21:42.736373 1932 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.736623 kubelet[1932]: E0209 19:21:42.736612 1932 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.736831 kubelet[1932]: W0209 19:21:42.736816 1932 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.736913 kubelet[1932]: E0209 19:21:42.736902 1932 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.737025 kubelet[1932]: W0209 19:21:42.737011 1932 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.737108 kubelet[1932]: E0209 19:21:42.737098 1932 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.737216 kubelet[1932]: W0209 19:21:42.737203 1932 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.737293 kubelet[1932]: E0209 19:21:42.737281 1932 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-c-a855e53d7e.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-c-a855e53d7e.novalocal' and this object
Feb 9 19:21:42.807503 kubelet[1932]: I0209 19:21:42.807413 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-xtables-lock\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807503 kubelet[1932]: I0209 19:21:42.807480 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-cgroup\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807503 kubelet[1932]: I0209 19:21:42.807521 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807810 kubelet[1932]: I0209 19:21:42.807580 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-config-path\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807810 kubelet[1932]: I0209 19:21:42.807611 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-net\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807810 kubelet[1932]: I0209 19:21:42.807651 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.807810 kubelet[1932]: I0209 19:21:42.807699 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhk8\" (UniqueName: \"kubernetes.io/projected/fdcda863-8c67-4f2e-82ce-da058ee7b91b-kube-api-access-nnhk8\") pod \"kube-proxy-tc5bt\" (UID: \"fdcda863-8c67-4f2e-82ce-da058ee7b91b\") " pod="kube-system/kube-proxy-tc5bt"
Feb 9 19:21:42.807810 kubelet[1932]: I0209 19:21:42.807746 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-bpf-maps\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807772 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-kernel\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807822 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-lib-modules\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807850 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-hostproc\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807873 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-etc-cni-netd\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807924 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4fw\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-kube-api-access-2p4fw\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808001 kubelet[1932]: I0209 19:21:42.807955 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdcda863-8c67-4f2e-82ce-da058ee7b91b-kube-proxy\") pod \"kube-proxy-tc5bt\" (UID: \"fdcda863-8c67-4f2e-82ce-da058ee7b91b\") " pod="kube-system/kube-proxy-tc5bt"
Feb 9 19:21:42.808246 kubelet[1932]: I0209 19:21:42.807996 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-run\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808246 kubelet[1932]: I0209 19:21:42.808023 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cni-path\") pod \"cilium-kh7jv\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " pod="kube-system/cilium-kh7jv"
Feb 9 19:21:42.808246 kubelet[1932]: I0209 19:21:42.808067 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdcda863-8c67-4f2e-82ce-da058ee7b91b-xtables-lock\") pod \"kube-proxy-tc5bt\" (UID: \"fdcda863-8c67-4f2e-82ce-da058ee7b91b\") " pod="kube-system/kube-proxy-tc5bt"
Feb 9 19:21:42.808246 kubelet[1932]: I0209 19:21:42.808103 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdcda863-8c67-4f2e-82ce-da058ee7b91b-lib-modules\") pod \"kube-proxy-tc5bt\" (UID: \"fdcda863-8c67-4f2e-82ce-da058ee7b91b\") " pod="kube-system/kube-proxy-tc5bt"
Feb 9 19:21:43.522113 kubelet[1932]: I0209 19:21:43.522012 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:21:43.531469 systemd[1]: Created slice kubepods-besteffort-podc98c8dec_98fe_4f8b_8f38_5e48fb453207.slice.
Feb 9 19:21:43.615751 kubelet[1932]: I0209 19:21:43.615704 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xqh\" (UniqueName: \"kubernetes.io/projected/c98c8dec-98fe-4f8b-8f38-5e48fb453207-kube-api-access-p9xqh\") pod \"cilium-operator-f59cbd8c6-gzfrq\" (UID: \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\") " pod="kube-system/cilium-operator-f59cbd8c6-gzfrq"
Feb 9 19:21:43.616150 kubelet[1932]: I0209 19:21:43.616109 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c98c8dec-98fe-4f8b-8f38-5e48fb453207-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-gzfrq\" (UID: \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\") " pod="kube-system/cilium-operator-f59cbd8c6-gzfrq"
Feb 9 19:21:43.916906 kubelet[1932]: E0209 19:21:43.916830 1932 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 9 19:21:43.916906 kubelet[1932]: E0209 19:21:43.916902 1932 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-kh7jv: failed to sync secret cache: timed out waiting for the condition
Feb 9 19:21:43.918105 kubelet[1932]: E0209 19:21:43.917016 1932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls podName:6eef21c9-6e16-4f08-b109-cd948d0b83be nodeName:}" failed. No retries permitted until 2024-02-09 19:21:44.416978532 +0000 UTC m=+14.888060302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls") pod "cilium-kh7jv" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be") : failed to sync secret cache: timed out waiting for the condition
Feb 9 19:21:43.918712 kubelet[1932]: E0209 19:21:43.918668 1932 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 9 19:21:43.919303 kubelet[1932]: E0209 19:21:43.919274 1932 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets podName:6eef21c9-6e16-4f08-b109-cd948d0b83be nodeName:}" failed. No retries permitted until 2024-02-09 19:21:44.419184107 +0000 UTC m=+14.890265888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets") pod "cilium-kh7jv" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be") : failed to sync secret cache: timed out waiting for the condition
Feb 9 19:21:43.936275 env[1063]: time="2024-02-09T19:21:43.936184981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tc5bt,Uid:fdcda863-8c67-4f2e-82ce-da058ee7b91b,Namespace:kube-system,Attempt:0,}"
Feb 9 19:21:43.973977 env[1063]: time="2024-02-09T19:21:43.973467496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:21:43.973977 env[1063]: time="2024-02-09T19:21:43.973588122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:21:43.973977 env[1063]: time="2024-02-09T19:21:43.973652253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:21:43.974386 env[1063]: time="2024-02-09T19:21:43.974151581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07 pid=2033 runtime=io.containerd.runc.v2
Feb 9 19:21:44.016094 systemd[1]: Started cri-containerd-f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07.scope.
Feb 9 19:21:44.057723 env[1063]: time="2024-02-09T19:21:44.057679010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tc5bt,Uid:fdcda863-8c67-4f2e-82ce-da058ee7b91b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07\""
Feb 9 19:21:44.062304 env[1063]: time="2024-02-09T19:21:44.062265878Z" level=info msg="CreateContainer within sandbox \"f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:21:44.084682 env[1063]: time="2024-02-09T19:21:44.084634704Z" level=info msg="CreateContainer within sandbox \"f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5add26fdfc3b27424ef27627d34e45e40dea6ed79c04eebadb1808f10a7d79d\""
Feb 9 19:21:44.085745 env[1063]: time="2024-02-09T19:21:44.085697256Z" level=info msg="StartContainer for \"f5add26fdfc3b27424ef27627d34e45e40dea6ed79c04eebadb1808f10a7d79d\""
Feb 9 19:21:44.103991 systemd[1]: Started cri-containerd-f5add26fdfc3b27424ef27627d34e45e40dea6ed79c04eebadb1808f10a7d79d.scope.
Feb 9 19:21:44.141112 env[1063]: time="2024-02-09T19:21:44.141028363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gzfrq,Uid:c98c8dec-98fe-4f8b-8f38-5e48fb453207,Namespace:kube-system,Attempt:0,}" Feb 9 19:21:44.163393 env[1063]: time="2024-02-09T19:21:44.163341205Z" level=info msg="StartContainer for \"f5add26fdfc3b27424ef27627d34e45e40dea6ed79c04eebadb1808f10a7d79d\" returns successfully" Feb 9 19:21:44.185425 env[1063]: time="2024-02-09T19:21:44.184406429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:21:44.185425 env[1063]: time="2024-02-09T19:21:44.184484057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:21:44.185425 env[1063]: time="2024-02-09T19:21:44.184499017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:21:44.185425 env[1063]: time="2024-02-09T19:21:44.184814960Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb pid=2105 runtime=io.containerd.runc.v2 Feb 9 19:21:44.200687 systemd[1]: Started cri-containerd-c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb.scope. 
Feb 9 19:21:44.267837 env[1063]: time="2024-02-09T19:21:44.267776866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-gzfrq,Uid:c98c8dec-98fe-4f8b-8f38-5e48fb453207,Namespace:kube-system,Attempt:0,} returns sandbox id \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\"" Feb 9 19:21:44.272811 env[1063]: time="2024-02-09T19:21:44.272779981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:21:44.522733 env[1063]: time="2024-02-09T19:21:44.522465784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kh7jv,Uid:6eef21c9-6e16-4f08-b109-cd948d0b83be,Namespace:kube-system,Attempt:0,}" Feb 9 19:21:44.745138 systemd[1]: run-containerd-runc-k8s.io-f4a773212999df97f6204f8fb2366e76813dec22f33987f48fffb48395d1ac07-runc.FqmRTl.mount: Deactivated successfully. Feb 9 19:21:44.846082 env[1063]: time="2024-02-09T19:21:44.846010599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:21:44.846319 env[1063]: time="2024-02-09T19:21:44.846293935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:21:44.846431 env[1063]: time="2024-02-09T19:21:44.846407045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:21:44.846717 env[1063]: time="2024-02-09T19:21:44.846689249Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448 pid=2149 runtime=io.containerd.runc.v2 Feb 9 19:21:44.888962 systemd[1]: Started cri-containerd-318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448.scope. 
Feb 9 19:21:44.945874 env[1063]: time="2024-02-09T19:21:44.945803677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kh7jv,Uid:6eef21c9-6e16-4f08-b109-cd948d0b83be,Namespace:kube-system,Attempt:0,} returns sandbox id \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\"" Feb 9 19:21:46.063797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448930064.mount: Deactivated successfully. Feb 9 19:21:47.295748 env[1063]: time="2024-02-09T19:21:47.295689581Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:47.299454 env[1063]: time="2024-02-09T19:21:47.299370304Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:47.303515 env[1063]: time="2024-02-09T19:21:47.303449893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:21:47.304254 env[1063]: time="2024-02-09T19:21:47.304223898Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:21:47.308046 env[1063]: time="2024-02-09T19:21:47.307999853Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:21:47.311712 env[1063]: time="2024-02-09T19:21:47.311671618Z" level=info msg="CreateContainer within sandbox 
\"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:21:47.340433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200381410.mount: Deactivated successfully. Feb 9 19:21:47.346112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827114510.mount: Deactivated successfully. Feb 9 19:21:47.365519 env[1063]: time="2024-02-09T19:21:47.365478449Z" level=info msg="CreateContainer within sandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\"" Feb 9 19:21:47.366528 env[1063]: time="2024-02-09T19:21:47.366507099Z" level=info msg="StartContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\"" Feb 9 19:21:47.395094 systemd[1]: Started cri-containerd-615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac.scope. 
Feb 9 19:21:47.463479 env[1063]: time="2024-02-09T19:21:47.463396899Z" level=info msg="StartContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" returns successfully" Feb 9 19:21:48.185413 kubelet[1932]: I0209 19:21:48.185338 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tc5bt" podStartSLOduration=6.185245132 pod.CreationTimestamp="2024-02-09 19:21:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:21:44.920428466 +0000 UTC m=+15.391510196" watchObservedRunningTime="2024-02-09 19:21:48.185245132 +0000 UTC m=+18.656326912" Feb 9 19:21:49.862960 kubelet[1932]: I0209 19:21:49.862793 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-gzfrq" podStartSLOduration=-9.223372029992035e+09 pod.CreationTimestamp="2024-02-09 19:21:43 +0000 UTC" firstStartedPulling="2024-02-09 19:21:44.269815712 +0000 UTC m=+14.740897442" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:21:48.18703752 +0000 UTC m=+18.658119340" watchObservedRunningTime="2024-02-09 19:21:49.862740778 +0000 UTC m=+20.333822508" Feb 9 19:21:55.640511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1011314933.mount: Deactivated successfully. 
Feb 9 19:22:00.197701 env[1063]: time="2024-02-09T19:22:00.197394530Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:00.202236 env[1063]: time="2024-02-09T19:22:00.201455164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:00.209760 env[1063]: time="2024-02-09T19:22:00.209649888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:22:00.211585 env[1063]: time="2024-02-09T19:22:00.211489119Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:22:00.216932 env[1063]: time="2024-02-09T19:22:00.216818929Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:22:00.237871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467515366.mount: Deactivated successfully. Feb 9 19:22:00.258630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725181677.mount: Deactivated successfully. 
Feb 9 19:22:00.261016 env[1063]: time="2024-02-09T19:22:00.260340738Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\"" Feb 9 19:22:00.264043 env[1063]: time="2024-02-09T19:22:00.261647339Z" level=info msg="StartContainer for \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\"" Feb 9 19:22:00.347504 systemd[1]: Started cri-containerd-d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563.scope. Feb 9 19:22:00.513233 systemd[1]: cri-containerd-d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563.scope: Deactivated successfully. Feb 9 19:22:00.539630 env[1063]: time="2024-02-09T19:22:00.539456715Z" level=info msg="StartContainer for \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\" returns successfully" Feb 9 19:22:00.897887 env[1063]: time="2024-02-09T19:22:00.897789650Z" level=info msg="shim disconnected" id=d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563 Feb 9 19:22:00.898357 env[1063]: time="2024-02-09T19:22:00.898311158Z" level=warning msg="cleaning up after shim disconnected" id=d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563 namespace=k8s.io Feb 9 19:22:00.898524 env[1063]: time="2024-02-09T19:22:00.898489062Z" level=info msg="cleaning up dead shim" Feb 9 19:22:00.915752 env[1063]: time="2024-02-09T19:22:00.915633943Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:22:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2410 runtime=io.containerd.runc.v2\n" Feb 9 19:22:00.977926 env[1063]: time="2024-02-09T19:22:00.977759917Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:22:01.034314 
env[1063]: time="2024-02-09T19:22:01.034258388Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\"" Feb 9 19:22:01.035418 env[1063]: time="2024-02-09T19:22:01.035390731Z" level=info msg="StartContainer for \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\"" Feb 9 19:22:01.056955 systemd[1]: Started cri-containerd-0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d.scope. Feb 9 19:22:01.109863 env[1063]: time="2024-02-09T19:22:01.109741690Z" level=info msg="StartContainer for \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\" returns successfully" Feb 9 19:22:01.116271 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:22:01.116668 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:22:01.116995 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:22:01.120719 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:22:01.125188 systemd[1]: cri-containerd-0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d.scope: Deactivated successfully. Feb 9 19:22:01.159474 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:22:01.233963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563-rootfs.mount: Deactivated successfully. 
Feb 9 19:22:01.288957 env[1063]: time="2024-02-09T19:22:01.288809584Z" level=info msg="shim disconnected" id=0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d Feb 9 19:22:01.288957 env[1063]: time="2024-02-09T19:22:01.288934783Z" level=warning msg="cleaning up after shim disconnected" id=0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d namespace=k8s.io Feb 9 19:22:01.288957 env[1063]: time="2024-02-09T19:22:01.288960465Z" level=info msg="cleaning up dead shim" Feb 9 19:22:01.329905 env[1063]: time="2024-02-09T19:22:01.306943957Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:22:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2477 runtime=io.containerd.runc.v2\n" Feb 9 19:22:01.977433 env[1063]: time="2024-02-09T19:22:01.977322791Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:22:02.022015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474356377.mount: Deactivated successfully. Feb 9 19:22:02.033774 env[1063]: time="2024-02-09T19:22:02.033626846Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\"" Feb 9 19:22:02.036597 env[1063]: time="2024-02-09T19:22:02.035175174Z" level=info msg="StartContainer for \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\"" Feb 9 19:22:02.084443 systemd[1]: Started cri-containerd-8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035.scope. 
Feb 9 19:22:02.157941 env[1063]: time="2024-02-09T19:22:02.157873364Z" level=info msg="StartContainer for \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\" returns successfully" Feb 9 19:22:02.171433 systemd[1]: cri-containerd-8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035.scope: Deactivated successfully. Feb 9 19:22:02.221888 env[1063]: time="2024-02-09T19:22:02.221835028Z" level=info msg="shim disconnected" id=8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035 Feb 9 19:22:02.222969 env[1063]: time="2024-02-09T19:22:02.222058532Z" level=warning msg="cleaning up after shim disconnected" id=8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035 namespace=k8s.io Feb 9 19:22:02.222969 env[1063]: time="2024-02-09T19:22:02.222075586Z" level=info msg="cleaning up dead shim" Feb 9 19:22:02.230785 systemd[1]: run-containerd-runc-k8s.io-8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035-runc.yiOwpG.mount: Deactivated successfully. Feb 9 19:22:02.230898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035-rootfs.mount: Deactivated successfully. Feb 9 19:22:02.238588 env[1063]: time="2024-02-09T19:22:02.238492571Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2536 runtime=io.containerd.runc.v2\n" Feb 9 19:22:02.995621 env[1063]: time="2024-02-09T19:22:02.990753202Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:22:03.046962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233807561.mount: Deactivated successfully. 
Feb 9 19:22:03.062572 env[1063]: time="2024-02-09T19:22:03.062444990Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\"" Feb 9 19:22:03.066690 env[1063]: time="2024-02-09T19:22:03.066647609Z" level=info msg="StartContainer for \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\"" Feb 9 19:22:03.089370 systemd[1]: Started cri-containerd-b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e.scope. Feb 9 19:22:03.124721 systemd[1]: cri-containerd-b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e.scope: Deactivated successfully. Feb 9 19:22:03.126102 env[1063]: time="2024-02-09T19:22:03.125766108Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eef21c9_6e16_4f08_b109_cd948d0b83be.slice/cri-containerd-b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e.scope/memory.events\": no such file or directory" Feb 9 19:22:03.133463 env[1063]: time="2024-02-09T19:22:03.133360414Z" level=info msg="StartContainer for \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\" returns successfully" Feb 9 19:22:03.175996 env[1063]: time="2024-02-09T19:22:03.175939227Z" level=info msg="shim disconnected" id=b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e Feb 9 19:22:03.176259 env[1063]: time="2024-02-09T19:22:03.176233372Z" level=warning msg="cleaning up after shim disconnected" id=b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e namespace=k8s.io Feb 9 19:22:03.176366 env[1063]: time="2024-02-09T19:22:03.176349932Z" level=info msg="cleaning up dead shim" Feb 9 19:22:03.186097 env[1063]: time="2024-02-09T19:22:03.186033529Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T19:22:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2595 runtime=io.containerd.runc.v2\n" Feb 9 19:22:03.233808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e-rootfs.mount: Deactivated successfully. Feb 9 19:22:03.999994 env[1063]: time="2024-02-09T19:22:03.999835446Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:22:04.073267 env[1063]: time="2024-02-09T19:22:04.073141340Z" level=info msg="CreateContainer within sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\"" Feb 9 19:22:04.076304 env[1063]: time="2024-02-09T19:22:04.075117243Z" level=info msg="StartContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\"" Feb 9 19:22:04.134125 systemd[1]: Started cri-containerd-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b.scope. Feb 9 19:22:04.175853 env[1063]: time="2024-02-09T19:22:04.175789605Z" level=info msg="StartContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" returns successfully" Feb 9 19:22:04.231343 systemd[1]: run-containerd-runc-k8s.io-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b-runc.ivR4Lo.mount: Deactivated successfully. 
Feb 9 19:22:04.337165 kubelet[1932]: I0209 19:22:04.336649 1932 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:22:04.394595 kubelet[1932]: I0209 19:22:04.394477 1932 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:22:04.408136 kubelet[1932]: I0209 19:22:04.408003 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r2xk\" (UniqueName: \"kubernetes.io/projected/281881f1-8ea2-457d-b608-43b9ce4c70ab-kube-api-access-9r2xk\") pod \"coredns-787d4945fb-zj85s\" (UID: \"281881f1-8ea2-457d-b608-43b9ce4c70ab\") " pod="kube-system/coredns-787d4945fb-zj85s" Feb 9 19:22:04.408740 kubelet[1932]: I0209 19:22:04.408700 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281881f1-8ea2-457d-b608-43b9ce4c70ab-config-volume\") pod \"coredns-787d4945fb-zj85s\" (UID: \"281881f1-8ea2-457d-b608-43b9ce4c70ab\") " pod="kube-system/coredns-787d4945fb-zj85s" Feb 9 19:22:04.410525 kubelet[1932]: I0209 19:22:04.410494 1932 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:22:04.411402 systemd[1]: Created slice kubepods-burstable-pod281881f1_8ea2_457d_b608_43b9ce4c70ab.slice. Feb 9 19:22:04.425210 systemd[1]: Created slice kubepods-burstable-pod85f54b85_dc25_4b04_9420_a695868bc022.slice. 
Feb 9 19:22:04.509275 kubelet[1932]: I0209 19:22:04.509212 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85f54b85-dc25-4b04-9420-a695868bc022-config-volume\") pod \"coredns-787d4945fb-s6xk5\" (UID: \"85f54b85-dc25-4b04-9420-a695868bc022\") " pod="kube-system/coredns-787d4945fb-s6xk5" Feb 9 19:22:04.509275 kubelet[1932]: I0209 19:22:04.509300 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmchm\" (UniqueName: \"kubernetes.io/projected/85f54b85-dc25-4b04-9420-a695868bc022-kube-api-access-wmchm\") pod \"coredns-787d4945fb-s6xk5\" (UID: \"85f54b85-dc25-4b04-9420-a695868bc022\") " pod="kube-system/coredns-787d4945fb-s6xk5" Feb 9 19:22:04.722055 env[1063]: time="2024-02-09T19:22:04.719922886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zj85s,Uid:281881f1-8ea2-457d-b608-43b9ce4c70ab,Namespace:kube-system,Attempt:0,}" Feb 9 19:22:04.729300 env[1063]: time="2024-02-09T19:22:04.729207468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-s6xk5,Uid:85f54b85-dc25-4b04-9420-a695868bc022,Namespace:kube-system,Attempt:0,}" Feb 9 19:22:05.034114 kubelet[1932]: I0209 19:22:05.034070 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-kh7jv" podStartSLOduration=-9.22337201382078e+09 pod.CreationTimestamp="2024-02-09 19:21:42 +0000 UTC" firstStartedPulling="2024-02-09 19:21:44.949324779 +0000 UTC m=+15.420406519" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:22:05.031817486 +0000 UTC m=+35.502899216" watchObservedRunningTime="2024-02-09 19:22:05.033996289 +0000 UTC m=+35.505078029" Feb 9 19:22:07.040765 systemd-networkd[978]: cilium_host: Link UP Feb 9 19:22:07.046254 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:22:07.047505 
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:22:07.043016 systemd-networkd[978]: cilium_net: Link UP Feb 9 19:22:07.045800 systemd-networkd[978]: cilium_net: Gained carrier Feb 9 19:22:07.047997 systemd-networkd[978]: cilium_host: Gained carrier Feb 9 19:22:07.122899 systemd-networkd[978]: cilium_net: Gained IPv6LL Feb 9 19:22:07.223613 systemd-networkd[978]: cilium_vxlan: Link UP Feb 9 19:22:07.223628 systemd-networkd[978]: cilium_vxlan: Gained carrier Feb 9 19:22:07.394900 systemd-networkd[978]: cilium_host: Gained IPv6LL Feb 9 19:22:08.276576 kernel: NET: Registered PF_ALG protocol family Feb 9 19:22:08.596710 systemd-networkd[978]: cilium_vxlan: Gained IPv6LL Feb 9 19:22:09.404721 systemd-networkd[978]: lxc_health: Link UP Feb 9 19:22:09.417394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:22:09.416749 systemd-networkd[978]: lxc_health: Gained carrier Feb 9 19:22:09.840104 systemd-networkd[978]: lxca64cc8c66aa7: Link UP Feb 9 19:22:09.845593 kernel: eth0: renamed from tmp9a096 Feb 9 19:22:09.860274 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca64cc8c66aa7: link becomes ready Feb 9 19:22:09.859876 systemd-networkd[978]: lxca64cc8c66aa7: Gained carrier Feb 9 19:22:09.869212 systemd-networkd[978]: lxcf9c09f857921: Link UP Feb 9 19:22:09.876575 kernel: eth0: renamed from tmpab09e Feb 9 19:22:09.886949 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf9c09f857921: link becomes ready Feb 9 19:22:09.886730 systemd-networkd[978]: lxcf9c09f857921: Gained carrier Feb 9 19:22:10.633636 systemd-networkd[978]: lxc_health: Gained IPv6LL Feb 9 19:22:11.258773 systemd-networkd[978]: lxcf9c09f857921: Gained IPv6LL Feb 9 19:22:11.322683 systemd-networkd[978]: lxca64cc8c66aa7: Gained IPv6LL Feb 9 19:22:14.398692 env[1063]: time="2024-02-09T19:22:14.398588948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:22:14.399324 env[1063]: time="2024-02-09T19:22:14.399296675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:22:14.399453 env[1063]: time="2024-02-09T19:22:14.399429120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:22:14.399784 env[1063]: time="2024-02-09T19:22:14.399753857Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba pid=3140 runtime=io.containerd.runc.v2 Feb 9 19:22:14.426935 systemd[1]: Started cri-containerd-9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba.scope. Feb 9 19:22:14.437792 systemd[1]: run-containerd-runc-k8s.io-9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba-runc.j5hZJ2.mount: Deactivated successfully. Feb 9 19:22:14.502191 env[1063]: time="2024-02-09T19:22:14.502128012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zj85s,Uid:281881f1-8ea2-457d-b608-43b9ce4c70ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba\"" Feb 9 19:22:14.512441 env[1063]: time="2024-02-09T19:22:14.512381729Z" level=info msg="CreateContainer within sandbox \"9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:22:14.539307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004051582.mount: Deactivated successfully. 
Feb 9 19:22:14.545196 env[1063]: time="2024-02-09T19:22:14.545152236Z" level=info msg="CreateContainer within sandbox \"9a096d0e7dad6708df0de3f24e7a572b5c87d27146bef17beec7ad72717916ba\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f38ff49d2f89a37b825973efdb05238de0b64684658a9183203f0b712e4045a9\"" Feb 9 19:22:14.546378 env[1063]: time="2024-02-09T19:22:14.546351115Z" level=info msg="StartContainer for \"f38ff49d2f89a37b825973efdb05238de0b64684658a9183203f0b712e4045a9\"" Feb 9 19:22:14.565069 systemd[1]: Started cri-containerd-f38ff49d2f89a37b825973efdb05238de0b64684658a9183203f0b712e4045a9.scope. Feb 9 19:22:14.740825 env[1063]: time="2024-02-09T19:22:14.740411779Z" level=info msg="StartContainer for \"f38ff49d2f89a37b825973efdb05238de0b64684658a9183203f0b712e4045a9\" returns successfully" Feb 9 19:22:14.832277 env[1063]: time="2024-02-09T19:22:14.832085145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:22:14.832648 env[1063]: time="2024-02-09T19:22:14.832296090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:22:14.832648 env[1063]: time="2024-02-09T19:22:14.832448100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:22:14.832990 env[1063]: time="2024-02-09T19:22:14.832864258Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab09ecd9864dbe235cafce94d4b5e1ca150919665315cde28b6a41d966bdcceb pid=3214 runtime=io.containerd.runc.v2 Feb 9 19:22:14.858024 systemd[1]: Started cri-containerd-ab09ecd9864dbe235cafce94d4b5e1ca150919665315cde28b6a41d966bdcceb.scope. 
Feb 9 19:22:14.949816 env[1063]: time="2024-02-09T19:22:14.949754158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-s6xk5,Uid:85f54b85-dc25-4b04-9420-a695868bc022,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab09ecd9864dbe235cafce94d4b5e1ca150919665315cde28b6a41d966bdcceb\"" Feb 9 19:22:14.955869 env[1063]: time="2024-02-09T19:22:14.955075957Z" level=info msg="CreateContainer within sandbox \"ab09ecd9864dbe235cafce94d4b5e1ca150919665315cde28b6a41d966bdcceb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:22:14.998479 env[1063]: time="2024-02-09T19:22:14.998299926Z" level=info msg="CreateContainer within sandbox \"ab09ecd9864dbe235cafce94d4b5e1ca150919665315cde28b6a41d966bdcceb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1a798dc5037a25532dad47119e17dac790989b41e880a4fa4c8ca571bcabc95\"" Feb 9 19:22:15.001335 env[1063]: time="2024-02-09T19:22:15.000951384Z" level=info msg="StartContainer for \"c1a798dc5037a25532dad47119e17dac790989b41e880a4fa4c8ca571bcabc95\"" Feb 9 19:22:15.055641 systemd[1]: Started cri-containerd-c1a798dc5037a25532dad47119e17dac790989b41e880a4fa4c8ca571bcabc95.scope. 
Feb 9 19:22:15.061349 kubelet[1932]: I0209 19:22:15.061174 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-zj85s" podStartSLOduration=32.061099977 pod.CreationTimestamp="2024-02-09 19:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:22:15.059325596 +0000 UTC m=+45.530407346" watchObservedRunningTime="2024-02-09 19:22:15.061099977 +0000 UTC m=+45.532181707" Feb 9 19:22:15.207916 env[1063]: time="2024-02-09T19:22:15.207806259Z" level=info msg="StartContainer for \"c1a798dc5037a25532dad47119e17dac790989b41e880a4fa4c8ca571bcabc95\" returns successfully" Feb 9 19:22:16.093905 kubelet[1932]: I0209 19:22:16.093836 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-s6xk5" podStartSLOduration=33.093729729 pod.CreationTimestamp="2024-02-09 19:21:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:22:16.067964704 +0000 UTC m=+46.539046485" watchObservedRunningTime="2024-02-09 19:22:16.093729729 +0000 UTC m=+46.564811509" Feb 9 19:22:39.388707 systemd[1]: Started sshd@5-172.24.4.140:22-172.24.4.1:41094.service. Feb 9 19:22:40.757816 sshd[3407]: Accepted publickey for core from 172.24.4.1 port 41094 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:22:40.763679 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:22:40.776223 systemd-logind[1051]: New session 6 of user core. Feb 9 19:22:40.778046 systemd[1]: Started session-6.scope. Feb 9 19:22:41.517115 sshd[3407]: pam_unix(sshd:session): session closed for user core Feb 9 19:22:41.524369 systemd-logind[1051]: Session 6 logged out. Waiting for processes to exit. 
Feb 9 19:22:41.525350 systemd[1]: sshd@5-172.24.4.140:22-172.24.4.1:41094.service: Deactivated successfully.
Feb 9 19:22:41.527441 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 19:22:41.529303 systemd-logind[1051]: Removed session 6.
Feb 9 19:22:46.530202 systemd[1]: Started sshd@6-172.24.4.140:22-172.24.4.1:49788.service.
Feb 9 19:22:47.743459 sshd[3424]: Accepted publickey for core from 172.24.4.1 port 49788 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:22:47.751579 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:22:47.767758 systemd-logind[1051]: New session 7 of user core.
Feb 9 19:22:47.771075 systemd[1]: Started session-7.scope.
Feb 9 19:22:49.312111 sshd[3424]: pam_unix(sshd:session): session closed for user core
Feb 9 19:22:49.467288 systemd[1]: sshd@6-172.24.4.140:22-172.24.4.1:49788.service: Deactivated successfully.
Feb 9 19:22:49.469484 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:22:49.471696 systemd-logind[1051]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:22:49.474831 systemd-logind[1051]: Removed session 7.
Feb 9 19:22:54.318212 systemd[1]: Started sshd@7-172.24.4.140:22-172.24.4.1:49790.service.
Feb 9 19:22:56.375781 sshd[3437]: Accepted publickey for core from 172.24.4.1 port 49790 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:22:56.379135 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:22:56.391432 systemd-logind[1051]: New session 8 of user core.
Feb 9 19:22:56.393095 systemd[1]: Started session-8.scope.
Feb 9 19:22:57.521515 sshd[3437]: pam_unix(sshd:session): session closed for user core
Feb 9 19:22:57.525962 systemd-logind[1051]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:22:57.527631 systemd[1]: sshd@7-172.24.4.140:22-172.24.4.1:49790.service: Deactivated successfully.
Feb 9 19:22:57.528664 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:22:57.530339 systemd-logind[1051]: Removed session 8.
Feb 9 19:23:02.533504 systemd[1]: Started sshd@8-172.24.4.140:22-172.24.4.1:35454.service.
Feb 9 19:23:03.937017 sshd[3450]: Accepted publickey for core from 172.24.4.1 port 35454 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:03.939868 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:03.950660 systemd-logind[1051]: New session 9 of user core.
Feb 9 19:23:03.952229 systemd[1]: Started session-9.scope.
Feb 9 19:23:04.726697 sshd[3450]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:04.733332 systemd[1]: sshd@8-172.24.4.140:22-172.24.4.1:35454.service: Deactivated successfully.
Feb 9 19:23:04.734126 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 19:23:04.737364 systemd-logind[1051]: Session 9 logged out. Waiting for processes to exit.
Feb 9 19:23:04.739730 systemd[1]: Started sshd@9-172.24.4.140:22-172.24.4.1:46276.service.
Feb 9 19:23:04.742147 systemd-logind[1051]: Removed session 9.
Feb 9 19:23:05.971869 sshd[3462]: Accepted publickey for core from 172.24.4.1 port 46276 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:05.974367 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:05.986108 systemd[1]: Started session-10.scope.
Feb 9 19:23:05.987301 systemd-logind[1051]: New session 10 of user core.
Feb 9 19:23:08.021153 sshd[3462]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:08.035392 systemd[1]: Started sshd@10-172.24.4.140:22-172.24.4.1:46282.service.
Feb 9 19:23:08.038358 systemd[1]: sshd@9-172.24.4.140:22-172.24.4.1:46276.service: Deactivated successfully.
Feb 9 19:23:08.042027 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 19:23:08.045059 systemd-logind[1051]: Session 10 logged out. Waiting for processes to exit.
Feb 9 19:23:08.049991 systemd-logind[1051]: Removed session 10.
Feb 9 19:23:09.415002 sshd[3471]: Accepted publickey for core from 172.24.4.1 port 46282 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:09.418346 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:09.429724 systemd-logind[1051]: New session 11 of user core.
Feb 9 19:23:09.430860 systemd[1]: Started session-11.scope.
Feb 9 19:23:10.195657 sshd[3471]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:10.204983 systemd-logind[1051]: Session 11 logged out. Waiting for processes to exit.
Feb 9 19:23:10.205507 systemd[1]: sshd@10-172.24.4.140:22-172.24.4.1:46282.service: Deactivated successfully.
Feb 9 19:23:10.207407 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 19:23:10.210097 systemd-logind[1051]: Removed session 11.
Feb 9 19:23:15.206162 systemd[1]: Started sshd@11-172.24.4.140:22-172.24.4.1:39400.service.
Feb 9 19:23:16.712471 sshd[3486]: Accepted publickey for core from 172.24.4.1 port 39400 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:16.716074 sshd[3486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:16.727176 systemd-logind[1051]: New session 12 of user core.
Feb 9 19:23:16.730317 systemd[1]: Started session-12.scope.
Feb 9 19:23:17.536732 sshd[3486]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:17.544165 systemd[1]: sshd@11-172.24.4.140:22-172.24.4.1:39400.service: Deactivated successfully.
Feb 9 19:23:17.545965 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 19:23:17.547831 systemd-logind[1051]: Session 12 logged out. Waiting for processes to exit.
Feb 9 19:23:17.550335 systemd-logind[1051]: Removed session 12.
Feb 9 19:23:22.548628 systemd[1]: Started sshd@12-172.24.4.140:22-172.24.4.1:39416.service.
Feb 9 19:23:24.184272 sshd[3498]: Accepted publickey for core from 172.24.4.1 port 39416 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:24.188054 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:24.198331 systemd-logind[1051]: New session 13 of user core.
Feb 9 19:23:24.199116 systemd[1]: Started session-13.scope.
Feb 9 19:23:25.088877 sshd[3498]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:25.099838 systemd[1]: Started sshd@13-172.24.4.140:22-172.24.4.1:43738.service.
Feb 9 19:23:25.102273 systemd[1]: sshd@12-172.24.4.140:22-172.24.4.1:39416.service: Deactivated successfully.
Feb 9 19:23:25.104111 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 19:23:25.107279 systemd-logind[1051]: Session 13 logged out. Waiting for processes to exit.
Feb 9 19:23:25.110193 systemd-logind[1051]: Removed session 13.
Feb 9 19:23:26.605334 sshd[3509]: Accepted publickey for core from 172.24.4.1 port 43738 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:26.608659 sshd[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:26.621167 systemd[1]: Started session-14.scope.
Feb 9 19:23:26.622079 systemd-logind[1051]: New session 14 of user core.
Feb 9 19:23:28.092617 sshd[3509]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:28.100631 systemd[1]: Started sshd@14-172.24.4.140:22-172.24.4.1:43742.service.
Feb 9 19:23:28.103855 systemd[1]: sshd@13-172.24.4.140:22-172.24.4.1:43738.service: Deactivated successfully.
Feb 9 19:23:28.106390 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 19:23:28.108611 systemd-logind[1051]: Session 14 logged out. Waiting for processes to exit.
Feb 9 19:23:28.112892 systemd-logind[1051]: Removed session 14.
Feb 9 19:23:29.570152 sshd[3519]: Accepted publickey for core from 172.24.4.1 port 43742 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:29.573840 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:29.584677 systemd-logind[1051]: New session 15 of user core.
Feb 9 19:23:29.587019 systemd[1]: Started session-15.scope.
Feb 9 19:23:32.063803 sshd[3519]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:32.072983 systemd[1]: Started sshd@15-172.24.4.140:22-172.24.4.1:43754.service.
Feb 9 19:23:32.085395 systemd[1]: sshd@14-172.24.4.140:22-172.24.4.1:43742.service: Deactivated successfully.
Feb 9 19:23:32.087682 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 19:23:32.088269 systemd[1]: session-15.scope: Consumed 1.027s CPU time.
Feb 9 19:23:32.094213 systemd-logind[1051]: Session 15 logged out. Waiting for processes to exit.
Feb 9 19:23:32.097487 systemd-logind[1051]: Removed session 15.
Feb 9 19:23:33.518904 sshd[3586]: Accepted publickey for core from 172.24.4.1 port 43754 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:33.522114 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:33.534118 systemd-logind[1051]: New session 16 of user core.
Feb 9 19:23:33.535216 systemd[1]: Started session-16.scope.
Feb 9 19:23:35.136746 sshd[3586]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:35.146265 systemd[1]: Started sshd@16-172.24.4.140:22-172.24.4.1:54254.service.
Feb 9 19:23:35.160621 systemd[1]: sshd@15-172.24.4.140:22-172.24.4.1:43754.service: Deactivated successfully.
Feb 9 19:23:35.162342 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 19:23:35.165005 systemd-logind[1051]: Session 16 logged out. Waiting for processes to exit.
Feb 9 19:23:35.167298 systemd-logind[1051]: Removed session 16.
Feb 9 19:23:36.745343 sshd[3618]: Accepted publickey for core from 172.24.4.1 port 54254 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:36.748326 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:36.760706 systemd[1]: Started session-17.scope.
Feb 9 19:23:36.762176 systemd-logind[1051]: New session 17 of user core.
Feb 9 19:23:37.486001 sshd[3618]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:37.493847 systemd[1]: sshd@16-172.24.4.140:22-172.24.4.1:54254.service: Deactivated successfully.
Feb 9 19:23:37.496013 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 19:23:37.498651 systemd-logind[1051]: Session 17 logged out. Waiting for processes to exit.
Feb 9 19:23:37.501813 systemd-logind[1051]: Removed session 17.
Feb 9 19:23:42.498158 systemd[1]: Started sshd@17-172.24.4.140:22-172.24.4.1:54260.service.
Feb 9 19:23:43.727877 sshd[3659]: Accepted publickey for core from 172.24.4.1 port 54260 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:43.730912 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:43.742320 systemd-logind[1051]: New session 18 of user core.
Feb 9 19:23:43.743193 systemd[1]: Started session-18.scope.
Feb 9 19:23:44.599290 sshd[3659]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:44.605753 systemd[1]: sshd@17-172.24.4.140:22-172.24.4.1:54260.service: Deactivated successfully.
Feb 9 19:23:44.607523 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 19:23:44.608960 systemd-logind[1051]: Session 18 logged out. Waiting for processes to exit.
Feb 9 19:23:44.610824 systemd-logind[1051]: Removed session 18.
Feb 9 19:23:49.611333 systemd[1]: Started sshd@18-172.24.4.140:22-172.24.4.1:47316.service.
Feb 9 19:23:51.129174 sshd[3673]: Accepted publickey for core from 172.24.4.1 port 47316 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:51.131815 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:51.143068 systemd-logind[1051]: New session 19 of user core.
Feb 9 19:23:51.143941 systemd[1]: Started session-19.scope.
Feb 9 19:23:51.963900 sshd[3673]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:51.970433 systemd[1]: sshd@18-172.24.4.140:22-172.24.4.1:47316.service: Deactivated successfully.
Feb 9 19:23:51.972208 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 19:23:51.973861 systemd-logind[1051]: Session 19 logged out. Waiting for processes to exit.
Feb 9 19:23:51.976999 systemd-logind[1051]: Removed session 19.
Feb 9 19:23:56.975487 systemd[1]: Started sshd@19-172.24.4.140:22-172.24.4.1:36014.service.
Feb 9 19:23:58.258716 sshd[3685]: Accepted publickey for core from 172.24.4.1 port 36014 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:23:58.261279 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:23:58.272466 systemd-logind[1051]: New session 20 of user core.
Feb 9 19:23:58.273331 systemd[1]: Started session-20.scope.
Feb 9 19:23:59.206347 sshd[3685]: pam_unix(sshd:session): session closed for user core
Feb 9 19:23:59.215351 systemd[1]: Started sshd@20-172.24.4.140:22-172.24.4.1:36022.service.
Feb 9 19:23:59.216679 systemd[1]: sshd@19-172.24.4.140:22-172.24.4.1:36014.service: Deactivated successfully.
Feb 9 19:23:59.219261 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 19:23:59.226046 systemd-logind[1051]: Session 20 logged out. Waiting for processes to exit.
Feb 9 19:23:59.229099 systemd-logind[1051]: Removed session 20.
Feb 9 19:24:00.948442 sshd[3696]: Accepted publickey for core from 172.24.4.1 port 36022 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM
Feb 9 19:24:00.952378 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:24:00.968482 systemd-logind[1051]: New session 21 of user core.
Feb 9 19:24:00.970424 systemd[1]: Started session-21.scope.
Feb 9 19:24:03.130106 systemd[1]: run-containerd-runc-k8s.io-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b-runc.cvTYyf.mount: Deactivated successfully.
Feb 9 19:24:03.133007 env[1063]: time="2024-02-09T19:24:03.132950285Z" level=info msg="StopContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" with timeout 30 (s)"
Feb 9 19:24:03.133758 env[1063]: time="2024-02-09T19:24:03.133715498Z" level=info msg="Stop container \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" with signal terminated"
Feb 9 19:24:03.169799 systemd[1]: cri-containerd-615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac.scope: Deactivated successfully.
Feb 9 19:24:03.181310 env[1063]: time="2024-02-09T19:24:03.181188461Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:24:03.202174 env[1063]: time="2024-02-09T19:24:03.202118703Z" level=info msg="StopContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" with timeout 1 (s)"
Feb 9 19:24:03.203017 env[1063]: time="2024-02-09T19:24:03.202965324Z" level=info msg="Stop container \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" with signal terminated"
Feb 9 19:24:03.207619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac-rootfs.mount: Deactivated successfully.
Feb 9 19:24:03.216313 systemd-networkd[978]: lxc_health: Link DOWN
Feb 9 19:24:03.216322 systemd-networkd[978]: lxc_health: Lost carrier
Feb 9 19:24:03.222729 env[1063]: time="2024-02-09T19:24:03.222677244Z" level=info msg="shim disconnected" id=615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac
Feb 9 19:24:03.222896 env[1063]: time="2024-02-09T19:24:03.222748913Z" level=warning msg="cleaning up after shim disconnected" id=615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac namespace=k8s.io
Feb 9 19:24:03.222896 env[1063]: time="2024-02-09T19:24:03.222763892Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:03.246337 env[1063]: time="2024-02-09T19:24:03.246272883Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:03.257318 env[1063]: time="2024-02-09T19:24:03.257261133Z" level=info msg="StopContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" returns successfully"
Feb 9 19:24:03.260944 env[1063]: time="2024-02-09T19:24:03.258260119Z" level=info msg="StopPodSandbox for \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\""
Feb 9 19:24:03.260944 env[1063]: time="2024-02-09T19:24:03.258347128Z" level=info msg="Container to stop \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.260260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb-shm.mount: Deactivated successfully.
Feb 9 19:24:03.265003 systemd[1]: cri-containerd-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b.scope: Deactivated successfully.
Feb 9 19:24:03.265307 systemd[1]: cri-containerd-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b.scope: Consumed 9.637s CPU time.
Feb 9 19:24:03.276019 systemd[1]: cri-containerd-c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb.scope: Deactivated successfully.
Feb 9 19:24:03.327324 env[1063]: time="2024-02-09T19:24:03.327248167Z" level=info msg="shim disconnected" id=d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b
Feb 9 19:24:03.327324 env[1063]: time="2024-02-09T19:24:03.327308615Z" level=warning msg="cleaning up after shim disconnected" id=d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b namespace=k8s.io
Feb 9 19:24:03.327324 env[1063]: time="2024-02-09T19:24:03.327325497Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:03.328388 env[1063]: time="2024-02-09T19:24:03.328334814Z" level=info msg="shim disconnected" id=c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb
Feb 9 19:24:03.328489 env[1063]: time="2024-02-09T19:24:03.328469415Z" level=warning msg="cleaning up after shim disconnected" id=c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb namespace=k8s.io
Feb 9 19:24:03.328663 env[1063]: time="2024-02-09T19:24:03.328645136Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:03.337591 env[1063]: time="2024-02-09T19:24:03.337514438Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:03.340597 env[1063]: time="2024-02-09T19:24:03.340511819Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3794 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:03.341123 env[1063]: time="2024-02-09T19:24:03.341091812Z" level=info msg="TearDown network for sandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" successfully"
Feb 9 19:24:03.341226 env[1063]: time="2024-02-09T19:24:03.341205322Z" level=info msg="StopPodSandbox for \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" returns successfully"
Feb 9 19:24:03.343285 env[1063]: time="2024-02-09T19:24:03.343073173Z" level=info msg="StopContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" returns successfully"
Feb 9 19:24:03.344264 env[1063]: time="2024-02-09T19:24:03.344165952Z" level=info msg="StopPodSandbox for \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\""
Feb 9 19:24:03.345140 env[1063]: time="2024-02-09T19:24:03.344498186Z" level=info msg="Container to stop \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.345140 env[1063]: time="2024-02-09T19:24:03.344530558Z" level=info msg="Container to stop \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.345140 env[1063]: time="2024-02-09T19:24:03.344583140Z" level=info msg="Container to stop \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.345140 env[1063]: time="2024-02-09T19:24:03.344598370Z" level=info msg="Container to stop \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.345140 env[1063]: time="2024-02-09T19:24:03.344613389Z" level=info msg="Container to stop \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:24:03.355171 systemd[1]: cri-containerd-318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448.scope: Deactivated successfully.
Feb 9 19:24:03.415571 env[1063]: time="2024-02-09T19:24:03.413118160Z" level=info msg="shim disconnected" id=318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448
Feb 9 19:24:03.415571 env[1063]: time="2024-02-09T19:24:03.413200110Z" level=warning msg="cleaning up after shim disconnected" id=318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448 namespace=k8s.io
Feb 9 19:24:03.415571 env[1063]: time="2024-02-09T19:24:03.413213154Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:03.424748 env[1063]: time="2024-02-09T19:24:03.424705001Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:03.425285 env[1063]: time="2024-02-09T19:24:03.425255278Z" level=info msg="TearDown network for sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" successfully"
Feb 9 19:24:03.425399 env[1063]: time="2024-02-09T19:24:03.425377885Z" level=info msg="StopPodSandbox for \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" returns successfully"
Feb 9 19:24:03.454728 kubelet[1932]: I0209 19:24:03.454674 1932 scope.go:115] "RemoveContainer" containerID="615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac"
Feb 9 19:24:03.455573 kubelet[1932]: I0209 19:24:03.455513 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c98c8dec-98fe-4f8b-8f38-5e48fb453207-cilium-config-path\") pod \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\" (UID: \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\") "
Feb 9 19:24:03.455701 kubelet[1932]: I0209 19:24:03.455605 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9xqh\" (UniqueName: \"kubernetes.io/projected/c98c8dec-98fe-4f8b-8f38-5e48fb453207-kube-api-access-p9xqh\") pod \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\" (UID: \"c98c8dec-98fe-4f8b-8f38-5e48fb453207\") "
Feb 9 19:24:03.458443 kubelet[1932]: W0209 19:24:03.457499 1932 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c98c8dec-98fe-4f8b-8f38-5e48fb453207/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 9 19:24:03.461460 kubelet[1932]: I0209 19:24:03.460185 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98c8dec-98fe-4f8b-8f38-5e48fb453207-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c98c8dec-98fe-4f8b-8f38-5e48fb453207" (UID: "c98c8dec-98fe-4f8b-8f38-5e48fb453207"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:24:03.467117 kubelet[1932]: I0209 19:24:03.466508 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98c8dec-98fe-4f8b-8f38-5e48fb453207-kube-api-access-p9xqh" (OuterVolumeSpecName: "kube-api-access-p9xqh") pod "c98c8dec-98fe-4f8b-8f38-5e48fb453207" (UID: "c98c8dec-98fe-4f8b-8f38-5e48fb453207"). InnerVolumeSpecName "kube-api-access-p9xqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:24:03.469834 env[1063]: time="2024-02-09T19:24:03.469795476Z" level=info msg="RemoveContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\""
Feb 9 19:24:03.491412 env[1063]: time="2024-02-09T19:24:03.491352853Z" level=info msg="RemoveContainer for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" returns successfully"
Feb 9 19:24:03.495678 kubelet[1932]: I0209 19:24:03.494993 1932 scope.go:115] "RemoveContainer" containerID="615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac"
Feb 9 19:24:03.495835 env[1063]: time="2024-02-09T19:24:03.495380921Z" level=error msg="ContainerStatus for \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\": not found"
Feb 9 19:24:03.496655 kubelet[1932]: E0209 19:24:03.496620 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\": not found" containerID="615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac"
Feb 9 19:24:03.504574 kubelet[1932]: I0209 19:24:03.504507 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac} err="failed to get container status \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\": rpc error: code = NotFound desc = an error occurred when try to find container \"615ec6b7c4fe7fe93d7ae653be792d21e0037b9d0b1f58c04694cac979aaccac\": not found"
Feb 9 19:24:03.504574 kubelet[1932]: I0209 19:24:03.504570 1932 scope.go:115] "RemoveContainer" containerID="d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b"
Feb 9 19:24:03.506661 env[1063]: time="2024-02-09T19:24:03.506625859Z" level=info msg="RemoveContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\""
Feb 9 19:24:03.516076 env[1063]: time="2024-02-09T19:24:03.516045468Z" level=info msg="RemoveContainer for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" returns successfully"
Feb 9 19:24:03.516449 kubelet[1932]: I0209 19:24:03.516400 1932 scope.go:115] "RemoveContainer" containerID="b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e"
Feb 9 19:24:03.517759 env[1063]: time="2024-02-09T19:24:03.517701799Z" level=info msg="RemoveContainer for \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\""
Feb 9 19:24:03.521958 env[1063]: time="2024-02-09T19:24:03.521917370Z" level=info msg="RemoveContainer for \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\" returns successfully"
Feb 9 19:24:03.522272 kubelet[1932]: I0209 19:24:03.522175 1932 scope.go:115] "RemoveContainer" containerID="8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035"
Feb 9 19:24:03.523300 env[1063]: time="2024-02-09T19:24:03.523275053Z" level=info msg="RemoveContainer for \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\""
Feb 9 19:24:03.527319 env[1063]: time="2024-02-09T19:24:03.527293592Z" level=info msg="RemoveContainer for \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\" returns successfully"
Feb 9 19:24:03.527632 kubelet[1932]: I0209 19:24:03.527531 1932 scope.go:115] "RemoveContainer" containerID="0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d"
Feb 9 19:24:03.528642 env[1063]: time="2024-02-09T19:24:03.528618791Z" level=info msg="RemoveContainer for \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\""
Feb 9 19:24:03.532173 env[1063]: time="2024-02-09T19:24:03.532145307Z" level=info msg="RemoveContainer for \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\" returns successfully"
Feb 9 19:24:03.532525 kubelet[1932]: I0209 19:24:03.532395 1932 scope.go:115] "RemoveContainer" containerID="d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563"
Feb 9 19:24:03.533667 env[1063]: time="2024-02-09T19:24:03.533632520Z" level=info msg="RemoveContainer for \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\""
Feb 9 19:24:03.537935 env[1063]: time="2024-02-09T19:24:03.537894962Z" level=info msg="RemoveContainer for \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\" returns successfully"
Feb 9 19:24:03.538244 kubelet[1932]: I0209 19:24:03.538164 1932 scope.go:115] "RemoveContainer" containerID="d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b"
Feb 9 19:24:03.538508 env[1063]: time="2024-02-09T19:24:03.538448155Z" level=error msg="ContainerStatus for \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\": not found"
Feb 9 19:24:03.538983 kubelet[1932]: E0209 19:24:03.538844 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\": not found" containerID="d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b"
Feb 9 19:24:03.538983 kubelet[1932]: I0209 19:24:03.538899 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b} err="failed to get container status \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b\": not found"
Feb 9 19:24:03.538983 kubelet[1932]: I0209 19:24:03.538911 1932 scope.go:115] "RemoveContainer" containerID="b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e"
Feb 9 19:24:03.539169 env[1063]: time="2024-02-09T19:24:03.539091672Z" level=error msg="ContainerStatus for \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\": not found"
Feb 9 19:24:03.539467 kubelet[1932]: E0209 19:24:03.539302 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\": not found" containerID="b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e"
Feb 9 19:24:03.539467 kubelet[1932]: I0209 19:24:03.539377 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e} err="failed to get container status \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2af5bd59e6359a7b5268947e9c5ad518aba3230558d0b826dcf71e38431a93e\": not found"
Feb 9 19:24:03.539467 kubelet[1932]: I0209 19:24:03.539390 1932 scope.go:115] "RemoveContainer" containerID="8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035"
Feb 9 19:24:03.539843 env[1063]: time="2024-02-09T19:24:03.539767141Z" level=error msg="ContainerStatus for \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\": not found"
Feb 9 19:24:03.541132 env[1063]: time="2024-02-09T19:24:03.540200201Z" level=error msg="ContainerStatus for \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\": not found"
Feb 9 19:24:03.541132 env[1063]: time="2024-02-09T19:24:03.540750578Z" level=error msg="ContainerStatus for \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\": not found"
Feb 9 19:24:03.541207 kubelet[1932]: E0209 19:24:03.540008 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\": not found" containerID="8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035"
Feb 9 19:24:03.541207 kubelet[1932]: I0209 19:24:03.540059 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035} err="failed to get container status \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e934d73b566ab41fc39c1ecd9111559ecd10f031a6e70c00b2d6daeb06e9035\": not found"
Feb 9 19:24:03.541207 kubelet[1932]: I0209 19:24:03.540071 1932 scope.go:115] "RemoveContainer" containerID="0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d"
Feb 9 19:24:03.541207 kubelet[1932]: E0209 19:24:03.540365 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\": not found" containerID="0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d"
Feb 9 19:24:03.541207 kubelet[1932]: I0209 19:24:03.540418 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d} err="failed to get container status \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ed4bbf7cbc581e1d2a4563d23a9f9b6538ee9e46f95ca231f28ccc0445b764d\": not found"
Feb 9 19:24:03.541207 kubelet[1932]: I0209 19:24:03.540432 1932 scope.go:115] "RemoveContainer" containerID="d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563"
Feb 9 19:24:03.541386 kubelet[1932]: E0209 19:24:03.540927 1932 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\": not found" containerID="d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563"
Feb 9 19:24:03.541386 kubelet[1932]: I0209 19:24:03.540973 1932 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563} err="failed to get container status \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\": rpc error: code = NotFound desc = an error occurred when try to find container \"d768ccde8663467658db12080b3ab238c56bd1b0f5e58aff578a1d584dc34563\": not found"
Feb 9 19:24:03.556696 kubelet[1932]: I0209 19:24:03.556445 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-lib-modules\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID:
\"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.556696 kubelet[1932]: I0209 19:24:03.556588 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-net\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.556696 kubelet[1932]: I0209 19:24:03.556582 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.556696 kubelet[1932]: I0209 19:24:03.556655 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.556696 kubelet[1932]: I0209 19:24:03.556639 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-etc-cni-netd\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557001 kubelet[1932]: I0209 19:24:03.556830 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-config-path\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557001 kubelet[1932]: I0209 19:24:03.556921 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557001 kubelet[1932]: I0209 19:24:03.556993 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-hostproc\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557166 kubelet[1932]: I0209 19:24:03.557083 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p4fw\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-kube-api-access-2p4fw\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557166 kubelet[1932]: I0209 19:24:03.557161 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cni-path\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557226 kubelet[1932]: I0209 19:24:03.557211 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-xtables-lock\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557567 kubelet[1932]: I0209 19:24:03.557285 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-kernel\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557567 kubelet[1932]: I0209 19:24:03.557311 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-hostproc" (OuterVolumeSpecName: "hostproc") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.557567 kubelet[1932]: I0209 19:24:03.557346 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.557567 kubelet[1932]: I0209 19:24:03.557389 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-run\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557567 kubelet[1932]: I0209 19:24:03.557482 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-cgroup\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557842 kubelet[1932]: I0209 19:24:03.557580 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.557842 kubelet[1932]: I0209 19:24:03.557624 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-bpf-maps\") pod \"6eef21c9-6e16-4f08-b109-cd948d0b83be\" (UID: \"6eef21c9-6e16-4f08-b109-cd948d0b83be\") " Feb 9 19:24:03.558738 kubelet[1932]: W0209 19:24:03.557525 1932 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/6eef21c9-6e16-4f08-b109-cd948d0b83be/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:24:03.558738 kubelet[1932]: I0209 19:24:03.558476 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c98c8dec-98fe-4f8b-8f38-5e48fb453207-cilium-config-path\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 
9 19:24:03.558738 kubelet[1932]: I0209 19:24:03.558585 1932 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-lib-modules\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.558738 kubelet[1932]: I0209 19:24:03.558621 1932 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-p9xqh\" (UniqueName: \"kubernetes.io/projected/c98c8dec-98fe-4f8b-8f38-5e48fb453207-kube-api-access-p9xqh\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.558738 kubelet[1932]: I0209 19:24:03.558675 1932 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-net\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.558738 kubelet[1932]: I0209 19:24:03.558701 1932 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-etc-cni-netd\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.558928 kubelet[1932]: I0209 19:24:03.558762 1932 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-hostproc\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.558928 kubelet[1932]: I0209 19:24:03.558808 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.559941 kubelet[1932]: I0209 19:24:03.559902 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cni-path" (OuterVolumeSpecName: "cni-path") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.559999 kubelet[1932]: I0209 19:24:03.559977 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.560035 kubelet[1932]: I0209 19:24:03.560013 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.560074 kubelet[1932]: I0209 19:24:03.560048 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.560108 kubelet[1932]: I0209 19:24:03.560077 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:03.561011 kubelet[1932]: I0209 19:24:03.560990 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:24:03.566324 kubelet[1932]: I0209 19:24:03.566105 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-kube-api-access-2p4fw" (OuterVolumeSpecName: "kube-api-access-2p4fw") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "kube-api-access-2p4fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:03.566840 kubelet[1932]: I0209 19:24:03.566788 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:03.570917 kubelet[1932]: I0209 19:24:03.570882 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6eef21c9-6e16-4f08-b109-cd948d0b83be" (UID: "6eef21c9-6e16-4f08-b109-cd948d0b83be"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:03.660040 kubelet[1932]: I0209 19:24:03.659984 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-config-path\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.660469 kubelet[1932]: I0209 19:24:03.660438 1932 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6eef21c9-6e16-4f08-b109-cd948d0b83be-clustermesh-secrets\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.660750 kubelet[1932]: I0209 19:24:03.660722 1932 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-xtables-lock\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.660949 kubelet[1932]: I0209 19:24:03.660925 1932 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2p4fw\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-kube-api-access-2p4fw\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.661120 kubelet[1932]: I0209 19:24:03.661097 1932 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cni-path\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 
19:24:03.661287 kubelet[1932]: I0209 19:24:03.661265 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-cgroup\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.661455 kubelet[1932]: I0209 19:24:03.661433 1932 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6eef21c9-6e16-4f08-b109-cd948d0b83be-hubble-tls\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.661678 kubelet[1932]: I0209 19:24:03.661652 1932 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-host-proc-sys-kernel\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.661864 kubelet[1932]: I0209 19:24:03.661841 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-cilium-run\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.662039 kubelet[1932]: I0209 19:24:03.662017 1932 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6eef21c9-6e16-4f08-b109-cd948d0b83be-bpf-maps\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\"" Feb 9 19:24:03.766514 systemd[1]: Removed slice kubepods-besteffort-podc98c8dec_98fe_4f8b_8f38_5e48fb453207.slice. Feb 9 19:24:03.797014 systemd[1]: Removed slice kubepods-burstable-pod6eef21c9_6e16_4f08_b109_cd948d0b83be.slice. Feb 9 19:24:03.797248 systemd[1]: kubepods-burstable-pod6eef21c9_6e16_4f08_b109_cd948d0b83be.slice: Consumed 9.804s CPU time. 
Feb 9 19:24:03.844864 kubelet[1932]: I0209 19:24:03.844837 1932 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c98c8dec-98fe-4f8b-8f38-5e48fb453207 path="/var/lib/kubelet/pods/c98c8dec-98fe-4f8b-8f38-5e48fb453207/volumes" Feb 9 19:24:04.124489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4a0df02ac462df5e29605fe08e4d63a6fe8058394e3aef2e5b68a1f49497f6b-rootfs.mount: Deactivated successfully. Feb 9 19:24:04.124782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448-rootfs.mount: Deactivated successfully. Feb 9 19:24:04.124969 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448-shm.mount: Deactivated successfully. Feb 9 19:24:04.125147 systemd[1]: var-lib-kubelet-pods-6eef21c9\x2d6e16\x2d4f08\x2db109\x2dcd948d0b83be-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:24:04.125320 systemd[1]: var-lib-kubelet-pods-6eef21c9\x2d6e16\x2d4f08\x2db109\x2dcd948d0b83be-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:24:04.125476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb-rootfs.mount: Deactivated successfully. Feb 9 19:24:04.125661 systemd[1]: var-lib-kubelet-pods-c98c8dec\x2d98fe\x2d4f8b\x2d8f38\x2d5e48fb453207-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9xqh.mount: Deactivated successfully. Feb 9 19:24:04.125829 systemd[1]: var-lib-kubelet-pods-6eef21c9\x2d6e16\x2d4f08\x2db109\x2dcd948d0b83be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2p4fw.mount: Deactivated successfully. 
Feb 9 19:24:04.944617 kubelet[1932]: E0209 19:24:04.944528 1932 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:24:05.239108 sshd[3696]: pam_unix(sshd:session): session closed for user core Feb 9 19:24:05.249288 systemd[1]: Started sshd@21-172.24.4.140:22-172.24.4.1:45064.service. Feb 9 19:24:05.252865 systemd[1]: sshd@20-172.24.4.140:22-172.24.4.1:36022.service: Deactivated successfully. Feb 9 19:24:05.255041 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:24:05.257000 systemd-logind[1051]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:24:05.263524 systemd-logind[1051]: Removed session 21. Feb 9 19:24:05.830065 kubelet[1932]: I0209 19:24:05.830027 1932 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=6eef21c9-6e16-4f08-b109-cd948d0b83be path="/var/lib/kubelet/pods/6eef21c9-6e16-4f08-b109-cd948d0b83be/volumes" Feb 9 19:24:06.442184 sshd[3858]: Accepted publickey for core from 172.24.4.1 port 45064 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:24:06.445128 sshd[3858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:24:06.457334 systemd-logind[1051]: New session 22 of user core. Feb 9 19:24:06.459999 systemd[1]: Started session-22.scope. 
Feb 9 19:24:07.707914 kubelet[1932]: I0209 19:24:07.707857 1932 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:24:07.707914 kubelet[1932]: E0209 19:24:07.707940 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c98c8dec-98fe-4f8b-8f38-5e48fb453207" containerName="cilium-operator" Feb 9 19:24:07.708450 kubelet[1932]: E0209 19:24:07.707952 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="mount-cgroup" Feb 9 19:24:07.708450 kubelet[1932]: E0209 19:24:07.707966 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="apply-sysctl-overwrites" Feb 9 19:24:07.708450 kubelet[1932]: E0209 19:24:07.707974 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="mount-bpf-fs" Feb 9 19:24:07.708450 kubelet[1932]: E0209 19:24:07.707985 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="clean-cilium-state" Feb 9 19:24:07.708450 kubelet[1932]: E0209 19:24:07.707993 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="cilium-agent" Feb 9 19:24:07.708450 kubelet[1932]: I0209 19:24:07.708022 1932 memory_manager.go:346] "RemoveStaleState removing state" podUID="c98c8dec-98fe-4f8b-8f38-5e48fb453207" containerName="cilium-operator" Feb 9 19:24:07.708450 kubelet[1932]: I0209 19:24:07.708029 1932 memory_manager.go:346] "RemoveStaleState removing state" podUID="6eef21c9-6e16-4f08-b109-cd948d0b83be" containerName="cilium-agent" Feb 9 19:24:07.721995 systemd[1]: Created slice kubepods-burstable-pod4afbe729_9ee8_44e2_af17_8314ada1ebcc.slice. 
Feb 9 19:24:07.795292 kubelet[1932]: I0209 19:24:07.795261 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-cgroup\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796308 kubelet[1932]: I0209 19:24:07.796281 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-xtables-lock\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796397 kubelet[1932]: I0209 19:24:07.796328 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-run\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796397 kubelet[1932]: I0209 19:24:07.796364 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hostproc\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796397 kubelet[1932]: I0209 19:24:07.796389 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-clustermesh-secrets\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796502 kubelet[1932]: I0209 19:24:07.796416 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-lib-modules\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796502 kubelet[1932]: I0209 19:24:07.796447 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-ipsec-secrets\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796502 kubelet[1932]: I0209 19:24:07.796473 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-kernel\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796502 kubelet[1932]: I0209 19:24:07.796499 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-config-path\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796637 kubelet[1932]: I0209 19:24:07.796522 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cni-path\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796637 kubelet[1932]: I0209 19:24:07.796564 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m87sb\" (UniqueName: 
\"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-kube-api-access-m87sb\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796637 kubelet[1932]: I0209 19:24:07.796591 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-bpf-maps\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796637 kubelet[1932]: I0209 19:24:07.796614 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-etc-cni-netd\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796637 kubelet[1932]: I0209 19:24:07.796637 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-net\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.796775 kubelet[1932]: I0209 19:24:07.796660 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hubble-tls\") pod \"cilium-4qjht\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " pod="kube-system/cilium-4qjht" Feb 9 19:24:07.868474 sshd[3858]: pam_unix(sshd:session): session closed for user core Feb 9 19:24:07.875955 systemd[1]: Started sshd@22-172.24.4.140:22-172.24.4.1:45072.service. Feb 9 19:24:07.877165 systemd[1]: sshd@21-172.24.4.140:22-172.24.4.1:45064.service: Deactivated successfully. 
Feb 9 19:24:07.881444 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:24:07.893517 systemd-logind[1051]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:24:07.901050 systemd-logind[1051]: Removed session 22. Feb 9 19:24:08.027512 env[1063]: time="2024-02-09T19:24:08.026911323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qjht,Uid:4afbe729-9ee8-44e2-af17-8314ada1ebcc,Namespace:kube-system,Attempt:0,}" Feb 9 19:24:08.067889 env[1063]: time="2024-02-09T19:24:08.067716353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:24:08.068172 env[1063]: time="2024-02-09T19:24:08.067929195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:24:08.068172 env[1063]: time="2024-02-09T19:24:08.068060981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:24:08.068509 env[1063]: time="2024-02-09T19:24:08.068414887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb pid=3882 runtime=io.containerd.runc.v2 Feb 9 19:24:08.101979 systemd[1]: Started cri-containerd-723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb.scope. 
Feb 9 19:24:08.147801 env[1063]: time="2024-02-09T19:24:08.147726137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qjht,Uid:4afbe729-9ee8-44e2-af17-8314ada1ebcc,Namespace:kube-system,Attempt:0,} returns sandbox id \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\"" Feb 9 19:24:08.152597 env[1063]: time="2024-02-09T19:24:08.152445580Z" level=info msg="CreateContainer within sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:24:08.170732 env[1063]: time="2024-02-09T19:24:08.170665933Z" level=info msg="CreateContainer within sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" Feb 9 19:24:08.173614 env[1063]: time="2024-02-09T19:24:08.172739844Z" level=info msg="StartContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" Feb 9 19:24:08.197940 systemd[1]: Started cri-containerd-6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe.scope. Feb 9 19:24:08.210770 systemd[1]: cri-containerd-6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe.scope: Deactivated successfully. 
Feb 9 19:24:08.281566 env[1063]: time="2024-02-09T19:24:08.281471658Z" level=info msg="shim disconnected" id=6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe Feb 9 19:24:08.281860 env[1063]: time="2024-02-09T19:24:08.281579247Z" level=warning msg="cleaning up after shim disconnected" id=6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe namespace=k8s.io Feb 9 19:24:08.281860 env[1063]: time="2024-02-09T19:24:08.281595478Z" level=info msg="cleaning up dead shim" Feb 9 19:24:08.293016 env[1063]: time="2024-02-09T19:24:08.292934968Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3942 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:24:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:24:08.293414 env[1063]: time="2024-02-09T19:24:08.293287902Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Feb 9 19:24:08.294672 env[1063]: time="2024-02-09T19:24:08.294616108Z" level=error msg="Failed to pipe stdout of container \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" error="reading from a closed fifo" Feb 9 19:24:08.295084 env[1063]: time="2024-02-09T19:24:08.294778552Z" level=error msg="Failed to pipe stderr of container \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" error="reading from a closed fifo" Feb 9 19:24:08.298355 env[1063]: time="2024-02-09T19:24:08.298244735Z" level=error msg="StartContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:24:08.299320 kubelet[1932]: E0209 19:24:08.299238 1932 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe" Feb 9 19:24:08.302637 kubelet[1932]: E0209 19:24:08.302585 1932 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:24:08.302637 kubelet[1932]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:24:08.302637 kubelet[1932]: rm /hostbin/cilium-mount Feb 9 19:24:08.302637 kubelet[1932]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m87sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4qjht_kube-system(4afbe729-9ee8-44e2-af17-8314ada1ebcc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:24:08.306630 kubelet[1932]: E0209 19:24:08.306525 1932 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4qjht" podUID=4afbe729-9ee8-44e2-af17-8314ada1ebcc Feb 9 19:24:08.506439 env[1063]: time="2024-02-09T19:24:08.506083847Z" level=info msg="CreateContainer within sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:24:08.536785 env[1063]: time="2024-02-09T19:24:08.536435024Z" level=info msg="CreateContainer within sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\"" Feb 9 19:24:08.540079 env[1063]: time="2024-02-09T19:24:08.539982474Z" level=info msg="StartContainer for 
\"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\"" Feb 9 19:24:08.593668 systemd[1]: Started cri-containerd-0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0.scope. Feb 9 19:24:08.606103 systemd[1]: cri-containerd-0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0.scope: Deactivated successfully. Feb 9 19:24:08.640465 env[1063]: time="2024-02-09T19:24:08.640398331Z" level=info msg="shim disconnected" id=0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0 Feb 9 19:24:08.640465 env[1063]: time="2024-02-09T19:24:08.640466363Z" level=warning msg="cleaning up after shim disconnected" id=0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0 namespace=k8s.io Feb 9 19:24:08.640793 env[1063]: time="2024-02-09T19:24:08.640481864Z" level=info msg="cleaning up dead shim" Feb 9 19:24:08.650767 env[1063]: time="2024-02-09T19:24:08.650714477Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3980 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:24:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:24:08.651188 env[1063]: time="2024-02-09T19:24:08.651124822Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Feb 9 19:24:08.654639 env[1063]: time="2024-02-09T19:24:08.654596975Z" level=error msg="Failed to pipe stderr of container \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\"" error="reading from a closed fifo" Feb 9 19:24:08.654767 env[1063]: time="2024-02-09T19:24:08.654635750Z" level=error msg="Failed to pipe stdout of container \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\"" error="reading from a closed fifo" Feb 9 19:24:08.660639 
env[1063]: time="2024-02-09T19:24:08.660595367Z" level=error msg="StartContainer for \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:24:08.661500 kubelet[1932]: E0209 19:24:08.660952 1932 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0" Feb 9 19:24:08.661500 kubelet[1932]: E0209 19:24:08.661069 1932 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:24:08.661500 kubelet[1932]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:24:08.661500 kubelet[1932]: rm /hostbin/cilium-mount Feb 9 19:24:08.661783 kubelet[1932]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m87sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-4qjht_kube-system(4afbe729-9ee8-44e2-af17-8314ada1ebcc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:24:08.661944 kubelet[1932]: E0209 19:24:08.661112 1932 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4qjht" podUID=4afbe729-9ee8-44e2-af17-8314ada1ebcc Feb 9 19:24:09.461960 sshd[3868]: Accepted publickey for core from 172.24.4.1 port 45072 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:24:09.465437 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:24:09.476082 systemd-logind[1051]: New session 23 of user core. Feb 9 19:24:09.477833 systemd[1]: Started session-23.scope. Feb 9 19:24:09.507927 kubelet[1932]: I0209 19:24:09.507847 1932 scope.go:115] "RemoveContainer" containerID="6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe" Feb 9 19:24:09.509103 kubelet[1932]: I0209 19:24:09.508967 1932 scope.go:115] "RemoveContainer" containerID="6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe" Feb 9 19:24:09.527446 env[1063]: time="2024-02-09T19:24:09.526920074Z" level=info msg="RemoveContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" Feb 9 19:24:09.529233 env[1063]: time="2024-02-09T19:24:09.528796503Z" level=info msg="RemoveContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\"" Feb 9 19:24:09.529233 env[1063]: time="2024-02-09T19:24:09.528975670Z" level=error msg="RemoveContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\" failed" error="failed to set removing state for container \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\": container is already in removing state" Feb 9 19:24:09.529506 kubelet[1932]: E0209 19:24:09.529293 1932 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\": container is already in removing state" 
containerID="6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe" Feb 9 19:24:09.529506 kubelet[1932]: E0209 19:24:09.529394 1932 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe": container is already in removing state; Skipping pod "cilium-4qjht_kube-system(4afbe729-9ee8-44e2-af17-8314ada1ebcc)" Feb 9 19:24:09.530889 kubelet[1932]: E0209 19:24:09.530620 1932 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-4qjht_kube-system(4afbe729-9ee8-44e2-af17-8314ada1ebcc)\"" pod="kube-system/cilium-4qjht" podUID=4afbe729-9ee8-44e2-af17-8314ada1ebcc Feb 9 19:24:09.534442 env[1063]: time="2024-02-09T19:24:09.534381133Z" level=info msg="RemoveContainer for \"6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe\" returns successfully" Feb 9 19:24:09.947477 kubelet[1932]: E0209 19:24:09.947405 1932 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:24:10.269769 sshd[3868]: pam_unix(sshd:session): session closed for user core Feb 9 19:24:10.286967 systemd[1]: Started sshd@23-172.24.4.140:22-172.24.4.1:45082.service. Feb 9 19:24:10.288735 systemd[1]: sshd@22-172.24.4.140:22-172.24.4.1:45072.service: Deactivated successfully. Feb 9 19:24:10.290931 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:24:10.304051 systemd-logind[1051]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:24:10.309957 systemd-logind[1051]: Removed session 23. 
Feb 9 19:24:10.516341 kubelet[1932]: E0209 19:24:10.516241 1932 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-4qjht_kube-system(4afbe729-9ee8-44e2-af17-8314ada1ebcc)\"" pod="kube-system/cilium-4qjht" podUID=4afbe729-9ee8-44e2-af17-8314ada1ebcc Feb 9 19:24:11.399946 kubelet[1932]: W0209 19:24:11.399867 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4afbe729_9ee8_44e2_af17_8314ada1ebcc.slice/cri-containerd-6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe.scope WatchSource:0}: container "6beddb03dcb08e02d5dab7b90536fbb2f7afeb87369723b2749e2b2bc5fe8abe" in namespace "k8s.io": not found Feb 9 19:24:11.524193 env[1063]: time="2024-02-09T19:24:11.518938625Z" level=info msg="StopPodSandbox for \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\"" Feb 9 19:24:11.524193 env[1063]: time="2024-02-09T19:24:11.519081563Z" level=info msg="Container to stop \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:24:11.523858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb-shm.mount: Deactivated successfully. Feb 9 19:24:11.540866 systemd[1]: cri-containerd-723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb.scope: Deactivated successfully. Feb 9 19:24:11.588996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb-rootfs.mount: Deactivated successfully. 
Feb 9 19:24:11.608997 env[1063]: time="2024-02-09T19:24:11.608942147Z" level=info msg="shim disconnected" id=723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb Feb 9 19:24:11.609559 env[1063]: time="2024-02-09T19:24:11.609519547Z" level=warning msg="cleaning up after shim disconnected" id=723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb namespace=k8s.io Feb 9 19:24:11.613424 env[1063]: time="2024-02-09T19:24:11.609646744Z" level=info msg="cleaning up dead shim" Feb 9 19:24:11.620334 env[1063]: time="2024-02-09T19:24:11.620273887Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n" Feb 9 19:24:11.620696 env[1063]: time="2024-02-09T19:24:11.620665646Z" level=info msg="TearDown network for sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" successfully" Feb 9 19:24:11.620747 env[1063]: time="2024-02-09T19:24:11.620696657Z" level=info msg="StopPodSandbox for \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" returns successfully" Feb 9 19:24:11.623317 sshd[4003]: Accepted publickey for core from 172.24.4.1 port 45082 ssh2: RSA SHA256:0cKtuwQ+yBp2KK/6KUCEpkWDg4c+XXZ9qW4sy+pe7oM Feb 9 19:24:11.624936 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:24:11.633812 systemd[1]: Started session-24.scope. Feb 9 19:24:11.634739 systemd-logind[1051]: New session 24 of user core. 
Feb 9 19:24:11.743401 kubelet[1932]: I0209 19:24:11.743130 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-clustermesh-secrets\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.743401 kubelet[1932]: I0209 19:24:11.743233 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-cgroup\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.743401 kubelet[1932]: I0209 19:24:11.743287 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-run\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.743401 kubelet[1932]: I0209 19:24:11.743351 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m87sb\" (UniqueName: \"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-kube-api-access-m87sb\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.744791 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-kernel\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.744888 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-lib-modules\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.745028 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-ipsec-secrets\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.745096 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-xtables-lock\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.745163 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-config-path\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.746480 kubelet[1932]: I0209 19:24:11.745219 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-etc-cni-netd\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745307 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-bpf-maps\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745364 1932 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-net\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745487 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hubble-tls\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745609 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hostproc\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745667 1932 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cni-path\") pod \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\" (UID: \"4afbe729-9ee8-44e2-af17-8314ada1ebcc\") " Feb 9 19:24:11.747110 kubelet[1932]: I0209 19:24:11.745820 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cni-path" (OuterVolumeSpecName: "cni-path") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.747974 kubelet[1932]: I0209 19:24:11.747926 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.749294 kubelet[1932]: I0209 19:24:11.748162 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.749978 kubelet[1932]: I0209 19:24:11.748247 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.750196 kubelet[1932]: W0209 19:24:11.748709 1932 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4afbe729-9ee8-44e2-af17-8314ada1ebcc/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:24:11.751805 kubelet[1932]: I0209 19:24:11.748770 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.752071 kubelet[1932]: I0209 19:24:11.748803 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.752267 kubelet[1932]: I0209 19:24:11.748833 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.752439 kubelet[1932]: I0209 19:24:11.749222 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hostproc" (OuterVolumeSpecName: "hostproc") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.752745 kubelet[1932]: I0209 19:24:11.749501 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.752999 kubelet[1932]: I0209 19:24:11.749579 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:24:11.757819 kubelet[1932]: I0209 19:24:11.757765 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:24:11.761730 systemd[1]: var-lib-kubelet-pods-4afbe729\x2d9ee8\x2d44e2\x2daf17\x2d8314ada1ebcc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:24:11.764986 kubelet[1932]: I0209 19:24:11.764907 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:11.770662 systemd[1]: var-lib-kubelet-pods-4afbe729\x2d9ee8\x2d44e2\x2daf17\x2d8314ada1ebcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm87sb.mount: Deactivated successfully. 
Feb 9 19:24:11.773080 kubelet[1932]: I0209 19:24:11.772973 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-kube-api-access-m87sb" (OuterVolumeSpecName: "kube-api-access-m87sb") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "kube-api-access-m87sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:11.777877 systemd[1]: var-lib-kubelet-pods-4afbe729\x2d9ee8\x2d44e2\x2daf17\x2d8314ada1ebcc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:24:11.780425 kubelet[1932]: I0209 19:24:11.780339 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:24:11.783297 kubelet[1932]: I0209 19:24:11.783243 1932 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4afbe729-9ee8-44e2-af17-8314ada1ebcc" (UID: "4afbe729-9ee8-44e2-af17-8314ada1ebcc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:24:11.840458 systemd[1]: Removed slice kubepods-burstable-pod4afbe729_9ee8_44e2_af17_8314ada1ebcc.slice. 
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.846987 1932 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cni-path\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847052 1932 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-bpf-maps\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847093 1932 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-net\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847124 1932 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hubble-tls\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847154 1932 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-hostproc\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847185 1932 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-clustermesh-secrets\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.850175 kubelet[1932]: I0209 19:24:11.847228 1932 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-m87sb\" (UniqueName: \"kubernetes.io/projected/4afbe729-9ee8-44e2-af17-8314ada1ebcc-kube-api-access-m87sb\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847260 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-cgroup\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847290 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-run\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847321 1932 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-host-proc-sys-kernel\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847350 1932 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-lib-modules\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847380 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-ipsec-secrets\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847411 1932 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-xtables-lock\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851201 kubelet[1932]: I0209 19:24:11.847441 1932 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4afbe729-9ee8-44e2-af17-8314ada1ebcc-etc-cni-netd\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:11.851792 kubelet[1932]: I0209 19:24:11.847521 1932 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4afbe729-9ee8-44e2-af17-8314ada1ebcc-cilium-config-path\") on node \"ci-3510-3-2-c-a855e53d7e.novalocal\" DevicePath \"\""
Feb 9 19:24:12.523463 kubelet[1932]: I0209 19:24:12.523442 1932 scope.go:115] "RemoveContainer" containerID="0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0"
Feb 9 19:24:12.524757 systemd[1]: var-lib-kubelet-pods-4afbe729\x2d9ee8\x2d44e2\x2daf17\x2d8314ada1ebcc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:24:12.527648 env[1063]: time="2024-02-09T19:24:12.527008641Z" level=info msg="RemoveContainer for \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\""
Feb 9 19:24:12.532443 env[1063]: time="2024-02-09T19:24:12.532301778Z" level=info msg="RemoveContainer for \"0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0\" returns successfully"
Feb 9 19:24:12.571435 kubelet[1932]: I0209 19:24:12.571355 1932 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:24:12.571627 kubelet[1932]: E0209 19:24:12.571471 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4afbe729-9ee8-44e2-af17-8314ada1ebcc" containerName="mount-cgroup"
Feb 9 19:24:12.571627 kubelet[1932]: E0209 19:24:12.571489 1932 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4afbe729-9ee8-44e2-af17-8314ada1ebcc" containerName="mount-cgroup"
Feb 9 19:24:12.571627 kubelet[1932]: I0209 19:24:12.571520 1932 memory_manager.go:346] "RemoveStaleState removing state" podUID="4afbe729-9ee8-44e2-af17-8314ada1ebcc" containerName="mount-cgroup"
Feb 9 19:24:12.571627 kubelet[1932]: I0209 19:24:12.571530 1932 memory_manager.go:346] "RemoveStaleState removing state" podUID="4afbe729-9ee8-44e2-af17-8314ada1ebcc" containerName="mount-cgroup"
Feb 9 19:24:12.578266 systemd[1]: Created slice kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice.
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657206 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-cilium-run\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657330 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c816d91a-f413-410a-aa8a-8f708e84f168-cilium-ipsec-secrets\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657372 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-host-proc-sys-net\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657418 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-bpf-maps\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657448 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-hostproc\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.657270 kubelet[1932]: I0209 19:24:12.657512 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-etc-cni-netd\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657569 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-lib-modules\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657600 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g9sf\" (UniqueName: \"kubernetes.io/projected/c816d91a-f413-410a-aa8a-8f708e84f168-kube-api-access-6g9sf\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657645 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-cni-path\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657677 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-xtables-lock\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657703 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c816d91a-f413-410a-aa8a-8f708e84f168-clustermesh-secrets\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659051 kubelet[1932]: I0209 19:24:12.657749 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c816d91a-f413-410a-aa8a-8f708e84f168-cilium-config-path\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659229 kubelet[1932]: I0209 19:24:12.657775 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-host-proc-sys-kernel\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659229 kubelet[1932]: I0209 19:24:12.657822 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c816d91a-f413-410a-aa8a-8f708e84f168-cilium-cgroup\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.659229 kubelet[1932]: I0209 19:24:12.657851 1932 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c816d91a-f413-410a-aa8a-8f708e84f168-hubble-tls\") pod \"cilium-5prvw\" (UID: \"c816d91a-f413-410a-aa8a-8f708e84f168\") " pod="kube-system/cilium-5prvw"
Feb 9 19:24:12.883401 env[1063]: time="2024-02-09T19:24:12.882246110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5prvw,Uid:c816d91a-f413-410a-aa8a-8f708e84f168,Namespace:kube-system,Attempt:0,}"
Feb 9 19:24:12.919479 env[1063]: time="2024-02-09T19:24:12.919355133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:24:12.920680 env[1063]: time="2024-02-09T19:24:12.920611810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:24:12.921004 env[1063]: time="2024-02-09T19:24:12.920904006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:24:12.921690 env[1063]: time="2024-02-09T19:24:12.921609425Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b pid=4060 runtime=io.containerd.runc.v2
Feb 9 19:24:12.955670 systemd[1]: Started cri-containerd-b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b.scope.
Feb 9 19:24:13.027961 env[1063]: time="2024-02-09T19:24:13.027913832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5prvw,Uid:c816d91a-f413-410a-aa8a-8f708e84f168,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\""
Feb 9 19:24:13.031062 env[1063]: time="2024-02-09T19:24:13.031027271Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:24:13.053343 env[1063]: time="2024-02-09T19:24:13.053239561Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c\""
Feb 9 19:24:13.054478 env[1063]: time="2024-02-09T19:24:13.054430281Z" level=info msg="StartContainer for \"4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c\""
Feb 9 19:24:13.076248 systemd[1]: Started cri-containerd-4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c.scope.
Feb 9 19:24:13.154609 env[1063]: time="2024-02-09T19:24:13.154429886Z" level=info msg="StartContainer for \"4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c\" returns successfully"
Feb 9 19:24:13.166408 systemd[1]: cri-containerd-4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c.scope: Deactivated successfully.
Feb 9 19:24:13.223140 env[1063]: time="2024-02-09T19:24:13.223048778Z" level=info msg="shim disconnected" id=4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c
Feb 9 19:24:13.223140 env[1063]: time="2024-02-09T19:24:13.223121959Z" level=warning msg="cleaning up after shim disconnected" id=4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c namespace=k8s.io
Feb 9 19:24:13.223140 env[1063]: time="2024-02-09T19:24:13.223134604Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:13.241327 env[1063]: time="2024-02-09T19:24:13.241261488Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4143 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:13.549808 env[1063]: time="2024-02-09T19:24:13.549685861Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:24:13.597137 env[1063]: time="2024-02-09T19:24:13.597042287Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8\""
Feb 9 19:24:13.598375 env[1063]: time="2024-02-09T19:24:13.598319594Z" level=info msg="StartContainer for \"475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8\""
Feb 9 19:24:13.618664 systemd[1]: Started cri-containerd-475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8.scope.
Feb 9 19:24:13.666178 systemd[1]: cri-containerd-475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8.scope: Deactivated successfully.
Feb 9 19:24:13.670072 env[1063]: time="2024-02-09T19:24:13.670024672Z" level=info msg="StartContainer for \"475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8\" returns successfully"
Feb 9 19:24:13.711975 env[1063]: time="2024-02-09T19:24:13.711916978Z" level=info msg="shim disconnected" id=475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8
Feb 9 19:24:13.712240 env[1063]: time="2024-02-09T19:24:13.711980802Z" level=warning msg="cleaning up after shim disconnected" id=475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8 namespace=k8s.io
Feb 9 19:24:13.712240 env[1063]: time="2024-02-09T19:24:13.711992735Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:13.721642 env[1063]: time="2024-02-09T19:24:13.721586595Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4205 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:13.832348 kubelet[1932]: I0209 19:24:13.832185 1932 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4afbe729-9ee8-44e2-af17-8314ada1ebcc path="/var/lib/kubelet/pods/4afbe729-9ee8-44e2-af17-8314ada1ebcc/volumes"
Feb 9 19:24:13.925309 kubelet[1932]: I0209 19:24:13.925264 1932 setters.go:548] "Node became not ready" node="ci-3510-3-2-c-a855e53d7e.novalocal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:24:13.925170832 +0000 UTC m=+164.396252602 LastTransitionTime:2024-02-09 19:24:13.925170832 +0000 UTC m=+164.396252602 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:24:14.513184 kubelet[1932]: W0209 19:24:14.513095 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4afbe729_9ee8_44e2_af17_8314ada1ebcc.slice/cri-containerd-0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0.scope WatchSource:0}: container "0eb045803a69974fd946433adb2fb9a558871f00d72fe8fd46a4971ef9f1e1c0" in namespace "k8s.io": not found
Feb 9 19:24:14.530876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8-rootfs.mount: Deactivated successfully.
Feb 9 19:24:14.591062 env[1063]: time="2024-02-09T19:24:14.590990796Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:24:14.636286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835530223.mount: Deactivated successfully.
Feb 9 19:24:14.645725 env[1063]: time="2024-02-09T19:24:14.645661866Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51\""
Feb 9 19:24:14.647760 env[1063]: time="2024-02-09T19:24:14.646379929Z" level=info msg="StartContainer for \"1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51\""
Feb 9 19:24:14.670191 systemd[1]: Started cri-containerd-1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51.scope.
Feb 9 19:24:14.723963 env[1063]: time="2024-02-09T19:24:14.723915147Z" level=info msg="StartContainer for \"1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51\" returns successfully"
Feb 9 19:24:14.745764 systemd[1]: cri-containerd-1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51.scope: Deactivated successfully.
Feb 9 19:24:14.782792 env[1063]: time="2024-02-09T19:24:14.782730899Z" level=info msg="shim disconnected" id=1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51
Feb 9 19:24:14.782792 env[1063]: time="2024-02-09T19:24:14.782807978Z" level=warning msg="cleaning up after shim disconnected" id=1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51 namespace=k8s.io
Feb 9 19:24:14.783222 env[1063]: time="2024-02-09T19:24:14.782822036Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:14.794231 env[1063]: time="2024-02-09T19:24:14.794155142Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4265 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:14.949224 kubelet[1932]: E0209 19:24:14.949071 1932 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:24:15.528002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51-rootfs.mount: Deactivated successfully.
Feb 9 19:24:15.595887 env[1063]: time="2024-02-09T19:24:15.595842323Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:24:15.624898 env[1063]: time="2024-02-09T19:24:15.624844421Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8\""
Feb 9 19:24:15.625863 env[1063]: time="2024-02-09T19:24:15.625837007Z" level=info msg="StartContainer for \"eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8\""
Feb 9 19:24:15.667063 systemd[1]: Started cri-containerd-eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8.scope.
Feb 9 19:24:15.703527 systemd[1]: cri-containerd-eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8.scope: Deactivated successfully.
Feb 9 19:24:15.706653 env[1063]: time="2024-02-09T19:24:15.705819080Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice/cri-containerd-eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8.scope/memory.events\": no such file or directory"
Feb 9 19:24:15.712387 env[1063]: time="2024-02-09T19:24:15.712276238Z" level=info msg="StartContainer for \"eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8\" returns successfully"
Feb 9 19:24:15.745866 env[1063]: time="2024-02-09T19:24:15.745806530Z" level=info msg="shim disconnected" id=eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8
Feb 9 19:24:15.745866 env[1063]: time="2024-02-09T19:24:15.745863391Z" level=warning msg="cleaning up after shim disconnected" id=eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8 namespace=k8s.io
Feb 9 19:24:15.745866 env[1063]: time="2024-02-09T19:24:15.745876465Z" level=info msg="cleaning up dead shim"
Feb 9 19:24:15.756799 env[1063]: time="2024-02-09T19:24:15.756725805Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:24:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n"
Feb 9 19:24:16.526507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8-rootfs.mount: Deactivated successfully.
Feb 9 19:24:16.599204 env[1063]: time="2024-02-09T19:24:16.599070779Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:24:16.657682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888755113.mount: Deactivated successfully.
Feb 9 19:24:16.664754 env[1063]: time="2024-02-09T19:24:16.664629076Z" level=info msg="CreateContainer within sandbox \"b3c51419b5b68f2c7b2168183a6194b7e5b8bd2eea0c3ea82b07d6374bc6b63b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17\""
Feb 9 19:24:16.667633 env[1063]: time="2024-02-09T19:24:16.667501900Z" level=info msg="StartContainer for \"cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17\""
Feb 9 19:24:16.701652 systemd[1]: Started cri-containerd-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17.scope.
Feb 9 19:24:16.759240 env[1063]: time="2024-02-09T19:24:16.759178092Z" level=info msg="StartContainer for \"cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17\" returns successfully"
Feb 9 19:24:17.655960 kubelet[1932]: W0209 19:24:17.655859 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice/cri-containerd-4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c.scope WatchSource:0}: task 4f2ff9863093c19c863c022238b9468aaf4833577324ad28164a511c0a0ce45c not found: not found
Feb 9 19:24:17.686893 kubelet[1932]: I0209 19:24:17.686826 1932 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-5prvw" podStartSLOduration=5.686733827 pod.CreationTimestamp="2024-02-09 19:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:24:17.682341553 +0000 UTC m=+168.153423283" watchObservedRunningTime="2024-02-09 19:24:17.686733827 +0000 UTC m=+168.157815567"
Feb 9 19:24:17.977594 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:24:18.037623 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Feb 9 19:24:18.887943 systemd[1]: run-containerd-runc-k8s.io-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17-runc.0eFOj6.mount: Deactivated successfully.
Feb 9 19:24:20.776242 kubelet[1932]: W0209 19:24:20.776183 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice/cri-containerd-475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8.scope WatchSource:0}: task 475b0c6d905f278682215739c44e0584ef69fd18ae906615bcb8df955b5407b8 not found: not found
Feb 9 19:24:21.024276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:24:21.011023 systemd-networkd[978]: lxc_health: Link UP
Feb 9 19:24:21.018163 systemd-networkd[978]: lxc_health: Gained carrier
Feb 9 19:24:21.122044 systemd[1]: run-containerd-runc-k8s.io-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17-runc.nRzsjV.mount: Deactivated successfully.
Feb 9 19:24:22.779177 systemd-networkd[978]: lxc_health: Gained IPv6LL
Feb 9 19:24:23.389614 systemd[1]: run-containerd-runc-k8s.io-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17-runc.bhvw6I.mount: Deactivated successfully.
Feb 9 19:24:23.887649 kubelet[1932]: W0209 19:24:23.887525 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice/cri-containerd-1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51.scope WatchSource:0}: task 1ad147257c847ac05ec2b028f83206fb4065a087025d372c2f03f1f0b9244e51 not found: not found
Feb 9 19:24:25.681020 systemd[1]: run-containerd-runc-k8s.io-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17-runc.j2KkZA.mount: Deactivated successfully.
Feb 9 19:24:27.002461 kubelet[1932]: W0209 19:24:27.002266 1932 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc816d91a_f413_410a_aa8a_8f708e84f168.slice/cri-containerd-eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8.scope WatchSource:0}: task eae7db0bbc99c1966e58e64c86b1409f7b013a3591922fc229a7e4a53b6b4da8 not found: not found
Feb 9 19:24:27.900735 systemd[1]: run-containerd-runc-k8s.io-cc6e7acca70eaa5fe90e343868201b7ca33c5a45d65a09e358f9569243639a17-runc.yr0aPQ.mount: Deactivated successfully.
Feb 9 19:24:27.981723 kubelet[1932]: E0209 19:24:27.981691 1932 upgradeaware.go:426] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57268->127.0.0.1:36769: write tcp 127.0.0.1:57268->127.0.0.1:36769: write: broken pipe
Feb 9 19:24:28.336015 sshd[4003]: pam_unix(sshd:session): session closed for user core
Feb 9 19:24:28.342052 systemd[1]: sshd@23-172.24.4.140:22-172.24.4.1:45082.service: Deactivated successfully.
Feb 9 19:24:28.343806 systemd[1]: session-24.scope: Deactivated successfully.
Feb 9 19:24:28.345698 systemd-logind[1051]: Session 24 logged out. Waiting for processes to exit.
Feb 9 19:24:28.348358 systemd-logind[1051]: Removed session 24.
Feb 9 19:24:29.814937 env[1063]: time="2024-02-09T19:24:29.814471477Z" level=info msg="StopPodSandbox for \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\""
Feb 9 19:24:29.814937 env[1063]: time="2024-02-09T19:24:29.814748508Z" level=info msg="TearDown network for sandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" successfully"
Feb 9 19:24:29.814937 env[1063]: time="2024-02-09T19:24:29.814831307Z" level=info msg="StopPodSandbox for \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" returns successfully"
Feb 9 19:24:29.817069 env[1063]: time="2024-02-09T19:24:29.816446937Z" level=info msg="RemovePodSandbox for \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\""
Feb 9 19:24:29.817069 env[1063]: time="2024-02-09T19:24:29.816515270Z" level=info msg="Forcibly stopping sandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\""
Feb 9 19:24:29.817069 env[1063]: time="2024-02-09T19:24:29.816736220Z" level=info msg="TearDown network for sandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" successfully"
Feb 9 19:24:29.831767 env[1063]: time="2024-02-09T19:24:29.831657580Z" level=info msg="RemovePodSandbox \"c11817a09944388d0f1345cf663b989f4f332135151ae43a7fcd5c3c3ee1d8fb\" returns successfully"
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.833030561Z" level=info msg="StopPodSandbox for \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\""
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.833237595Z" level=info msg="TearDown network for sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" successfully"
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.833317579Z" level=info msg="StopPodSandbox for \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" returns successfully"
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.833918886Z" level=info msg="RemovePodSandbox for \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\""
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.833969328Z" level=info msg="Forcibly stopping sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\""
Feb 9 19:24:29.834361 env[1063]: time="2024-02-09T19:24:29.834107508Z" level=info msg="TearDown network for sandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" successfully"
Feb 9 19:24:29.845009 env[1063]: time="2024-02-09T19:24:29.841894012Z" level=info msg="RemovePodSandbox \"723bb01d78222074121f1d444e8f1e871359d3cc3e4fbf26d2340eb3811681fb\" returns successfully"
Feb 9 19:24:29.845864 env[1063]: time="2024-02-09T19:24:29.845766660Z" level=info msg="StopPodSandbox for \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\""
Feb 9 19:24:29.846098 env[1063]: time="2024-02-09T19:24:29.845986538Z" level=info msg="TearDown network for sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" successfully"
Feb 9 19:24:29.846098 env[1063]: time="2024-02-09T19:24:29.846084235Z" level=info msg="StopPodSandbox for \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" returns successfully"
Feb 9 19:24:29.846982 env[1063]: time="2024-02-09T19:24:29.846931426Z" level=info msg="RemovePodSandbox for \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\""
Feb 9 19:24:29.847288 env[1063]: time="2024-02-09T19:24:29.847199671Z" level=info msg="Forcibly stopping sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\""
Feb 9 19:24:29.847610 env[1063]: time="2024-02-09T19:24:29.847522404Z" level=info msg="TearDown network for sandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" successfully"
Feb 9 19:24:29.853941 env[1063]: time="2024-02-09T19:24:29.853879915Z" level=info msg="RemovePodSandbox \"318be02cc0a7d24d72878bd3cc678cf53a4eb89d24f6dc7e37e3a53ba7003448\" returns successfully"