Dec 13 14:36:27.011064 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:36:27.011112 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:36:27.011141 kernel: BIOS-provided physical RAM map: Dec 13 14:36:27.011159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:36:27.011175 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:36:27.011191 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:36:27.011210 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Dec 13 14:36:27.013308 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Dec 13 14:36:27.013335 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:36:27.013352 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:36:27.013368 kernel: NX (Execute Disable) protection: active Dec 13 14:36:27.013384 kernel: SMBIOS 2.8 present. Dec 13 14:36:27.013401 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Dec 13 14:36:27.013418 kernel: Hypervisor detected: KVM Dec 13 14:36:27.013438 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:36:27.013461 kernel: kvm-clock: cpu 0, msr 5919a001, primary cpu clock Dec 13 14:36:27.013479 kernel: kvm-clock: using sched offset of 5928647544 cycles Dec 13 14:36:27.013498 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:36:27.013516 kernel: tsc: Detected 1996.249 MHz processor Dec 13 14:36:27.013535 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:36:27.013554 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:36:27.013572 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Dec 13 14:36:27.013590 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:36:27.013611 kernel: ACPI: Early table checksum verification disabled Dec 13 14:36:27.013629 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS ) Dec 13 14:36:27.013647 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:36:27.013666 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:36:27.013684 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:36:27.013701 kernel: ACPI: FACS 0x000000007FFE0000 000040 Dec 13 14:36:27.013719 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:36:27.013737 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:36:27.013755 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f] Dec 13 14:36:27.013776 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b] Dec 13 14:36:27.013794 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Dec 13 14:36:27.013811 kernel: ACPI: Reserving APIC table memory at [mem 
0x7ffe17a0-0x7ffe181f] Dec 13 14:36:27.013829 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847] Dec 13 14:36:27.013847 kernel: No NUMA configuration found Dec 13 14:36:27.013864 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff] Dec 13 14:36:27.013882 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff] Dec 13 14:36:27.013900 kernel: Zone ranges: Dec 13 14:36:27.013928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:36:27.013947 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff] Dec 13 14:36:27.013966 kernel: Normal empty Dec 13 14:36:27.013985 kernel: Movable zone start for each node Dec 13 14:36:27.014003 kernel: Early memory node ranges Dec 13 14:36:27.014022 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:36:27.014043 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Dec 13 14:36:27.014062 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff] Dec 13 14:36:27.014080 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:36:27.014098 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:36:27.014117 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges Dec 13 14:36:27.014135 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:36:27.014154 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:36:27.014172 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:36:27.014191 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:36:27.014213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:36:27.014263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:36:27.014281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:36:27.014300 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:36:27.014313 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:36:27.014327 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 14:36:27.014341 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Dec 13 14:36:27.014355 kernel: Booting paravirtualized kernel on KVM Dec 13 14:36:27.014369 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:36:27.014384 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 14:36:27.014402 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 14:36:27.014416 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 14:36:27.014430 kernel: pcpu-alloc: [0] 0 1 Dec 13 14:36:27.014443 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Dec 13 14:36:27.014457 kernel: kvm-guest: PV spinlocks disabled, no host support Dec 13 14:36:27.014471 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515805 Dec 13 14:36:27.014485 kernel: Policy zone: DMA32 Dec 13 14:36:27.014501 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:36:27.014520 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:36:27.014535 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:36:27.014549 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 14:36:27.014563 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:36:27.014577 kernel: Memory: 1973284K/2096620K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123076K reserved, 0K cma-reserved) Dec 13 14:36:27.014592 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:36:27.014605 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:36:27.014619 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:36:27.014636 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:36:27.014651 kernel: rcu: RCU event tracing is enabled. Dec 13 14:36:27.014666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:36:27.014680 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:36:27.014694 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:36:27.014708 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:36:27.014722 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:36:27.014736 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 14:36:27.014750 kernel: Console: colour VGA+ 80x25 Dec 13 14:36:27.014769 kernel: printk: console [tty0] enabled Dec 13 14:36:27.014783 kernel: printk: console [ttyS0] enabled Dec 13 14:36:27.014797 kernel: ACPI: Core revision 20210730 Dec 13 14:36:27.014811 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:36:27.014825 kernel: x2apic enabled Dec 13 14:36:27.014839 kernel: Switched APIC routing to physical x2apic. Dec 13 14:36:27.014853 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:36:27.014867 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:36:27.014881 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Dec 13 14:36:27.014895 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 14:36:27.014912 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 14:36:27.014926 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:36:27.014940 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:36:27.014955 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:36:27.014969 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:36:27.014983 kernel: Speculative Store Bypass: Vulnerable Dec 13 14:36:27.014997 kernel: x86/fpu: x87 FPU will use FXSAVE Dec 13 14:36:27.015010 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:36:27.015024 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:36:27.015041 kernel: LSM: Security Framework initializing Dec 13 14:36:27.015055 kernel: SELinux: Initializing. Dec 13 14:36:27.015069 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:36:27.015083 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 14:36:27.015098 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Dec 13 14:36:27.015112 kernel: Performance Events: AMD PMU driver. Dec 13 14:36:27.015126 kernel: ... version: 0 Dec 13 14:36:27.015140 kernel: ... bit width: 48 Dec 13 14:36:27.015154 kernel: ... generic registers: 4 Dec 13 14:36:27.015183 kernel: ... value mask: 0000ffffffffffff Dec 13 14:36:27.015198 kernel: ... max period: 00007fffffffffff Dec 13 14:36:27.018244 kernel: ... fixed-purpose events: 0 Dec 13 14:36:27.018258 kernel: ... event mask: 000000000000000f Dec 13 14:36:27.018266 kernel: signal: max sigframe size: 1440 Dec 13 14:36:27.018274 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:36:27.018282 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:36:27.018290 kernel: x86: Booting SMP configuration: Dec 13 14:36:27.018303 kernel: .... 
node #0, CPUs: #1 Dec 13 14:36:27.018311 kernel: kvm-clock: cpu 1, msr 5919a041, secondary cpu clock Dec 13 14:36:27.018319 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Dec 13 14:36:27.018327 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:36:27.018334 kernel: smpboot: Max logical packages: 2 Dec 13 14:36:27.018343 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Dec 13 14:36:27.018350 kernel: devtmpfs: initialized Dec 13 14:36:27.018358 kernel: x86/mm: Memory block size: 128MB Dec 13 14:36:27.018366 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:36:27.018377 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:36:27.018385 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:36:27.018393 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:36:27.018401 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:36:27.018409 kernel: audit: type=2000 audit(1734100586.285:1): state=initialized audit_enabled=0 res=1 Dec 13 14:36:27.018417 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:36:27.018425 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:36:27.018433 kernel: cpuidle: using governor menu Dec 13 14:36:27.018440 kernel: ACPI: bus type PCI registered Dec 13 14:36:27.018450 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:36:27.018457 kernel: dca service started, version 1.12.1 Dec 13 14:36:27.018465 kernel: PCI: Using configuration type 1 for base access Dec 13 14:36:27.018474 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 14:36:27.018482 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:36:27.018490 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:36:27.018497 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:36:27.018505 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:36:27.018513 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:36:27.018523 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:36:27.018531 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:36:27.018538 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:36:27.018546 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:36:27.018554 kernel: ACPI: Interpreter enabled Dec 13 14:36:27.018562 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:36:27.018570 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:36:27.018578 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:36:27.018586 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Dec 13 14:36:27.018595 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:36:27.018728 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:36:27.018812 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 14:36:27.018826 kernel: acpiphp: Slot [3] registered Dec 13 14:36:27.018834 kernel: acpiphp: Slot [4] registered Dec 13 14:36:27.018841 kernel: acpiphp: Slot [5] registered Dec 13 14:36:27.018849 kernel: acpiphp: Slot [6] registered Dec 13 14:36:27.018860 kernel: acpiphp: Slot [7] registered Dec 13 14:36:27.018868 kernel: acpiphp: Slot [8] registered Dec 13 14:36:27.018875 kernel: acpiphp: Slot [9] registered Dec 13 14:36:27.018883 kernel: acpiphp: Slot [10] registered Dec 13 14:36:27.018891 kernel: acpiphp: Slot [11] registered Dec 13 14:36:27.018898 kernel: acpiphp: Slot [12] registered Dec 13 14:36:27.018906 kernel: acpiphp: Slot [13] registered Dec 13 14:36:27.018914 kernel: acpiphp: Slot [14] registered Dec 13 14:36:27.018921 kernel: acpiphp: Slot [15] registered Dec 13 14:36:27.018929 kernel: acpiphp: Slot [16] registered Dec 13 14:36:27.018939 kernel: acpiphp: Slot [17] registered Dec 13 14:36:27.018946 kernel: acpiphp: Slot [18] registered Dec 13 14:36:27.018954 kernel: acpiphp: Slot [19] registered Dec 13 14:36:27.018962 kernel: acpiphp: Slot [20] registered Dec 13 14:36:27.018969 kernel: acpiphp: Slot [21] registered Dec 13 14:36:27.018977 kernel: acpiphp: Slot [22] registered Dec 13 14:36:27.018985 kernel: acpiphp: Slot [23] registered Dec 13 14:36:27.018992 kernel: acpiphp: Slot [24] registered Dec 13 14:36:27.019000 kernel: acpiphp: Slot [25] registered Dec 13 14:36:27.019010 kernel: acpiphp: Slot [26] registered Dec 13 14:36:27.019017 kernel: acpiphp: Slot [27] registered Dec 13 14:36:27.019025 kernel: acpiphp: Slot [28] registered Dec 13 14:36:27.019032 kernel: acpiphp: Slot [29] registered Dec 13 14:36:27.019040 kernel: acpiphp: Slot [30] registered Dec 13 14:36:27.019048 kernel: acpiphp: Slot [31] registered Dec 13 14:36:27.019056 kernel: PCI host bridge to bus 0000:00 Dec 13 14:36:27.019150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:36:27.019243 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:36:27.019322 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:36:27.019393 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 14:36:27.019463 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Dec 13 14:36:27.019543 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:36:27.019638 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 14:36:27.019729 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 14:36:27.019824 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Dec 13 14:36:27.019907 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Dec 13 14:36:27.019990 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Dec 13 14:36:27.020070 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Dec 13 14:36:27.020151 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Dec 13 14:36:27.020250 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Dec 13 14:36:27.020341 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 14:36:27.020429 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Dec 13 14:36:27.020510 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Dec 13 14:36:27.020605 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Dec 13 14:36:27.020688 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Dec 13 
14:36:27.020771 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Dec 13 14:36:27.020867 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Dec 13 14:36:27.020954 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Dec 13 14:36:27.021036 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:36:27.021127 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:36:27.021209 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Dec 13 14:36:27.021312 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Dec 13 14:36:27.021393 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Dec 13 14:36:27.021473 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Dec 13 14:36:27.021567 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:36:27.021649 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:36:27.021731 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Dec 13 14:36:27.021813 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Dec 13 14:36:27.021900 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Dec 13 14:36:27.021993 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Dec 13 14:36:27.022074 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Dec 13 14:36:27.022166 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:36:27.025301 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Dec 13 14:36:27.025391 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Dec 13 14:36:27.025404 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:36:27.025413 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:36:27.025421 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:36:27.025429 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:36:27.025437 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 14:36:27.025449 kernel: iommu: Default domain type: Translated Dec 13 14:36:27.025456 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:36:27.025538 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Dec 13 14:36:27.025622 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:36:27.025705 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Dec 13 14:36:27.025717 kernel: vgaarb: loaded Dec 13 14:36:27.025725 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:36:27.025733 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:36:27.025741 kernel: PTP clock support registered Dec 13 14:36:27.025753 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:36:27.025760 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:36:27.025768 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:36:27.025776 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Dec 13 14:36:27.025784 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:36:27.025791 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:36:27.025799 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:36:27.025807 kernel: pnp: PnP ACPI init Dec 13 14:36:27.025904 kernel: pnp 00:03: [dma 2] Dec 13 14:36:27.025920 kernel: pnp: PnP ACPI: found 5 devices Dec 13 14:36:27.025928 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:36:27.025936 kernel: NET: Registered PF_INET protocol family Dec 13 14:36:27.025944 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:36:27.025952 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 14:36:27.025960 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:36:27.025968 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 14:36:27.025976 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 14:36:27.025986 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 14:36:27.025994 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:36:27.026002 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 14:36:27.026009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:36:27.026017 kernel: NET: Registered PF_XDP protocol family Dec 13 14:36:27.026088 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:36:27.026164 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:36:27.026258 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:36:27.026331 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 14:36:27.026407 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Dec 13 14:36:27.026489 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Dec 13 14:36:27.026572 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 14:36:27.026652 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 14:36:27.026664 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:36:27.026672 kernel: Initialise system trusted keyrings Dec 13 14:36:27.026680 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 14:36:27.026692 kernel: Key type asymmetric registered Dec 13 14:36:27.026699 kernel: Asymmetric key parser 'x509' registered Dec 13 14:36:27.026707 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:36:27.026715 kernel: io scheduler mq-deadline registered Dec 13 14:36:27.026723 kernel: io scheduler kyber registered Dec 13 14:36:27.026731 kernel: io scheduler bfq registered Dec 13 14:36:27.026739 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:36:27.026747 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Dec 13 14:36:27.026755 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 14:36:27.026763 kernel: ACPI: \_SB_.LNKD: Enabled at 
IRQ 11 Dec 13 14:36:27.026772 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 14:36:27.026780 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:36:27.026787 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:36:27.026796 kernel: random: crng init done Dec 13 14:36:27.026803 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:36:27.026811 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:36:27.026819 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:36:27.026908 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:36:27.026925 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:36:27.026999 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:36:27.027073 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:36:26 UTC (1734100586) Dec 13 14:36:27.027146 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Dec 13 14:36:27.027158 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:36:27.027166 kernel: Segment Routing with IPv6 Dec 13 14:36:27.027174 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:36:27.027181 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:36:27.027189 kernel: Key type dns_resolver registered Dec 13 14:36:27.027201 kernel: IPI shorthand broadcast: enabled Dec 13 14:36:27.027209 kernel: sched_clock: Marking stable (724468392, 121880294)->(871266209, -24917523) Dec 13 14:36:27.027231 kernel: registered taskstats version 1 Dec 13 14:36:27.027239 kernel: Loading compiled-in X.509 certificates Dec 13 14:36:27.027247 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:36:27.027255 kernel: Key type .fscrypt registered Dec 13 14:36:27.027262 kernel: Key type fscrypt-provisioning registered Dec 13 14:36:27.027270 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:36:27.027282 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:36:27.027290 kernel: ima: No architecture policies found Dec 13 14:36:27.027297 kernel: clk: Disabling unused clocks Dec 13 14:36:27.027305 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:36:27.027313 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:36:27.027321 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:36:27.027329 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:36:27.027336 kernel: Run /init as init process Dec 13 14:36:27.027344 kernel: with arguments: Dec 13 14:36:27.027354 kernel: /init Dec 13 14:36:27.027361 kernel: with environment: Dec 13 14:36:27.027369 kernel: HOME=/ Dec 13 14:36:27.027377 kernel: TERM=linux Dec 13 14:36:27.027384 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:36:27.027395 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:36:27.027405 systemd[1]: Detected virtualization kvm. Dec 13 14:36:27.027414 systemd[1]: Detected architecture x86-64. Dec 13 14:36:27.027425 systemd[1]: Running in initrd. Dec 13 14:36:27.027433 systemd[1]: No hostname configured, using default hostname. 
Dec 13 14:36:27.027442 systemd[1]: Hostname set to <localhost>. Dec 13 14:36:27.027451 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:36:27.027459 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:36:27.027468 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:36:27.027476 systemd[1]: Reached target cryptsetup.target. Dec 13 14:36:27.027484 systemd[1]: Reached target paths.target. Dec 13 14:36:27.027494 systemd[1]: Reached target slices.target. Dec 13 14:36:27.027502 systemd[1]: Reached target swap.target. Dec 13 14:36:27.027511 systemd[1]: Reached target timers.target. Dec 13 14:36:27.027519 systemd[1]: Listening on iscsid.socket. Dec 13 14:36:27.027528 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:36:27.027536 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:36:27.027545 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:36:27.027555 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:36:27.027563 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:36:27.027572 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:36:27.027580 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:36:27.027589 systemd[1]: Reached target sockets.target. Dec 13 14:36:27.027606 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:36:27.027618 systemd[1]: Finished network-cleanup.service. Dec 13 14:36:27.027628 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:36:27.027637 systemd[1]: Starting systemd-journald.service... Dec 13 14:36:27.027645 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:36:27.027654 systemd[1]: Starting systemd-resolved.service... Dec 13 14:36:27.027663 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:36:27.027671 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:36:27.027680 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:36:27.027692 systemd-journald[184]: Journal started Dec 13 14:36:27.027737 systemd-journald[184]: Runtime Journal (/run/log/journal/3557331bb9d04a65b5c71633788c3de7) is 4.9M, max 39.5M, 34.5M free. Dec 13 14:36:26.991603 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 14:36:27.058499 systemd[1]: Started systemd-journald.service. Dec 13 14:36:27.058531 kernel: audit: type=1130 audit(1734100587.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.058546 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:36:27.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.040031 systemd-resolved[186]: Positive Trust Anchors: Dec 13 14:36:27.062649 kernel: audit: type=1130 audit(1734100587.058:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.040040 systemd-resolved[186]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:36:27.066999 kernel: audit: type=1130 audit(1734100587.062:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.040075 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:36:27.074488 kernel: audit: type=1130 audit(1734100587.067:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.074505 kernel: Bridge firewalling registered Dec 13 14:36:27.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.044104 systemd-resolved[186]: Defaulting to hostname 'linux'. Dec 13 14:36:27.059057 systemd[1]: Started systemd-resolved.service. Dec 13 14:36:27.063311 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:36:27.068270 systemd[1]: Reached target nss-lookup.target. Dec 13 14:36:27.069781 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 14:36:27.075876 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:36:27.080989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:36:27.091040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:36:27.095435 kernel: audit: type=1130 audit(1734100587.091:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.103233 kernel: audit: type=1130 audit(1734100587.099:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.099547 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:36:27.104754 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:36:27.105312 kernel: SCSI subsystem initialized Dec 13 14:36:27.115088 dracut-cmdline[201]: dracut-dracut-053 Dec 13 14:36:27.115750 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:36:27.123606 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:36:27.123632 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:36:27.125189 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:36:27.130626 systemd-modules-load[185]: Inserted module 'dm_multipath' Dec 13 14:36:27.138505 kernel: audit: type=1130 audit(1734100587.132:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.132655 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:36:27.134010 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:36:27.146201 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:36:27.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.151247 kernel: audit: type=1130 audit(1734100587.146:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.177286 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:36:27.197254 kernel: iscsi: registered transport (tcp) Dec 13 14:36:27.224615 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:36:27.224674 kernel: QLogic iSCSI HBA Driver Dec 13 14:36:27.278069 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:36:27.287915 kernel: audit: type=1130 audit(1734100587.278:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.279636 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:36:27.337314 kernel: raid6: sse2x4 gen() 13050 MB/s Dec 13 14:36:27.354306 kernel: raid6: sse2x4 xor() 5067 MB/s Dec 13 14:36:27.371308 kernel: raid6: sse2x2 gen() 14267 MB/s Dec 13 14:36:27.388307 kernel: raid6: sse2x2 xor() 8723 MB/s Dec 13 14:36:27.405308 kernel: raid6: sse2x1 gen() 11005 MB/s Dec 13 14:36:27.423079 kernel: raid6: sse2x1 xor() 6822 MB/s Dec 13 14:36:27.423137 kernel: raid6: using algorithm sse2x2 gen() 14267 MB/s Dec 13 14:36:27.423164 kernel: raid6: .... 
xor() 8723 MB/s, rmw enabled Dec 13 14:36:27.423884 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 14:36:27.439373 kernel: xor: measuring software checksum speed Dec 13 14:36:27.439433 kernel: prefetch64-sse : 17137 MB/sec Dec 13 14:36:27.440429 kernel: generic_sse : 15880 MB/sec Dec 13 14:36:27.440485 kernel: xor: using function: prefetch64-sse (17137 MB/sec) Dec 13 14:36:27.558277 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:36:27.574634 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:36:27.576336 systemd[1]: Starting systemd-udevd.service... Dec 13 14:36:27.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.575000 audit: BPF prog-id=7 op=LOAD Dec 13 14:36:27.575000 audit: BPF prog-id=8 op=LOAD Dec 13 14:36:27.612846 systemd-udevd[383]: Using default interface naming scheme 'v252'. Dec 13 14:36:27.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.624352 systemd[1]: Started systemd-udevd.service. Dec 13 14:36:27.630424 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:36:27.655487 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Dec 13 14:36:27.706402 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:36:27.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.709438 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:36:27.772998 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:36:27.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:27.855246 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB) Dec 13 14:36:27.864435 kernel: libata version 3.00 loaded. Dec 13 14:36:27.864452 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:36:27.864463 kernel: GPT:17805311 != 41943039 Dec 13 14:36:27.864473 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:36:27.864483 kernel: GPT:17805311 != 41943039 Dec 13 14:36:27.864493 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:36:27.864516 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:36:27.872368 kernel: ata_piix 0000:00:01.1: version 2.13 Dec 13 14:36:27.903990 kernel: scsi host0: ata_piix Dec 13 14:36:27.904117 kernel: scsi host1: ata_piix Dec 13 14:36:27.904259 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Dec 13 14:36:27.904274 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Dec 13 14:36:27.904285 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (434) Dec 13 14:36:27.901560 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:36:27.937479 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:36:27.938028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Dec 13 14:36:27.942640 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:36:27.946782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:36:27.948210 systemd[1]: Starting disk-uuid.service... Dec 13 14:36:27.959463 disk-uuid[459]: Primary Header is updated. Dec 13 14:36:27.959463 disk-uuid[459]: Secondary Entries is updated. Dec 13 14:36:27.959463 disk-uuid[459]: Secondary Header is updated. Dec 13 14:36:27.970274 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:36:27.974932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:36:28.988272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:36:28.989051 disk-uuid[460]: The operation has completed successfully. Dec 13 14:36:29.055938 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:36:29.057751 systemd[1]: Finished disk-uuid.service. Dec 13 14:36:29.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.087867 systemd[1]: Starting verity-setup.service... Dec 13 14:36:29.125281 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Dec 13 14:36:29.255510 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:36:29.258590 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:36:29.261861 systemd[1]: Finished verity-setup.service. Dec 13 14:36:29.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.403291 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:36:29.404382 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:36:29.404997 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:36:29.405930 systemd[1]: Starting ignition-setup.service... Dec 13 14:36:29.407061 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:36:29.435829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:36:29.435883 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:36:29.435895 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:36:29.452832 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:36:29.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.466084 systemd[1]: Finished ignition-setup.service. Dec 13 14:36:29.467577 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:36:29.566780 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:36:29.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.570000 audit: BPF prog-id=9 op=LOAD Dec 13 14:36:29.571834 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:36:29.624471 systemd-networkd[633]: lo: Link UP Dec 13 14:36:29.625265 systemd-networkd[633]: lo: Gained carrier Dec 13 14:36:29.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.626577 systemd-networkd[633]: Enumeration completed Dec 13 14:36:29.626664 systemd[1]: Started systemd-networkd.service. Dec 13 14:36:29.627177 systemd[1]: Reached target network.target. Dec 13 14:36:29.628446 systemd[1]: Starting iscsiuio.service... Dec 13 14:36:29.628921 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:36:29.636390 systemd-networkd[633]: eth0: Link UP Dec 13 14:36:29.636396 systemd-networkd[633]: eth0: Gained carrier Dec 13 14:36:29.641088 systemd[1]: Started iscsiuio.service. Dec 13 14:36:29.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.642613 systemd[1]: Starting iscsid.service... Dec 13 14:36:29.647154 iscsid[642]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:36:29.647154 iscsid[642]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:36:29.647154 iscsid[642]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:36:29.647154 iscsid[642]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:36:29.647154 iscsid[642]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:36:29.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.654001 iscsid[642]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:36:29.650155 systemd[1]: Started iscsid.service. Dec 13 14:36:29.651854 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:36:29.656319 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.236/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 14:36:29.668288 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:36:29.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.668960 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:36:29.669831 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:36:29.670995 systemd[1]: Reached target remote-fs.target. Dec 13 14:36:29.673389 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:36:29.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.684562 systemd[1]: Finished dracut-pre-mount.service. 
Dec 13 14:36:29.789401 ignition[557]: Ignition 2.14.0 Dec 13 14:36:29.790333 ignition[557]: Stage: fetch-offline Dec 13 14:36:29.790524 ignition[557]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:29.790567 ignition[557]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:29.792777 ignition[557]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:29.793074 ignition[557]: parsed url from cmdline: "" Dec 13 14:36:29.793083 ignition[557]: no config URL provided Dec 13 14:36:29.793096 ignition[557]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:36:29.793115 ignition[557]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:36:29.793134 ignition[557]: failed to fetch config: resource requires networking Dec 13 14:36:29.795522 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:36:29.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:29.793826 ignition[557]: Ignition finished successfully Dec 13 14:36:29.798906 systemd[1]: Starting ignition-fetch.service... Dec 13 14:36:29.815691 ignition[656]: Ignition 2.14.0 Dec 13 14:36:29.815718 ignition[656]: Stage: fetch Dec 13 14:36:29.815966 ignition[656]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:29.816010 ignition[656]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:29.818281 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:29.818503 ignition[656]: parsed url from cmdline: "" Dec 13 14:36:29.818512 ignition[656]: no config URL provided Dec 13 14:36:29.818526 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:36:29.818545 ignition[656]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:36:29.824738 ignition[656]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 14:36:29.824794 ignition[656]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 14:36:29.824994 ignition[656]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 14:36:30.158112 ignition[656]: GET result: OK Dec 13 14:36:30.158496 ignition[656]: parsing config with SHA512: 4dde6b09b8569500d5d8b1e84a428be95b0f305eb6cb5edc4b4bcc470342bcce92b2df96e93e315287ff7511002abeb8b9c34182376571329eedfa9b43da0fae Dec 13 14:36:30.179651 unknown[656]: fetched base config from "system" Dec 13 14:36:30.180323 unknown[656]: fetched base config from "system" Dec 13 14:36:30.180343 unknown[656]: fetched user config from "openstack" Dec 13 14:36:30.181557 ignition[656]: fetch: fetch complete Dec 13 14:36:30.181571 ignition[656]: fetch: fetch passed Dec 13 14:36:30.184972 systemd[1]: Finished ignition-fetch.service. Dec 13 14:36:30.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.181677 ignition[656]: Ignition finished successfully Dec 13 14:36:30.188988 systemd[1]: Starting ignition-kargs.service... 
Dec 13 14:36:30.209896 ignition[662]: Ignition 2.14.0 Dec 13 14:36:30.209923 ignition[662]: Stage: kargs Dec 13 14:36:30.210175 ignition[662]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:30.210335 ignition[662]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:30.212620 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:30.215506 ignition[662]: kargs: kargs passed Dec 13 14:36:30.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.225130 systemd[1]: Finished ignition-kargs.service. Dec 13 14:36:30.215605 ignition[662]: Ignition finished successfully Dec 13 14:36:30.228754 systemd[1]: Starting ignition-disks.service... Dec 13 14:36:30.253138 ignition[668]: Ignition 2.14.0 Dec 13 14:36:30.253167 ignition[668]: Stage: disks Dec 13 14:36:30.253498 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:30.253546 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:30.255807 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:30.258596 ignition[668]: disks: disks passed Dec 13 14:36:30.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.260404 systemd[1]: Finished ignition-disks.service. Dec 13 14:36:30.258694 ignition[668]: Ignition finished successfully Dec 13 14:36:30.262292 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:36:30.264251 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:36:30.266527 systemd[1]: Reached target local-fs.target. Dec 13 14:36:30.268707 systemd[1]: Reached target sysinit.target. Dec 13 14:36:30.270929 systemd[1]: Reached target basic.target. Dec 13 14:36:30.275071 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:36:30.310203 systemd-fsck[676]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:36:30.322290 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:36:30.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.325346 systemd[1]: Mounting sysroot.mount... Dec 13 14:36:30.344597 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:36:30.343599 systemd[1]: Mounted sysroot.mount. Dec 13 14:36:30.345598 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:36:30.349454 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:36:30.351423 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:36:30.352900 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 14:36:30.357960 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:36:30.358041 systemd[1]: Reached target ignition-diskful.target. 
Dec 13 14:36:30.366585 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:36:30.374665 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:36:30.379390 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:36:30.397744 initrd-setup-root[688]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:36:30.405248 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (683) Dec 13 14:36:30.410794 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:36:30.410829 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:36:30.410842 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:36:30.415775 initrd-setup-root[699]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:36:30.427641 initrd-setup-root[720]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:36:30.434378 initrd-setup-root[730]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:36:30.440299 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:36:30.526133 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:36:30.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.529149 systemd[1]: Starting ignition-mount.service... Dec 13 14:36:30.532311 systemd[1]: Starting sysroot-boot.service... Dec 13 14:36:30.546529 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:36:30.546865 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:36:30.570520 coreos-metadata[682]: Dec 13 14:36:30.570 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 14:36:30.575456 ignition[751]: INFO : Ignition 2.14.0 Dec 13 14:36:30.575456 ignition[751]: INFO : Stage: mount Dec 13 14:36:30.575456 ignition[751]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:30.575456 ignition[751]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:30.579619 ignition[751]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:30.579619 ignition[751]: INFO : mount: mount passed Dec 13 14:36:30.579619 ignition[751]: INFO : Ignition finished successfully Dec 13 14:36:30.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.578480 systemd[1]: Finished ignition-mount.service. Dec 13 14:36:30.590655 systemd[1]: Finished sysroot-boot.service. Dec 13 14:36:30.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.591993 coreos-metadata[682]: Dec 13 14:36:30.591 INFO Fetch successful Dec 13 14:36:30.593330 coreos-metadata[682]: Dec 13 14:36:30.593 INFO wrote hostname ci-3510-3-6-c-262737d7bc.novalocal to /sysroot/etc/hostname Dec 13 14:36:30.596636 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 14:36:30.596773 systemd[1]: Finished flatcar-openstack-hostname.service. 
Dec 13 14:36:30.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:30.598990 systemd[1]: Starting ignition-files.service... Dec 13 14:36:30.606556 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:36:30.618357 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760) Dec 13 14:36:30.622174 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:36:30.622197 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:36:30.622209 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:36:30.628950 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:36:30.639654 ignition[779]: INFO : Ignition 2.14.0 Dec 13 14:36:30.639654 ignition[779]: INFO : Stage: files Dec 13 14:36:30.640788 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:30.640788 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:30.640788 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:30.643452 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:36:30.644167 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:36:30.644167 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:36:30.647657 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:36:30.648843 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:36:30.651147 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:36:30.651147 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:36:30.651147 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:36:30.648993 unknown[779]: wrote ssh authorized keys file for user: core Dec 13 14:36:30.721911 systemd-networkd[633]: eth0: Gained IPv6LL Dec 13 14:36:30.726451 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:36:31.025533 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:36:31.027280 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:36:31.028152 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:36:31.636127 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:36:32.179728 ignition[779]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:36:32.179728 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:36:32.184376 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:36:32.784478 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 14:36:34.616274 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:36:34.616274 ignition[779]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:36:34.616274 ignition[779]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:36:34.616274 ignition[779]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Dec 13 
14:36:34.633041 ignition[779]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:36:34.633041 ignition[779]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:36:34.633041 ignition[779]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:36:34.633041 ignition[779]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:36:34.633041 ignition[779]: INFO : files: files passed Dec 13 14:36:34.633041 ignition[779]: INFO : Ignition finished successfully Dec 13 14:36:34.696622 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 14:36:34.696668 kernel: audit: type=1130 audit(1734100594.634:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.696699 kernel: audit: type=1130 audit(1734100594.657:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.696738 kernel: audit: type=1130 audit(1734100594.668:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.696766 kernel: audit: type=1131 audit(1734100594.668:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.631661 systemd[1]: Finished ignition-files.service. Dec 13 14:36:34.635839 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:36:34.650539 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
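The SERVICE_START/SERVICE_STOP records interleaved above (and their kernel-side "audit: type=1130/1131" duplicates, emitted once kauditd stops suppressing them) share a flat key=value grammar. A small illustrative parser, assuming only that layout (this is not the auditd/ausearch API):

import re

record = ("audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
          "subj=kernel msg='unit=ignition-files comm=\"systemd\" "
          "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

# Values are either bare tokens, "double-quoted", or 'single-quoted';
# the quoted msg field keeps its nested key=value payload intact.
fields = dict(re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", record))
print(fields["pid"], fields["subj"])
print(fields["msg"])   # -> 'unit=ignition-files comm="systemd" ... res=success'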
Dec 13 14:36:34.702529 initrd-setup-root-after-ignition[802]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:36:34.716332 kernel: audit: type=1130 audit(1734100594.702:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.716348 kernel: audit: type=1131 audit(1734100594.702:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.652126 systemd[1]: Starting ignition-quench.service... Dec 13 14:36:34.653877 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:36:34.657715 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:36:34.657806 systemd[1]: Finished ignition-quench.service. Dec 13 14:36:34.668517 systemd[1]: Reached target ignition-complete.target. Dec 13 14:36:34.687155 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:36:34.701027 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:36:34.701106 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:36:34.702864 systemd[1]: Reached target initrd-fs.target. Dec 13 14:36:34.716758 systemd[1]: Reached target initrd.target. Dec 13 14:36:34.717686 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:36:34.718337 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:36:34.728914 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:36:34.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.733251 kernel: audit: type=1130 audit(1734100594.729:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.733879 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:36:34.743242 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:36:34.744256 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:36:34.745315 systemd[1]: Stopped target timers.target. Dec 13 14:36:34.746284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:36:34.746927 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:36:34.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.751237 kernel: audit: type=1131 audit(1734100594.747:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.751293 systemd[1]: Stopped target initrd.target. Dec 13 14:36:34.752347 systemd[1]: Stopped target basic.target. 
Dec 13 14:36:34.752905 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:36:34.753897 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:36:34.754754 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:36:34.755608 systemd[1]: Stopped target remote-fs.target. Dec 13 14:36:34.756447 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:36:34.757301 systemd[1]: Stopped target sysinit.target. Dec 13 14:36:34.758097 systemd[1]: Stopped target local-fs.target. Dec 13 14:36:34.758921 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:36:34.759788 systemd[1]: Stopped target swap.target. Dec 13 14:36:34.760555 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:36:34.764945 kernel: audit: type=1131 audit(1734100594.761:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.760709 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:36:34.761555 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:36:34.769824 kernel: audit: type=1131 audit(1734100594.766:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.765439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:36:34.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.765577 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:36:34.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.766391 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:36:34.766538 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:36:34.770469 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:36:34.777122 iscsid[642]: iscsid shutting down. Dec 13 14:36:34.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.770617 systemd[1]: Stopped ignition-files.service. Dec 13 14:36:34.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:34.789656 ignition[817]: INFO : Ignition 2.14.0 Dec 13 14:36:34.789656 ignition[817]: INFO : Stage: umount Dec 13 14:36:34.789656 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:36:34.789656 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 14:36:34.789656 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 14:36:34.789656 ignition[817]: INFO : umount: umount passed Dec 13 14:36:34.789656 ignition[817]: INFO : Ignition finished successfully Dec 13 14:36:34.772110 systemd[1]: Stopping ignition-mount.service... Dec 13 14:36:34.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.772972 systemd[1]: Stopping iscsid.service... Dec 13 14:36:34.782511 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:36:34.783006 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:36:34.783171 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:36:34.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.783943 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:36:34.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.784113 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:36:34.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.786839 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:36:34.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.786929 systemd[1]: Stopped iscsid.service. Dec 13 14:36:34.790308 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:36:34.790385 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:36:34.800198 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:36:34.800287 systemd[1]: Stopped ignition-mount.service. Dec 13 14:36:34.802336 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:36:34.802634 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:36:34.802668 systemd[1]: Stopped ignition-disks.service. Dec 13 14:36:34.803548 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:36:34.803585 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 14:36:34.804458 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:36:34.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.804491 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:36:34.805435 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:36:34.805470 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:36:34.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.806431 systemd[1]: Stopped target paths.target. Dec 13 14:36:34.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.807269 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:36:34.810099 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:36:34.810735 systemd[1]: Stopped target slices.target. Dec 13 14:36:34.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.811604 systemd[1]: Stopped target sockets.target. Dec 13 14:36:34.812477 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:36:34.812509 systemd[1]: Closed iscsid.socket. Dec 13 14:36:34.813330 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:36:34.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.813382 systemd[1]: Stopped ignition-setup.service. Dec 13 14:36:34.814869 systemd[1]: Stopping iscsiuio.service... Dec 13 14:36:34.816984 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:36:34.817067 systemd[1]: Stopped iscsiuio.service. Dec 13 14:36:34.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.818139 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:36:34.818241 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:36:34.818915 systemd[1]: Stopped target network.target. Dec 13 14:36:34.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.819703 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:36:34.819731 systemd[1]: Closed iscsiuio.socket. 
Dec 13 14:36:34.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.820684 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:36:34.839000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:36:34.820718 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:36:34.821673 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:36:34.822719 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:36:34.824257 systemd-networkd[633]: eth0: DHCPv6 lease lost Dec 13 14:36:34.842000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:36:34.825095 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:36:34.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.825187 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:36:34.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.827257 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:36:34.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.827319 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:36:34.828456 systemd[1]: Stopping network-cleanup.service... Dec 13 14:36:34.829208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:36:34.829300 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:36:34.831409 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:36:34.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.831446 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:36:34.831917 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:36:34.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.831951 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:36:34.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.832487 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:36:34.834612 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:36:34.835111 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:36:34.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.835278 systemd[1]: Stopped systemd-resolved.service. 
Dec 13 14:36:34.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:34.837447 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:36:34.837587 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:36:34.839625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:36:34.839679 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:36:34.842179 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:36:34.842236 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:36:34.842951 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:36:34.843005 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:36:34.843844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:36:34.843880 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:36:34.844724 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:36:34.844760 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:36:34.846180 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:36:34.846979 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:36:34.847041 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:36:34.852611 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:36:34.852654 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:36:34.854015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:36:34.854066 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:36:34.855817 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:36:34.856424 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:36:34.856503 systemd[1]: Stopped network-cleanup.service. Dec 13 14:36:34.857169 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:36:34.857257 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:36:34.858014 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:36:34.859370 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:36:34.878849 systemd[1]: Switching root. Dec 13 14:36:34.896857 systemd-journald[184]: Journal stopped Dec 13 14:36:38.885761 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Dec 13 14:36:38.885825 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:36:38.885841 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:36:38.885853 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:36:38.885869 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:36:38.885881 kernel: SELinux: policy capability open_perms=1 Dec 13 14:36:38.885893 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:36:38.885904 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:36:38.885923 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:36:38.885934 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:36:38.885945 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:36:38.885956 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:36:38.885969 systemd[1]: Successfully loaded SELinux policy in 89.250ms. Dec 13 14:36:38.885986 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.432ms. Dec 13 14:36:38.886001 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:36:38.886013 systemd[1]: Detected virtualization kvm. Dec 13 14:36:38.886028 systemd[1]: Detected architecture x86-64. Dec 13 14:36:38.886040 systemd[1]: Detected first boot. Dec 13 14:36:38.886053 systemd[1]: Hostname set to . Dec 13 14:36:38.886065 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:36:38.886077 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:36:38.886089 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:36:38.886103 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:36:38.886120 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:36:38.886136 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:36:38.886150 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:36:38.886162 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:36:38.886175 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:36:38.886188 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:36:38.886201 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:36:38.886233 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:36:38.886254 systemd[1]: Created slice system-getty.slice. Dec 13 14:36:38.886266 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:36:38.886278 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:36:38.886291 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:36:38.886304 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:36:38.886315 systemd[1]: Created slice user.slice. Dec 13 14:36:38.886327 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:36:38.886339 systemd[1]: Started systemd-ask-password-wall.path. 
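The "SELinux: policy capability ...=" lines above can be cross-checked from userspace once selinuxfs is available; a minimal sketch, assuming the usual mount point /sys/fs/selinux:

import os

CAPDIR = "/sys/fs/selinux/policy_capabilities"
for name in sorted(os.listdir(CAPDIR)):
    with open(os.path.join(CAPDIR, name)) as f:
        # Mirrors lines like "policy capability network_peer_controls=1".
        print(f"policy capability {name}={f.read().strip()}")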
Dec 13 14:36:38.886351 systemd[1]: Set up automount boot.automount. Dec 13 14:36:38.886367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:36:38.886381 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:36:38.886393 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:36:38.886405 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:36:38.886418 systemd[1]: Reached target integritysetup.target. Dec 13 14:36:38.886431 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:36:38.886444 systemd[1]: Reached target remote-fs.target. Dec 13 14:36:38.886456 systemd[1]: Reached target slices.target. Dec 13 14:36:38.886467 systemd[1]: Reached target swap.target. Dec 13 14:36:38.886480 systemd[1]: Reached target torcx.target. Dec 13 14:36:38.886491 systemd[1]: Reached target veritysetup.target. Dec 13 14:36:38.886503 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:36:38.886514 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:36:38.886526 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:36:38.886537 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:36:38.886550 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:36:38.886561 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:36:38.886572 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:36:38.886584 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:36:38.886595 systemd[1]: Mounting media.mount... Dec 13 14:36:38.886607 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:38.886619 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:36:38.886630 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:36:38.886641 systemd[1]: Mounting tmp.mount... Dec 13 14:36:38.886654 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:36:38.886665 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:36:38.886676 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:36:38.886688 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:36:38.886699 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:36:38.886710 systemd[1]: Starting modprobe@drm.service... Dec 13 14:36:38.886722 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:36:38.886733 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:36:38.886744 systemd[1]: Starting modprobe@loop.service... Dec 13 14:36:38.886758 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:36:38.886769 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:36:38.886781 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:36:38.886792 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:36:38.886803 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:36:38.886814 systemd[1]: Stopped systemd-journald.service. Dec 13 14:36:38.886825 systemd[1]: Starting systemd-journald.service... Dec 13 14:36:38.886836 kernel: loop: module loaded Dec 13 14:36:38.886847 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:36:38.886859 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:36:38.886871 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:36:38.886883 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:36:38.886894 systemd[1]: verity-setup.service: Deactivated successfully. 
Dec 13 14:36:38.886905 systemd[1]: Stopped verity-setup.service. Dec 13 14:36:38.886917 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:38.886928 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:36:38.886939 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:36:38.886951 systemd[1]: Mounted media.mount. Dec 13 14:36:38.886963 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:36:38.886976 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:36:38.886987 systemd[1]: Mounted tmp.mount. Dec 13 14:36:38.886999 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:36:38.887010 kernel: fuse: init (API version 7.34) Dec 13 14:36:38.887021 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:36:38.887032 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:36:38.887043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:36:38.887057 systemd-journald[934]: Journal started Dec 13 14:36:38.887104 systemd-journald[934]: Runtime Journal (/run/log/journal/3557331bb9d04a65b5c71633788c3de7) is 4.9M, max 39.5M, 34.5M free. Dec 13 14:36:35.182000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:36:35.305000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:36:35.305000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:36:35.305000 audit: BPF prog-id=10 op=LOAD Dec 13 14:36:35.305000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:36:35.305000 audit: BPF prog-id=11 op=LOAD Dec 13 14:36:35.305000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:36:35.448000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:36:35.448000 audit[849]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:35.448000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:36:35.450000 audit[849]: AVC avc: denied { associate } for pid=849 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:36:35.450000 audit[849]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=832 pid=849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:35.450000 audit: CWD cwd="/" Dec 13 
14:36:35.450000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:35.450000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:35.450000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:36:38.668000 audit: BPF prog-id=12 op=LOAD Dec 13 14:36:38.668000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:36:38.668000 audit: BPF prog-id=13 op=LOAD Dec 13 14:36:38.668000 audit: BPF prog-id=14 op=LOAD Dec 13 14:36:38.668000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:36:38.668000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:36:38.669000 audit: BPF prog-id=15 op=LOAD Dec 13 14:36:38.669000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:36:38.669000 audit: BPF prog-id=16 op=LOAD Dec 13 14:36:38.669000 audit: BPF prog-id=17 op=LOAD Dec 13 14:36:38.669000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:36:38.669000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:36:38.670000 audit: BPF prog-id=18 op=LOAD Dec 13 14:36:38.670000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:36:38.670000 audit: BPF prog-id=19 op=LOAD Dec 13 14:36:38.670000 audit: BPF prog-id=20 op=LOAD Dec 13 14:36:38.670000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:36:38.670000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:36:38.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.678000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:36:38.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:38.833000 audit: BPF prog-id=21 op=LOAD Dec 13 14:36:38.833000 audit: BPF prog-id=22 op=LOAD Dec 13 14:36:38.833000 audit: BPF prog-id=23 op=LOAD Dec 13 14:36:38.833000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:36:38.833000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:36:38.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.879000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:36:38.879000 audit[934]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc0bcc1180 a2=4000 a3=7ffc0bcc121c items=0 ppid=1 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:38.879000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:36:38.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:35.445679 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:36:38.666555 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:36:38.892123 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:36:38.892145 systemd[1]: Started systemd-journald.service. Dec 13 14:36:38.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:35.446500 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:36:38.666567 systemd[1]: Unnecessary job was removed for dev-vda6.device. 
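The audit PROCTITLE records above carry torcx-generator's argv hex-encoded with NUL separators, cut off at the kernel's 128-byte proctitle limit. Decoding them is mechanical; an illustrative snippet:

hexstr = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
          "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261"
          "746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72"
          "756E2F73797374656D642F67656E657261746F722E6C61")
argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
print(argv)
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']   <- last argument truncated in the record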
Dec 13 14:36:35.446521 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:36:38.671118 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:36:35.446553 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:36:38.892244 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:36:35.446564 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:36:38.892380 systemd[1]: Finished modprobe@drm.service. Dec 13 14:36:38.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:35.446595 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:36:35.446609 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:36:35.446802 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:36:38.893125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:36:35.446842 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:36:38.893306 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:36:35.446857 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:36:35.447805 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:36:35.447841 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:36:38.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:35.447862 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:36:35.447879 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:36:35.447897 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:36:35.447912 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:36:38.310015 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:36:38.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.894111 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:36:38.311000 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:36:38.894234 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:36:38.311368 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:36:38.894949 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:36:38.312422 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:36:38.896033 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:36:38.312570 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:36:38.896146 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:36:38.312740 /usr/lib/systemd/system-generators/torcx-generator[849]: time="2024-12-13T14:36:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:36:38.897092 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:36:38.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.898786 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:36:38.899764 systemd[1]: Reached target network-pre.target. Dec 13 14:36:38.902166 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:36:38.903849 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:36:38.907495 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:36:38.910638 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:36:38.912365 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:36:38.912924 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:36:38.914067 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:36:38.914672 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:36:38.916276 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:36:38.918179 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:36:38.920856 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:36:38.925406 systemd-journald[934]: Time spent on flushing to /var/log/journal/3557331bb9d04a65b5c71633788c3de7 is 26.153ms for 1106 entries. Dec 13 14:36:38.925406 systemd-journald[934]: System Journal (/var/log/journal/3557331bb9d04a65b5c71633788c3de7) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:36:38.967147 systemd-journald[934]: Received client request to flush runtime journal. 
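A quick sanity check on the journald flush report above: 26.153 ms for 1106 entries works out to roughly 24 microseconds per entry, and the quoted System Journal numbers are self-consistent:

# Per-entry flush cost and size bookkeeping from the report above.
print(f"{26.153e-3 / 1106 * 1e6:.1f} us/entry")   # ~23.6 us per entry
print(f"{584.8 - 8.0:.1f} MiB free")              # matches "576.8M free"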
Dec 13 14:36:38.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.927596 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:36:38.929353 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:36:38.954003 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:36:38.954594 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:36:38.962592 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:36:38.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.968519 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:36:38.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.986949 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:36:38.988568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:36:38.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:38.994709 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:36:38.996189 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:36:39.004391 udevadm[960]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:36:39.031852 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:36:39.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.563428 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:36:39.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.565000 audit: BPF prog-id=24 op=LOAD Dec 13 14:36:39.565000 audit: BPF prog-id=25 op=LOAD Dec 13 14:36:39.565000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:36:39.565000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:36:39.567397 systemd[1]: Starting systemd-udevd.service... Dec 13 14:36:39.607259 systemd-udevd[961]: Using default interface naming scheme 'v252'. Dec 13 14:36:39.661506 systemd[1]: Started systemd-udevd.service. 
Dec 13 14:36:39.670154 kernel: kauditd_printk_skb: 113 callbacks suppressed Dec 13 14:36:39.670297 kernel: audit: type=1130 audit(1734100599.666:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.687877 systemd[1]: Starting systemd-networkd.service... Dec 13 14:36:39.668000 audit: BPF prog-id=26 op=LOAD Dec 13 14:36:39.704289 kernel: audit: type=1334 audit(1734100599.668:153): prog-id=26 op=LOAD Dec 13 14:36:39.715596 kernel: audit: type=1334 audit(1734100599.710:154): prog-id=27 op=LOAD Dec 13 14:36:39.717470 kernel: audit: type=1334 audit(1734100599.712:155): prog-id=28 op=LOAD Dec 13 14:36:39.717532 kernel: audit: type=1334 audit(1734100599.713:156): prog-id=29 op=LOAD Dec 13 14:36:39.710000 audit: BPF prog-id=27 op=LOAD Dec 13 14:36:39.712000 audit: BPF prog-id=28 op=LOAD Dec 13 14:36:39.713000 audit: BPF prog-id=29 op=LOAD Dec 13 14:36:39.715990 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:36:39.744079 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:36:39.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.778962 systemd[1]: Started systemd-userdbd.service. Dec 13 14:36:39.784270 kernel: audit: type=1130 audit(1734100599.779:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:39.852000 audit[965]: AVC avc: denied { confidentiality } for pid=965 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:36:39.869258 kernel: audit: type=1400 audit(1734100599.852:158): avc: denied { confidentiality } for pid=965 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:36:39.852000 audit[965]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559c261b5480 a1=337fc a2=7f6984e8bbc5 a3=5 items=110 ppid=961 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:39.852000 audit: CWD cwd="/" Dec 13 14:36:39.882271 kernel: audit: type=1300 audit(1734100599.852:158): arch=c000003e syscall=175 success=yes exit=0 a0=559c261b5480 a1=337fc a2=7f6984e8bbc5 a3=5 items=110 ppid=961 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:39.882315 kernel: audit: type=1307 audit(1734100599.852:158): cwd="/" Dec 13 14:36:39.882333 kernel: audit: type=1302 audit(1734100599.852:158): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.881577 systemd-networkd[970]: lo: Link UP Dec 13 14:36:39.881585 systemd-networkd[970]: lo: Gained carrier Dec 13 14:36:39.882102 systemd-networkd[970]: Enumeration completed Dec 13 14:36:39.882245 systemd[1]: Started systemd-networkd.service. Dec 13 14:36:39.883476 systemd-networkd[970]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:36:39.852000 audit: PATH item=1 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=2 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=3 name=(null) inode=13186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=4 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=5 name=(null) inode=13187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=6 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=7 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=8 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=9 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=10 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=11 name=(null) inode=13190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=12 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=13 name=(null) inode=13191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=14 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=15 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=16 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=17 name=(null) inode=13193 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=18 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=19 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=20 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=21 name=(null) inode=13195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=22 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=23 name=(null) inode=13196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=24 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=25 name=(null) inode=13197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=26 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=27 name=(null) inode=13198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=28 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=29 name=(null) inode=13199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=30 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=31 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=32 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=33 name=(null) inode=13201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=34 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=35 name=(null) inode=13202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=36 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=37 name=(null) inode=13203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=38 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=39 name=(null) inode=13204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=40 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=41 name=(null) inode=13205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=42 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=43 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=44 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=45 name=(null) inode=13207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=46 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=47 name=(null) inode=13208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=48 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=49 name=(null) inode=13209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:36:39.852000 audit: PATH item=50 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=51 name=(null) inode=13210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=52 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=53 name=(null) inode=13211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=55 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=56 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=57 name=(null) inode=13213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=58 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=59 name=(null) inode=13214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=60 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=61 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=62 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=63 name=(null) inode=13216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=64 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:39.889614 systemd-networkd[970]: eth0: Link UP Dec 13 14:36:39.889620 systemd-networkd[970]: eth0: Gained carrier Dec 13 14:36:39.852000 audit: PATH item=65 name=(null) inode=13217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=66 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=67 name=(null) inode=13218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=68 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=69 name=(null) inode=13219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=70 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=71 name=(null) inode=13220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=72 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=73 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=74 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=75 name=(null) inode=13222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=76 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=77 name=(null) inode=13223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=78 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=79 name=(null) inode=13224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=80 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=81 name=(null) inode=13225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=82 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=83 name=(null) inode=13226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=84 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=85 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=86 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=87 name=(null) inode=13228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=88 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=89 name=(null) inode=13229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=90 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=91 name=(null) inode=13230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=92 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=93 name=(null) inode=13231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=94 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=95 name=(null) inode=13232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=96 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:36:39.852000 audit: PATH item=97 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=98 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=99 name=(null) inode=13234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=100 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=101 name=(null) inode=13235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=102 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=103 name=(null) inode=13236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=104 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=105 name=(null) inode=13237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=106 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=107 name=(null) inode=13238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PATH item=109 name=(null) inode=14137 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:36:39.852000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:36:39.900269 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:36:39.904260 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:36:39.906920 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Dec 13 14:36:39.907755 systemd-networkd[970]: eth0: DHCPv4 address 172.24.4.236/24, gateway 172.24.4.1 acquired from 172.24.4.1 Dec 13 14:36:39.915241 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:36:39.917247 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Dec 13 14:36:39.922598 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:36:39.962607 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:36:39.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:39.964196 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:36:39.992940 lvm[990]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:36:40.018205 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:36:40.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:40.018819 systemd[1]: Reached target cryptsetup.target. Dec 13 14:36:40.020356 systemd[1]: Starting lvm2-activation.service... Dec 13 14:36:40.024103 lvm[991]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:36:40.048995 systemd[1]: Finished lvm2-activation.service. Dec 13 14:36:40.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:40.049559 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:36:40.050680 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:36:40.050739 systemd[1]: Reached target local-fs.target. Dec 13 14:36:40.051814 systemd[1]: Reached target machines.target. Dec 13 14:36:40.055281 systemd[1]: Starting ldconfig.service... Dec 13 14:36:40.057820 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:36:40.057914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:40.061337 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:36:40.062971 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:36:40.064491 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:36:40.065844 systemd[1]: Starting systemd-sysext.service... Dec 13 14:36:40.076268 systemd[1]: boot.automount: Got automount request for /boot, triggered by 993 (bootctl) Dec 13 14:36:40.077408 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:36:40.112079 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:36:40.116013 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:36:40.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:40.178555 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:36:40.178927 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:36:40.230399 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:36:41.025846 systemd-networkd[970]: eth0: Gained IPv6LL Dec 13 14:36:41.238854 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:36:41.240144 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:36:41.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.289591 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:36:41.323291 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:36:41.367775 (sd-sysext)[1006]: Using extensions 'kubernetes'. Dec 13 14:36:41.371997 (sd-sysext)[1006]: Merged extensions into '/usr'. Dec 13 14:36:41.400914 systemd-fsck[1003]: fsck.fat 4.2 (2021-01-31) Dec 13 14:36:41.400914 systemd-fsck[1003]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:36:41.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.420653 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:36:41.422709 systemd[1]: Mounting boot.mount... Dec 13 14:36:41.423159 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.424339 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:36:41.425521 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.427500 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:36:41.429473 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:36:41.432061 systemd[1]: Starting modprobe@loop.service... Dec 13 14:36:41.432598 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.432719 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:41.432871 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:36:41.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.435891 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:36:41.436017 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:36:41.436996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:36:41.437101 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:36:41.441362 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:36:41.441484 systemd[1]: Finished modprobe@loop.service. Dec 13 14:36:41.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.442579 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:36:41.442691 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.448096 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:36:41.449674 systemd[1]: Finished systemd-sysext.service. Dec 13 14:36:41.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.451356 systemd[1]: Starting ensure-sysext.service... Dec 13 14:36:41.452811 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:36:41.463304 systemd[1]: Reloading. Dec 13 14:36:41.469703 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:36:41.471892 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:36:41.476482 systemd-tmpfiles[1014]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:36:41.543755 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T14:36:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:36:41.543789 /usr/lib/systemd/system-generators/torcx-generator[1034]: time="2024-12-13T14:36:41Z" level=info msg="torcx already run" Dec 13 14:36:41.677788 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:36:41.678122 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:36:41.706817 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:36:41.778000 audit: BPF prog-id=30 op=LOAD Dec 13 14:36:41.778000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:36:41.781000 audit: BPF prog-id=31 op=LOAD Dec 13 14:36:41.781000 audit: BPF prog-id=32 op=LOAD Dec 13 14:36:41.781000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:36:41.781000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:36:41.782000 audit: BPF prog-id=33 op=LOAD Dec 13 14:36:41.782000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:36:41.783000 audit: BPF prog-id=34 op=LOAD Dec 13 14:36:41.783000 audit: BPF prog-id=35 op=LOAD Dec 13 14:36:41.783000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:36:41.783000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:36:41.784000 audit: BPF prog-id=36 op=LOAD Dec 13 14:36:41.784000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:36:41.784000 audit: BPF prog-id=37 op=LOAD Dec 13 14:36:41.784000 audit: BPF prog-id=38 op=LOAD Dec 13 14:36:41.784000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:36:41.784000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:36:41.796322 systemd[1]: Mounted boot.mount. Dec 13 14:36:41.815332 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.815559 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.816940 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:36:41.818576 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:36:41.820858 systemd[1]: Starting modprobe@loop.service... Dec 13 14:36:41.821540 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.821691 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:41.821830 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.825136 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.825380 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.825510 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.825595 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:41.825692 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.827580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:36:41.828146 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:36:41.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.833879 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:36:41.834143 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.835978 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:36:41.838750 systemd[1]: Starting modprobe@drm.service... Dec 13 14:36:41.840264 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.840391 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:41.842032 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:36:41.842841 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:36:41.844332 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:36:41.845267 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:36:41.845376 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:36:41.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.846864 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:36:41.846972 systemd[1]: Finished modprobe@loop.service. Dec 13 14:36:41.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.848072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:36:41.848178 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:36:41.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.851399 systemd[1]: Finished ensure-sysext.service. Dec 13 14:36:41.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.852890 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:36:41.852925 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:36:41.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.858280 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:36:41.858399 systemd[1]: Finished modprobe@drm.service. Dec 13 14:36:41.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.865114 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:36:41.896886 ldconfig[992]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:36:41.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.906650 systemd[1]: Finished ldconfig.service. Dec 13 14:36:41.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.931000 audit: BPF prog-id=39 op=LOAD Dec 13 14:36:41.925714 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:36:41.927295 systemd[1]: Starting audit-rules.service... Dec 13 14:36:41.928706 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:36:41.930138 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:36:41.933085 systemd[1]: Starting systemd-resolved.service... Dec 13 14:36:41.934000 audit: BPF prog-id=40 op=LOAD Dec 13 14:36:41.936366 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:36:41.939101 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:36:41.949756 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:36:41.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.950370 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:36:41.953000 audit[1098]: SYSTEM_BOOT pid=1098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.955490 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:36:41.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.979021 systemd[1]: Finished systemd-journal-catalog-update.service. 
Dec 13 14:36:41.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:41.980695 systemd[1]: Starting systemd-update-done.service... Dec 13 14:36:41.987938 systemd[1]: Finished systemd-update-done.service. Dec 13 14:36:41.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:36:42.007000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:36:42.007000 audit[1108]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff373d2d40 a2=420 a3=0 items=0 ppid=1087 pid=1108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:36:42.007000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:36:42.008138 augenrules[1108]: No rules Dec 13 14:36:42.008468 systemd[1]: Finished audit-rules.service. Dec 13 14:36:42.019843 systemd-resolved[1091]: Positive Trust Anchors: Dec 13 14:36:42.020161 systemd-resolved[1091]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:36:42.020274 systemd-resolved[1091]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:36:42.025738 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:36:42.026307 systemd[1]: Reached target time-set.target. Dec 13 14:36:42.027923 systemd-resolved[1091]: Using system hostname 'ci-3510-3-6-c-262737d7bc.novalocal'. Dec 13 14:36:42.030025 systemd[1]: Started systemd-resolved.service. Dec 13 14:36:42.030576 systemd[1]: Reached target network.target. Dec 13 14:36:42.030985 systemd[1]: Reached target network-online.target. Dec 13 14:36:42.031447 systemd[1]: Reached target nss-lookup.target. Dec 13 14:36:42.031893 systemd[1]: Reached target sysinit.target. Dec 13 14:36:42.032449 systemd[1]: Started motdgen.path. Dec 13 14:36:42.032895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:36:42.033564 systemd[1]: Started logrotate.timer. Dec 13 14:36:42.034024 systemd[1]: Started mdadm.timer. Dec 13 14:36:42.034490 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:36:42.034924 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:36:42.034955 systemd[1]: Reached target paths.target. Dec 13 14:36:42.035376 systemd[1]: Reached target timers.target. Dec 13 14:36:42.036042 systemd[1]: Listening on dbus.socket. Dec 13 14:36:42.037533 systemd[1]: Starting docker.socket... Dec 13 14:36:42.040860 systemd[1]: Listening on sshd.socket. 
Dec 13 14:36:42.041388 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:42.041811 systemd[1]: Listening on docker.socket. Dec 13 14:36:42.042303 systemd[1]: Reached target sockets.target. Dec 13 14:36:42.042714 systemd[1]: Reached target basic.target. Dec 13 14:36:42.043150 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:36:42.043179 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:36:42.044112 systemd[1]: Starting containerd.service... Dec 13 14:36:42.046400 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:36:42.047838 systemd[1]: Starting dbus.service... Dec 13 14:36:42.049502 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:36:42.051636 systemd[1]: Starting extend-filesystems.service... Dec 13 14:36:42.052147 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:36:42.478180 systemd-timesyncd[1092]: Contacted time server 193.52.136.2:123 (0.flatcar.pool.ntp.org). Dec 13 14:36:42.478245 systemd-resolved[1091]: Clock change detected. Flushing caches. Dec 13 14:36:42.479352 systemd[1]: Starting kubelet.service... Dec 13 14:36:42.482519 systemd[1]: Starting motdgen.service... Dec 13 14:36:42.487163 jq[1121]: false Dec 13 14:36:42.488857 systemd-timesyncd[1092]: Initial clock synchronization to Fri 2024-12-13 14:36:42.478092 UTC. Dec 13 14:36:42.491610 systemd[1]: Starting prepare-helm.service... Dec 13 14:36:42.493315 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:36:42.495022 systemd[1]: Starting sshd-keygen.service... Dec 13 14:36:42.504632 systemd[1]: Starting systemd-logind.service... Dec 13 14:36:42.505312 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:36:42.505393 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:36:42.505902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:36:42.506756 systemd[1]: Starting update-engine.service... Dec 13 14:36:42.508179 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:36:42.510646 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:36:42.510948 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:36:42.519892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:36:42.520060 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:36:42.531708 jq[1134]: true Dec 13 14:36:42.539362 tar[1138]: linux-amd64/helm Dec 13 14:36:42.549288 jq[1144]: true Dec 13 14:36:42.564666 extend-filesystems[1122]: Found loop1 Dec 13 14:36:42.565515 extend-filesystems[1122]: Found vda Dec 13 14:36:42.566766 extend-filesystems[1122]: Found vda1 Dec 13 14:36:42.567301 extend-filesystems[1122]: Found vda2 Dec 13 14:36:42.568433 extend-filesystems[1122]: Found vda3 Dec 13 14:36:42.569479 extend-filesystems[1122]: Found usr Dec 13 14:36:42.569479 extend-filesystems[1122]: Found vda4 Dec 13 14:36:42.569479 extend-filesystems[1122]: Found vda6 Dec 13 14:36:42.569479 extend-filesystems[1122]: Found vda7 Dec 13 14:36:42.568719 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:36:42.568883 systemd[1]: Finished motdgen.service. Dec 13 14:36:42.572298 extend-filesystems[1122]: Found vda9 Dec 13 14:36:42.572298 extend-filesystems[1122]: Checking size of /dev/vda9 Dec 13 14:36:42.585945 dbus-daemon[1118]: [system] SELinux support is enabled Dec 13 14:36:42.586122 systemd[1]: Started dbus.service. Dec 13 14:36:42.588544 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:36:42.588573 systemd[1]: Reached target system-config.target. Dec 13 14:36:42.589056 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:36:42.589073 systemd[1]: Reached target user-config.target. Dec 13 14:36:42.607569 extend-filesystems[1122]: Resized partition /dev/vda9 Dec 13 14:36:42.636416 extend-filesystems[1174]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:36:42.637307 env[1141]: time="2024-12-13T14:36:42.637140220Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:36:42.668505 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Dec 13 14:36:42.700559 update_engine[1132]: I1213 14:36:42.699657 1132 main.cc:92] Flatcar Update Engine starting Dec 13 14:36:42.761633 update_engine[1132]: I1213 14:36:42.714372 1132 update_check_scheduler.cc:74] Next update check in 6m24s Dec 13 14:36:42.710561 systemd[1]: Started update-engine.service. Dec 13 14:36:42.763610 env[1141]: time="2024-12-13T14:36:42.702799834Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:36:42.763610 env[1141]: time="2024-12-13T14:36:42.761453890Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:36:42.713444 systemd[1]: Started locksmithd.service. Dec 13 14:36:42.757928 systemd-logind[1130]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:36:42.757957 systemd-logind[1130]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:36:42.760578 systemd-logind[1130]: New seat seat0. Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.766728252Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.766766464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.766971879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.766991967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767012886Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767025219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767100570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767330351Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767440458Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:36:42.768575 env[1141]: time="2024-12-13T14:36:42.767478289Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:36:42.768822 env[1141]: time="2024-12-13T14:36:42.767530607Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:36:42.768822 env[1141]: time="2024-12-13T14:36:42.767544062Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:36:42.768638 systemd[1]: Started systemd-logind.service. Dec 13 14:36:42.769840 systemd[1]: Created slice system-sshd.slice. Dec 13 14:36:42.798484 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Dec 13 14:36:42.802346 bash[1171]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:36:42.803114 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:36:42.914631 extend-filesystems[1174]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:36:42.914631 extend-filesystems[1174]: old_desc_blocks = 1, new_desc_blocks = 3 Dec 13 14:36:42.914631 extend-filesystems[1174]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Dec 13 14:36:42.912289 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.923832000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.923907973Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.923944231Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924036343Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924087209Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924125921Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924160146Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924195512Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924229486Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924264161Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924297884Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924334052Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924609999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:36:42.929064 env[1141]: time="2024-12-13T14:36:42.924809684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:36:42.934144 extend-filesystems[1122]: Resized filesystem in /dev/vda9 Dec 13 14:36:42.912707 systemd[1]: Finished extend-filesystems.service. Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925366488Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925444074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925519044Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925629992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925666932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925708189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925739778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925772249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925803969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925834977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925867688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.925905018Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.926218105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.926262048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.943036 env[1141]: time="2024-12-13T14:36:42.926295210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:36:42.937120 systemd[1]: Started containerd.service. Dec 13 14:36:42.944090 env[1141]: time="2024-12-13T14:36:42.926326228Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:36:42.944090 env[1141]: time="2024-12-13T14:36:42.926365742Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:36:42.944090 env[1141]: time="2024-12-13T14:36:42.926395829Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:36:42.944090 env[1141]: time="2024-12-13T14:36:42.926445582Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:36:42.944090 env[1141]: time="2024-12-13T14:36:42.926578892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.927090461Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.927244891Z" level=info msg="Connect containerd service" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.927328337Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932240781Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932431869Z" level=info msg="Start subscribing containerd event" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932553377Z" level=info msg="Start recovering state" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932670978Z" level=info msg="Start event monitor" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932704781Z" level=info msg="Start snapshots syncer" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932727764Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.932746950Z" level=info msg="Start streaming server" Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.936607581Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:36:42.944381 env[1141]: time="2024-12-13T14:36:42.936835438Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:36:42.958258 env[1141]: time="2024-12-13T14:36:42.958199106Z" level=info msg="containerd successfully booted in 0.328856s" Dec 13 14:36:43.350867 tar[1138]: linux-amd64/LICENSE Dec 13 14:36:43.351176 tar[1138]: linux-amd64/README.md Dec 13 14:36:43.355504 systemd[1]: Finished prepare-helm.service. Dec 13 14:36:43.372006 locksmithd[1179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:36:44.251210 systemd[1]: Started kubelet.service. Dec 13 14:36:44.611659 sshd_keygen[1143]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:36:44.632073 systemd[1]: Finished sshd-keygen.service. Dec 13 14:36:44.634293 systemd[1]: Starting issuegen.service... Dec 13 14:36:44.635689 systemd[1]: Started sshd@0-172.24.4.236:22-172.24.4.1:55554.service. Dec 13 14:36:44.642209 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:36:44.642364 systemd[1]: Finished issuegen.service. Dec 13 14:36:44.644193 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:36:44.651235 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:36:44.653066 systemd[1]: Started getty@tty1.service. Dec 13 14:36:44.654641 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:36:44.655246 systemd[1]: Reached target getty.target. Dec 13 14:36:45.869079 kubelet[1191]: E1213 14:36:45.868975 1191 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:36:45.873154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:36:45.873292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:36:45.873574 systemd[1]: kubelet.service: Consumed 1.925s CPU time. Dec 13 14:36:45.916685 sshd[1205]: Accepted publickey for core from 172.24.4.1 port 55554 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:36:45.921442 sshd[1205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:45.948050 systemd[1]: Created slice user-500.slice. Dec 13 14:36:45.960726 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:36:45.969647 systemd-logind[1130]: New session 1 of user core. Dec 13 14:36:45.987688 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:36:45.993859 systemd[1]: Starting user@500.service... Dec 13 14:36:46.001456 (systemd)[1215]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:46.125427 systemd[1215]: Queued start job for default target default.target. Dec 13 14:36:46.126043 systemd[1215]: Reached target paths.target. Dec 13 14:36:46.126062 systemd[1215]: Reached target sockets.target. Dec 13 14:36:46.126077 systemd[1215]: Reached target timers.target. Dec 13 14:36:46.126090 systemd[1215]: Reached target basic.target. Dec 13 14:36:46.126128 systemd[1215]: Reached target default.target. Dec 13 14:36:46.126151 systemd[1215]: Startup finished in 112ms. Dec 13 14:36:46.127448 systemd[1]: Started user@500.service. Dec 13 14:36:46.132587 systemd[1]: Started session-1.scope. 
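The kubelet failure above is the first of several: the unit starts before anything has provisioned /var/lib/kubelet/config.yaml (kubeadm normally writes that file during init). For reference only, a hypothetical minimal KubeletConfiguration that would satisfy the loader looks like the sketch below; it is not the file this node eventually received:

$ cat >/var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
EOF

cgroupDriver: systemd matches the SystemdCgroup:true runc option visible in the containerd CRI config dumped earlier in the log.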
Dec 13 14:36:46.518839 systemd[1]: Started sshd@1-172.24.4.236:22-172.24.4.1:48462.service. Dec 13 14:36:48.429927 sshd[1224]: Accepted publickey for core from 172.24.4.1 port 48462 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:36:48.443729 sshd[1224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:48.455716 systemd-logind[1130]: New session 2 of user core. Dec 13 14:36:48.456739 systemd[1]: Started session-2.scope. Dec 13 14:36:49.143741 sshd[1224]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:49.149688 systemd[1]: Started sshd@2-172.24.4.236:22-172.24.4.1:48466.service. Dec 13 14:36:49.152628 systemd[1]: sshd@1-172.24.4.236:22-172.24.4.1:48462.service: Deactivated successfully. Dec 13 14:36:49.154354 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:36:49.158995 systemd-logind[1130]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:36:49.161797 systemd-logind[1130]: Removed session 2. Dec 13 14:36:49.584947 coreos-metadata[1117]: Dec 13 14:36:49.584 WARN failed to locate config-drive, using the metadata service API instead Dec 13 14:36:49.712672 coreos-metadata[1117]: Dec 13 14:36:49.712 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 14:36:50.041737 coreos-metadata[1117]: Dec 13 14:36:50.041 INFO Fetch successful Dec 13 14:36:50.041737 coreos-metadata[1117]: Dec 13 14:36:50.041 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:36:50.056794 coreos-metadata[1117]: Dec 13 14:36:50.056 INFO Fetch successful Dec 13 14:36:50.062681 unknown[1117]: wrote ssh authorized keys file for user: core Dec 13 14:36:50.092963 update-ssh-keys[1234]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:36:50.093798 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:36:50.094827 systemd[1]: Reached target multi-user.target. Dec 13 14:36:50.097453 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:36:50.114782 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:36:50.115153 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:36:50.120588 systemd[1]: Startup finished in 994ms (kernel) + 8.288s (initrd) + 14.628s (userspace) = 23.911s. Dec 13 14:36:50.394733 sshd[1229]: Accepted publickey for core from 172.24.4.1 port 48466 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:36:50.397010 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:36:50.407759 systemd-logind[1130]: New session 3 of user core. Dec 13 14:36:50.408339 systemd[1]: Started session-3.scope. Dec 13 14:36:50.992977 sshd[1229]: pam_unix(sshd:session): session closed for user core Dec 13 14:36:50.998097 systemd[1]: sshd@2-172.24.4.236:22-172.24.4.1:48466.service: Deactivated successfully. Dec 13 14:36:50.999601 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:36:51.000929 systemd-logind[1130]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:36:51.003200 systemd-logind[1130]: Removed session 3. Dec 13 14:36:56.000285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:36:56.000882 systemd[1]: Stopped kubelet.service. Dec 13 14:36:56.000970 systemd[1]: kubelet.service: Consumed 1.925s CPU time. Dec 13 14:36:56.004244 systemd[1]: Starting kubelet.service... Dec 13 14:36:56.341188 systemd[1]: Started kubelet.service. 
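coreos-metadata could not find a config-drive, so it fell back to the EC2-compatible metadata API that OpenStack exposes at the link-local address 169.254.169.254. The two fetches it performed can be reproduced by hand from inside the instance:

$ curl -s http://169.254.169.254/latest/meta-data/public-keys
$ curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key

The second response is the SSH public key that ends up in /home/core/.ssh/authorized_keys.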
Dec 13 14:36:56.698530 kubelet[1243]: E1213 14:36:56.698295 1243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:36:56.706857 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:36:56.707164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:37:01.005858 systemd[1]: Started sshd@3-172.24.4.236:22-172.24.4.1:57810.service. Dec 13 14:37:02.200634 sshd[1251]: Accepted publickey for core from 172.24.4.1 port 57810 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:37:02.203340 sshd[1251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:37:02.214233 systemd-logind[1130]: New session 4 of user core. Dec 13 14:37:02.215147 systemd[1]: Started session-4.scope. Dec 13 14:37:02.854941 sshd[1251]: pam_unix(sshd:session): session closed for user core Dec 13 14:37:02.862659 systemd[1]: Started sshd@4-172.24.4.236:22-172.24.4.1:57814.service. Dec 13 14:37:02.865958 systemd[1]: sshd@3-172.24.4.236:22-172.24.4.1:57810.service: Deactivated successfully. Dec 13 14:37:02.867857 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:37:02.873310 systemd-logind[1130]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:37:02.875986 systemd-logind[1130]: Removed session 4. Dec 13 14:37:04.085175 sshd[1256]: Accepted publickey for core from 172.24.4.1 port 57814 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:37:04.088037 sshd[1256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:37:04.098807 systemd-logind[1130]: New session 5 of user core. Dec 13 14:37:04.100074 systemd[1]: Started session-5.scope. Dec 13 14:37:04.832964 sshd[1256]: pam_unix(sshd:session): session closed for user core Dec 13 14:37:04.843410 systemd[1]: sshd@4-172.24.4.236:22-172.24.4.1:57814.service: Deactivated successfully. Dec 13 14:37:04.845251 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:37:04.847288 systemd-logind[1130]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:37:04.851067 systemd[1]: Started sshd@5-172.24.4.236:22-172.24.4.1:49606.service. Dec 13 14:37:04.855416 systemd-logind[1130]: Removed session 5. Dec 13 14:37:06.064994 sshd[1263]: Accepted publickey for core from 172.24.4.1 port 49606 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:37:06.067940 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:37:06.079536 systemd-logind[1130]: New session 6 of user core. Dec 13 14:37:06.080316 systemd[1]: Started session-6.scope. Dec 13 14:37:06.750560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:37:06.751144 systemd[1]: Stopped kubelet.service. Dec 13 14:37:06.754999 systemd[1]: Starting kubelet.service... Dec 13 14:37:06.864898 sshd[1263]: pam_unix(sshd:session): session closed for user core Dec 13 14:37:06.874537 systemd[1]: Started sshd@6-172.24.4.236:22-172.24.4.1:49622.service. Dec 13 14:37:06.878540 systemd[1]: sshd@5-172.24.4.236:22-172.24.4.1:49606.service: Deactivated successfully. Dec 13 14:37:06.880724 systemd[1]: session-6.scope: Deactivated successfully. 
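kubelet.service keeps being rescheduled roughly every ten seconds ("restart counter is at 1", then 2, and so on) because of the unit's restart policy. The kubeadm-style drop-in that typically produces this behavior looks like the sketch below; the file name and exact values are assumptions, since the unit itself is not shown in this log:

$ cat >/etc/systemd/system/kubelet.service.d/10-restart.conf <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
$ systemctl daemon-reload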
Dec 13 14:37:06.885082 systemd-logind[1130]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:37:06.889081 systemd-logind[1130]: Removed session 6. Dec 13 14:37:06.975332 systemd[1]: Started kubelet.service. Dec 13 14:37:07.485748 kubelet[1274]: E1213 14:37:07.485671 1274 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:37:07.491171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:37:07.491546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:37:08.228684 sshd[1270]: Accepted publickey for core from 172.24.4.1 port 49622 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:37:08.231092 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:37:08.240568 systemd-logind[1130]: New session 7 of user core. Dec 13 14:37:08.242109 systemd[1]: Started session-7.scope. Dec 13 14:37:08.691161 sudo[1283]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:37:08.692366 sudo[1283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:37:08.744767 systemd[1]: Starting docker.service... Dec 13 14:37:08.798802 env[1293]: time="2024-12-13T14:37:08.798746987Z" level=info msg="Starting up" Dec 13 14:37:08.800453 env[1293]: time="2024-12-13T14:37:08.800410426Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:37:08.800453 env[1293]: time="2024-12-13T14:37:08.800436215Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:37:08.800713 env[1293]: time="2024-12-13T14:37:08.800456322Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:37:08.800713 env[1293]: time="2024-12-13T14:37:08.800498632Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:37:08.803330 env[1293]: time="2024-12-13T14:37:08.803286030Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:37:08.803561 env[1293]: time="2024-12-13T14:37:08.803526611Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:37:08.803728 env[1293]: time="2024-12-13T14:37:08.803690638Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:37:08.803859 env[1293]: time="2024-12-13T14:37:08.803830340Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:37:08.865044 env[1293]: time="2024-12-13T14:37:08.864976660Z" level=info msg="Loading containers: start." Dec 13 14:37:09.103561 kernel: Initializing XFRM netlink socket Dec 13 14:37:09.192630 env[1293]: time="2024-12-13T14:37:09.192531569Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:37:09.280647 systemd-networkd[970]: docker0: Link UP Dec 13 14:37:09.296994 env[1293]: time="2024-12-13T14:37:09.296924198Z" level=info msg="Loading containers: done." 
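dockerd logs that the default docker0 bridge takes 172.17.0.0/16 and points at --bip for overriding it. The same option can be set persistently in the daemon config; the address below is purely illustrative:

$ cat >/etc/docker/daemon.json <<'EOF'
{
  "bip": "10.88.0.1/16"
}
EOF
$ systemctl restart docker

Choosing a non-default bridge range matters mostly when 172.17.0.0/16 collides with an existing network, which is common on cloud tenants.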
Dec 13 14:37:09.311966 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck452752092-merged.mount: Deactivated successfully. Dec 13 14:37:09.322583 env[1293]: time="2024-12-13T14:37:09.322485638Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:37:09.322832 env[1293]: time="2024-12-13T14:37:09.322658342Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:37:09.322832 env[1293]: time="2024-12-13T14:37:09.322759161Z" level=info msg="Daemon has completed initialization" Dec 13 14:37:09.354645 systemd[1]: Started docker.service. Dec 13 14:37:09.359958 env[1293]: time="2024-12-13T14:37:09.359875242Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:37:11.471928 env[1141]: time="2024-12-13T14:37:11.471790419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:37:12.267072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294723008.mount: Deactivated successfully. Dec 13 14:37:15.473990 env[1141]: time="2024-12-13T14:37:15.473911787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:15.479209 env[1141]: time="2024-12-13T14:37:15.479155588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:15.484919 env[1141]: time="2024-12-13T14:37:15.484866944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:15.489438 env[1141]: time="2024-12-13T14:37:15.489385752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:15.490934 env[1141]: time="2024-12-13T14:37:15.490872718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:37:15.502321 env[1141]: time="2024-12-13T14:37:15.502261115Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:37:17.500810 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:37:17.501231 systemd[1]: Stopped kubelet.service. Dec 13 14:37:17.503656 systemd[1]: Starting kubelet.service... Dec 13 14:37:17.619960 systemd[1]: Started kubelet.service. Dec 13 14:37:18.104527 kubelet[1433]: E1213 14:37:18.104357 1433 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:37:18.108695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:37:18.108978 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
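The PullImage lines are CRI requests served by containerd (env[1141], which logged "starting containerd" earlier). Equivalent pulls can be driven by hand for debugging; the socket path below matches the ContainerdEndpoint printed in the CRI config above:

$ crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.29.12
$ ctr -n k8s.io images ls | grep kube-apiserver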
Dec 13 14:37:19.085114 env[1141]: time="2024-12-13T14:37:19.085026370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:19.091983 env[1141]: time="2024-12-13T14:37:19.091908095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:19.098814 env[1141]: time="2024-12-13T14:37:19.098743281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:19.103285 env[1141]: time="2024-12-13T14:37:19.103214700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:19.105636 env[1141]: time="2024-12-13T14:37:19.105577456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:37:19.129380 env[1141]: time="2024-12-13T14:37:19.129276068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:37:21.357354 env[1141]: time="2024-12-13T14:37:21.357305869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:21.363759 env[1141]: time="2024-12-13T14:37:21.363650026Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:21.368421 env[1141]: time="2024-12-13T14:37:21.368361301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:21.372668 env[1141]: time="2024-12-13T14:37:21.372599872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:21.373555 env[1141]: time="2024-12-13T14:37:21.373520451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:37:21.391544 env[1141]: time="2024-12-13T14:37:21.391506307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:37:23.489452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1214364250.mount: Deactivated successfully. 
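Each successful pull is logged with both a tag and a repo digest. Pinning by digest is the safer way to re-fetch exactly what was verified here, for example with the kube-scheduler digest from the ImageCreate event above:

$ ctr -n k8s.io images pull registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c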
Dec 13 14:37:24.310606 env[1141]: time="2024-12-13T14:37:24.310566521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.313315 env[1141]: time="2024-12-13T14:37:24.313293572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.317705 env[1141]: time="2024-12-13T14:37:24.317682739Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.320150 env[1141]: time="2024-12-13T14:37:24.320130376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:24.320757 env[1141]: time="2024-12-13T14:37:24.320735727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:37:24.332211 env[1141]: time="2024-12-13T14:37:24.332162935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:37:24.994497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207311037.mount: Deactivated successfully. Dec 13 14:37:26.874021 env[1141]: time="2024-12-13T14:37:26.873896236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:26.880178 env[1141]: time="2024-12-13T14:37:26.880093575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:26.887324 env[1141]: time="2024-12-13T14:37:26.887182012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:26.893652 env[1141]: time="2024-12-13T14:37:26.892648324Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:26.894980 env[1141]: time="2024-12-13T14:37:26.894877946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:37:26.919089 env[1141]: time="2024-12-13T14:37:26.918996336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:37:27.524194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158277313.mount: Deactivated successfully. 
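Note the version skew around the sandbox image: the CRI config dumped earlier shows SandboxImage registry.k8s.io/pause:3.6, while the pull requested above fetches pause:3.9, the version kubeadm expects for Kubernetes 1.29. On containerd 1.6 the sandbox image is set in the CRI plugin section of its TOML config; a sketch of where to look, with the stock config path assumed:

$ grep -rn sandbox_image /etc/containerd/
# expected shape of the setting (illustrative):
#   [plugins."io.containerd.grpc.v1.cri"]
#     sandbox_image = "registry.k8s.io/pause:3.9"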
Dec 13 14:37:27.537970 env[1141]: time="2024-12-13T14:37:27.537819029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:27.542250 env[1141]: time="2024-12-13T14:37:27.542157483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:27.547626 env[1141]: time="2024-12-13T14:37:27.547555101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:27.552506 env[1141]: time="2024-12-13T14:37:27.552414055Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:37:27.553654 env[1141]: time="2024-12-13T14:37:27.550948244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:27.572964 env[1141]: time="2024-12-13T14:37:27.572881695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:37:28.175553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 14:37:28.175920 systemd[1]: Stopped kubelet.service. Dec 13 14:37:28.180941 systemd[1]: Starting kubelet.service... Dec 13 14:37:28.210451 update_engine[1132]: I1213 14:37:28.209682 1132 update_attempter.cc:509] Updating boot flags... Dec 13 14:37:28.238879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount493310053.mount: Deactivated successfully. Dec 13 14:37:28.743874 systemd[1]: Started kubelet.service. Dec 13 14:37:28.839642 kubelet[1485]: E1213 14:37:28.839609 1485 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:37:28.841669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:37:28.841808 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
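update_engine's "Updating boot flags" refers to Flatcar's A/B update scheme: which USR partition boots next is stored as GPT partition attributes (priority, tries, successful) rather than in a bootloader config. A hedged way to inspect those attributes on this disk:

$ cgpt show /dev/vda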
Dec 13 14:37:32.693819 env[1141]: time="2024-12-13T14:37:32.693718121Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:32.698510 env[1141]: time="2024-12-13T14:37:32.698423986Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:32.703200 env[1141]: time="2024-12-13T14:37:32.703179214Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:32.707385 env[1141]: time="2024-12-13T14:37:32.707316358Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:37:32.708944 env[1141]: time="2024-12-13T14:37:32.708870322Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:37:38.082258 systemd[1]: Stopped kubelet.service. Dec 13 14:37:38.085624 systemd[1]: Starting kubelet.service... Dec 13 14:37:38.110624 systemd[1]: Reloading. Dec 13 14:37:38.225914 /usr/lib/systemd/system-generators/torcx-generator[1585]: time="2024-12-13T14:37:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:37:38.226318 /usr/lib/systemd/system-generators/torcx-generator[1585]: time="2024-12-13T14:37:38Z" level=info msg="torcx already run" Dec 13 14:37:38.426669 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:37:38.426952 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:37:38.450911 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:37:38.558000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 14:37:38.558270 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 14:37:38.558600 systemd[1]: Stopped kubelet.service. Dec 13 14:37:38.560440 systemd[1]: Starting kubelet.service... Dec 13 14:37:39.307123 systemd[1]: Started kubelet.service. Dec 13 14:37:39.457584 kubelet[1633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:37:39.457584 kubelet[1633]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:37:39.457584 kubelet[1633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:37:39.458029 kubelet[1633]: I1213 14:37:39.457690 1633 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:37:39.881182 kubelet[1633]: I1213 14:37:39.881121 1633 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:37:39.881182 kubelet[1633]: I1213 14:37:39.881168 1633 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:37:39.881456 kubelet[1633]: I1213 14:37:39.881447 1633 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:37:39.936211 kubelet[1633]: E1213 14:37:39.936152 1633 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.236:6443: connect: connection refused Dec 13 14:37:39.936745 kubelet[1633]: I1213 14:37:39.936713 1633 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:37:39.971893 kubelet[1633]: I1213 14:37:39.971849 1633 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:37:39.972650 kubelet[1633]: I1213 14:37:39.972619 1633 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:37:39.973284 kubelet[1633]: I1213 14:37:39.973207 1633 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:37:39.973651 kubelet[1633]: I1213 14:37:39.973623 1633 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:37:39.973814 kubelet[1633]: I1213 14:37:39.973790 1633 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
14:37:39.974238 kubelet[1633]: I1213 14:37:39.974206 1633 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:37:39.974596 kubelet[1633]: I1213 14:37:39.974568 1633 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:37:39.974787 kubelet[1633]: I1213 14:37:39.974762 1633 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:37:39.974969 kubelet[1633]: I1213 14:37:39.974945 1633 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:37:39.975129 kubelet[1633]: I1213 14:37:39.975106 1633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:37:39.975626 kubelet[1633]: W1213 14:37:39.975516 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-c-262737d7bc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Dec 13 14:37:39.975756 kubelet[1633]: E1213 14:37:39.975635 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-c-262737d7bc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Dec 13 14:37:39.978413 kubelet[1633]: W1213 14:37:39.978315 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Dec 13 14:37:39.978748 kubelet[1633]: E1213 14:37:39.978720 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused Dec 13 14:37:39.979056 kubelet[1633]: I1213 14:37:39.979028 1633 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:37:39.995610 kubelet[1633]: I1213 14:37:39.995561 1633 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:37:39.995987 kubelet[1633]: W1213 14:37:39.995958 1633 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:37:40.006547 kubelet[1633]: I1213 14:37:40.006502 1633 server.go:1256] "Started kubelet" Dec 13 14:37:40.008149 kubelet[1633]: I1213 14:37:40.008114 1633 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:37:40.010689 kubelet[1633]: I1213 14:37:40.010658 1633 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:37:40.014741 kubelet[1633]: I1213 14:37:40.014689 1633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:37:40.015211 kubelet[1633]: I1213 14:37:40.015165 1633 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:37:40.021172 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
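The deprecation warnings about --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir all point at the config-file route. The first maps onto a KubeletConfiguration field added in v1.27 and the last has a long-standing config equivalent, using the flexvolume directory the kubelet just recreated above. A sketch, not the node's actual file:

$ cat >>/var/lib/kubelet/config.yaml <<'EOF'
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF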
Dec 13 14:37:40.035332 kubelet[1633]: E1213 14:37:40.035259 1633 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.236:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-c-262737d7bc.novalocal.1810c35cd473a06a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-c-262737d7bc.novalocal,UID:ci-3510-3-6-c-262737d7bc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-c-262737d7bc.novalocal,},FirstTimestamp:2024-12-13 14:37:40.00639601 +0000 UTC m=+0.686011721,LastTimestamp:2024-12-13 14:37:40.00639601 +0000 UTC m=+0.686011721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-c-262737d7bc.novalocal,}"
Dec 13 14:37:40.037346 kubelet[1633]: I1213 14:37:40.037289 1633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:37:40.041279 kubelet[1633]: I1213 14:37:40.041252 1633 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:37:40.041731 kubelet[1633]: I1213 14:37:40.041698 1633 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:37:40.041987 kubelet[1633]: I1213 14:37:40.041961 1633 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:37:40.043002 kubelet[1633]: W1213 14:37:40.042905 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.043212 kubelet[1633]: E1213 14:37:40.043187 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.043772 kubelet[1633]: E1213 14:37:40.043741 1633 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:37:40.045408 kubelet[1633]: E1213 14:37:40.045375 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-c-262737d7bc.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="200ms"
Dec 13 14:37:40.046689 kubelet[1633]: I1213 14:37:40.046655 1633 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:37:40.046987 kubelet[1633]: I1213 14:37:40.046950 1633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:37:40.048864 kubelet[1633]: I1213 14:37:40.048828 1633 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:37:40.089967 kubelet[1633]: I1213 14:37:40.089922 1633 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:37:40.089967 kubelet[1633]: I1213 14:37:40.089957 1633 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:37:40.090142 kubelet[1633]: I1213 14:37:40.089984 1633 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:37:40.094301 kubelet[1633]: I1213 14:37:40.093609 1633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:37:40.138283 kubelet[1633]: I1213 14:37:40.094598 1633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:37:40.138283 kubelet[1633]: I1213 14:37:40.094617 1633 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:37:40.138283 kubelet[1633]: I1213 14:37:40.094662 1633 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:37:40.138283 kubelet[1633]: E1213 14:37:40.094723 1633 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:37:40.138283 kubelet[1633]: W1213 14:37:40.100217 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.138283 kubelet[1633]: E1213 14:37:40.100251 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.144838 kubelet[1633]: I1213 14:37:40.144798 1633 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.145987 kubelet[1633]: E1213 14:37:40.145952 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.195386 kubelet[1633]: E1213 14:37:40.195338 1633 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:37:40.234401 kubelet[1633]: I1213 14:37:40.234367 1633 policy_none.go:49] "None policy: Start"
Dec 13 14:37:40.236101 kubelet[1633]: I1213 14:37:40.236065 1633 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:37:40.236101 kubelet[1633]: I1213 14:37:40.236118 1633 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:37:40.247008 kubelet[1633]: E1213 14:37:40.246971 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-c-262737d7bc.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="400ms"
Dec 13 14:37:40.254913 systemd[1]: Created slice kubepods.slice.
Dec 13 14:37:40.265058 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:37:40.272511 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:37:40.283185 kubelet[1633]: I1213 14:37:40.283143 1633 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:37:40.283856 kubelet[1633]: I1213 14:37:40.283825 1633 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:37:40.289632 kubelet[1633]: E1213 14:37:40.289596 1633 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-c-262737d7bc.novalocal\" not found"
Dec 13 14:37:40.350775 kubelet[1633]: I1213 14:37:40.350698 1633 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.352023 kubelet[1633]: E1213 14:37:40.351960 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.396328 kubelet[1633]: I1213 14:37:40.396182 1633 topology_manager.go:215] "Topology Admit Handler" podUID="6816be5f37c3a449b117df344778015b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.400740 kubelet[1633]: I1213 14:37:40.400704 1633 topology_manager.go:215] "Topology Admit Handler" podUID="99b0121a4502ff72be62cd3d1e6c6472" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.407455 kubelet[1633]: I1213 14:37:40.407415 1633 topology_manager.go:215] "Topology Admit Handler" podUID="41f448f398e5248ad6e04262b6da4f88" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.415952 systemd[1]: Created slice kubepods-burstable-pod6816be5f37c3a449b117df344778015b.slice.
Dec 13 14:37:40.440788 systemd[1]: Created slice kubepods-burstable-pod99b0121a4502ff72be62cd3d1e6c6472.slice.
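Every kubelet call above fails with "dial tcp 172.24.4.236:6443: connect: connection refused" because the static kube-apiserver pod the kubelet is about to create is not serving yet. A minimal sketch of the same reachability check, in Go (this is an illustrative probe, not part of the kubelet or anything in this log):

    // probe.go - dial the apiserver endpoint seen in the log; against a
    // not-yet-started apiserver this prints "connect: connection refused",
    // the same error string threaded through the kubelet entries above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "172.24.4.236:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }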
Dec 13 14:37:40.442780 kubelet[1633]: I1213 14:37:40.442666 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.442780 kubelet[1633]: I1213 14:37:40.442754 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.442991 kubelet[1633]: I1213 14:37:40.442831 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.442991 kubelet[1633]: I1213 14:37:40.442891 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.442991 kubelet[1633]: I1213 14:37:40.442980 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.443201 kubelet[1633]: I1213 14:37:40.443075 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99b0121a4502ff72be62cd3d1e6c6472-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"99b0121a4502ff72be62cd3d1e6c6472\") " pod="kube-system/kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.443201 kubelet[1633]: I1213 14:37:40.443162 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.443334 kubelet[1633]: I1213 14:37:40.443228 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.443334 kubelet[1633]: I1213 14:37:40.443301 1633 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.461739 systemd[1]: Created slice kubepods-burstable-pod41f448f398e5248ad6e04262b6da4f88.slice.
Dec 13 14:37:40.649169 kubelet[1633]: E1213 14:37:40.649045 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-c-262737d7bc.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="800ms"
Dec 13 14:37:40.738759 env[1141]: time="2024-12-13T14:37:40.737721315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal,Uid:6816be5f37c3a449b117df344778015b,Namespace:kube-system,Attempt:0,}"
Dec 13 14:37:40.748540 env[1141]: time="2024-12-13T14:37:40.748446740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal,Uid:99b0121a4502ff72be62cd3d1e6c6472,Namespace:kube-system,Attempt:0,}"
Dec 13 14:37:40.756402 kubelet[1633]: I1213 14:37:40.756361 1633 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.757330 kubelet[1633]: E1213 14:37:40.757301 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:40.767765 env[1141]: time="2024-12-13T14:37:40.767699288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal,Uid:41f448f398e5248ad6e04262b6da4f88,Namespace:kube-system,Attempt:0,}"
Dec 13 14:37:40.944099 kubelet[1633]: W1213 14:37:40.943295 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-c-262737d7bc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.944099 kubelet[1633]: E1213 14:37:40.943429 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-c-262737d7bc.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.971811 kubelet[1633]: W1213 14:37:40.971738 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:40.972048 kubelet[1633]: E1213 14:37:40.971819 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:41.013372 kubelet[1633]: E1213 14:37:41.013309 1633 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.236:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.236:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-c-262737d7bc.novalocal.1810c35cd473a06a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-c-262737d7bc.novalocal,UID:ci-3510-3-6-c-262737d7bc.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-c-262737d7bc.novalocal,},FirstTimestamp:2024-12-13 14:37:40.00639601 +0000 UTC m=+0.686011721,LastTimestamp:2024-12-13 14:37:40.00639601 +0000 UTC m=+0.686011721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-c-262737d7bc.novalocal,}"
Dec 13 14:37:41.307819 kubelet[1633]: W1213 14:37:41.307706 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:41.308120 kubelet[1633]: E1213 14:37:41.308088 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:41.331379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount893720098.mount: Deactivated successfully.
Dec 13 14:37:41.347417 env[1141]: time="2024-12-13T14:37:41.347256762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.351162 env[1141]: time="2024-12-13T14:37:41.351094352Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.355229 env[1141]: time="2024-12-13T14:37:41.355170130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.359064 env[1141]: time="2024-12-13T14:37:41.358994075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.362120 env[1141]: time="2024-12-13T14:37:41.362039598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.365407 env[1141]: time="2024-12-13T14:37:41.365335871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.373624 env[1141]: time="2024-12-13T14:37:41.373540246Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.376405 env[1141]: time="2024-12-13T14:37:41.376340418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.384734 env[1141]: time="2024-12-13T14:37:41.384660380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.387067 env[1141]: time="2024-12-13T14:37:41.387016507Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.389561 env[1141]: time="2024-12-13T14:37:41.389511646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.406785 env[1141]: time="2024-12-13T14:37:41.406705692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:37:41.440013 env[1141]: time="2024-12-13T14:37:41.438255521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:37:41.440013 env[1141]: time="2024-12-13T14:37:41.438292881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:37:41.440013 env[1141]: time="2024-12-13T14:37:41.438324831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:37:41.440013 env[1141]: time="2024-12-13T14:37:41.438510691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9a05a2fa3df7fdd6b88052cf6a0b6ab7f09d40671804cd96f51cf4a3a12989a pid=1681 runtime=io.containerd.runc.v2
Dec 13 14:37:41.442804 env[1141]: time="2024-12-13T14:37:41.441746962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:37:41.442804 env[1141]: time="2024-12-13T14:37:41.441929504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:37:41.442804 env[1141]: time="2024-12-13T14:37:41.441967665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:37:41.443789 env[1141]: time="2024-12-13T14:37:41.443393745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e281d2b620474da6d79a55ebc2bc027f3ce141b70239a84e31be326ba3e20b5 pid=1680 runtime=io.containerd.runc.v2
Dec 13 14:37:41.452086 kubelet[1633]: E1213 14:37:41.452050 1633 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-c-262737d7bc.novalocal?timeout=10s\": dial tcp 172.24.4.236:6443: connect: connection refused" interval="1.6s"
Dec 13 14:37:41.461403 env[1141]: time="2024-12-13T14:37:41.461314427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:37:41.461403 env[1141]: time="2024-12-13T14:37:41.461365132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:37:41.461403 env[1141]: time="2024-12-13T14:37:41.461378497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:37:41.465033 env[1141]: time="2024-12-13T14:37:41.464945199Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b784a121a1d0e4f07b3afe017a9383c6181c7ddad9319f5ef1d98bb7327a9761 pid=1709 runtime=io.containerd.runc.v2
Dec 13 14:37:41.468409 systemd[1]: Started cri-containerd-d9a05a2fa3df7fdd6b88052cf6a0b6ab7f09d40671804cd96f51cf4a3a12989a.scope.
Dec 13 14:37:41.482053 systemd[1]: Started cri-containerd-6e281d2b620474da6d79a55ebc2bc027f3ce141b70239a84e31be326ba3e20b5.scope.
Dec 13 14:37:41.504404 systemd[1]: Started cri-containerd-b784a121a1d0e4f07b3afe017a9383c6181c7ddad9319f5ef1d98bb7327a9761.scope.
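The "Failed to ensure lease exists, will retry" entries above show the retry interval doubling: 200ms, 400ms, 800ms, then 1.6s. A sketch of that doubling-backoff pattern in Go (illustrative only, not the kubelet's actual lease controller; the cap value here is an assumption):

    // backoff.go - mirror of the interval progression visible in the log:
    // each failed attempt doubles the wait before the next retry.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureLease stands in for the real API call; here it always fails,
    // like the connection-refused errors in the log above.
    func ensureLease() error {
        return errors.New("connect: connection refused")
    }

    func main() {
        interval := 200 * time.Millisecond
        const maxInterval = 7 * time.Second // assumed cap for the sketch
        for attempt := 1; attempt <= 4; attempt++ {
            if err := ensureLease(); err != nil {
                fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, interval)
                time.Sleep(interval)
                interval *= 2
                if interval > maxInterval {
                    interval = maxInterval
                }
            }
        }
    }

Doubling with a cap keeps a crash-looping dependency (here, the apiserver) from being hammered while still converging quickly once it comes up, which is exactly what happens below when the static pods start.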
Dec 13 14:37:41.505429 kubelet[1633]: W1213 14:37:41.504936 1633 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:41.505429 kubelet[1633]: E1213 14:37:41.505013 1633 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:41.543942 env[1141]: time="2024-12-13T14:37:41.543873238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal,Uid:99b0121a4502ff72be62cd3d1e6c6472,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9a05a2fa3df7fdd6b88052cf6a0b6ab7f09d40671804cd96f51cf4a3a12989a\""
Dec 13 14:37:41.551912 env[1141]: time="2024-12-13T14:37:41.551173574Z" level=info msg="CreateContainer within sandbox \"d9a05a2fa3df7fdd6b88052cf6a0b6ab7f09d40671804cd96f51cf4a3a12989a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 14:37:41.560554 kubelet[1633]: I1213 14:37:41.559299 1633 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:41.560554 kubelet[1633]: E1213 14:37:41.559770 1633 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.236:6443/api/v1/nodes\": dial tcp 172.24.4.236:6443: connect: connection refused" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:41.572150 env[1141]: time="2024-12-13T14:37:41.572116194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal,Uid:41f448f398e5248ad6e04262b6da4f88,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e281d2b620474da6d79a55ebc2bc027f3ce141b70239a84e31be326ba3e20b5\""
Dec 13 14:37:41.582515 env[1141]: time="2024-12-13T14:37:41.582409845Z" level=info msg="CreateContainer within sandbox \"6e281d2b620474da6d79a55ebc2bc027f3ce141b70239a84e31be326ba3e20b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:37:41.593274 env[1141]: time="2024-12-13T14:37:41.593226118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal,Uid:6816be5f37c3a449b117df344778015b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b784a121a1d0e4f07b3afe017a9383c6181c7ddad9319f5ef1d98bb7327a9761\""
Dec 13 14:37:41.598070 env[1141]: time="2024-12-13T14:37:41.598041776Z" level=info msg="CreateContainer within sandbox \"b784a121a1d0e4f07b3afe017a9383c6181c7ddad9319f5ef1d98bb7327a9761\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 14:37:41.619261 env[1141]: time="2024-12-13T14:37:41.619190923Z" level=info msg="CreateContainer within sandbox \"d9a05a2fa3df7fdd6b88052cf6a0b6ab7f09d40671804cd96f51cf4a3a12989a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8c275544874c5103c88586ed71806e2803814075fb07604549b72ae333ce41cf\""
Dec 13 14:37:41.619954 env[1141]: time="2024-12-13T14:37:41.619925062Z" level=info msg="StartContainer for \"8c275544874c5103c88586ed71806e2803814075fb07604549b72ae333ce41cf\""
Dec 13 14:37:41.629022 env[1141]: time="2024-12-13T14:37:41.628970618Z" level=info msg="CreateContainer within sandbox \"6e281d2b620474da6d79a55ebc2bc027f3ce141b70239a84e31be326ba3e20b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6cc8f768400564df258c315df05107a0f565ce6a0c17dd2bade9005ece0ad0d6\""
Dec 13 14:37:41.629652 env[1141]: time="2024-12-13T14:37:41.629625159Z" level=info msg="StartContainer for \"6cc8f768400564df258c315df05107a0f565ce6a0c17dd2bade9005ece0ad0d6\""
Dec 13 14:37:41.637119 env[1141]: time="2024-12-13T14:37:41.637077591Z" level=info msg="CreateContainer within sandbox \"b784a121a1d0e4f07b3afe017a9383c6181c7ddad9319f5ef1d98bb7327a9761\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"25a36630ce2637b41fea8d11f231ac0c945b126e231e126134226613b0ee2148\""
Dec 13 14:37:41.637905 env[1141]: time="2024-12-13T14:37:41.637853900Z" level=info msg="StartContainer for \"25a36630ce2637b41fea8d11f231ac0c945b126e231e126134226613b0ee2148\""
Dec 13 14:37:41.642772 systemd[1]: Started cri-containerd-8c275544874c5103c88586ed71806e2803814075fb07604549b72ae333ce41cf.scope.
Dec 13 14:37:41.669108 systemd[1]: Started cri-containerd-6cc8f768400564df258c315df05107a0f565ce6a0c17dd2bade9005ece0ad0d6.scope.
Dec 13 14:37:41.681120 systemd[1]: Started cri-containerd-25a36630ce2637b41fea8d11f231ac0c945b126e231e126134226613b0ee2148.scope.
Dec 13 14:37:41.739551 env[1141]: time="2024-12-13T14:37:41.737679003Z" level=info msg="StartContainer for \"8c275544874c5103c88586ed71806e2803814075fb07604549b72ae333ce41cf\" returns successfully"
Dec 13 14:37:41.762076 env[1141]: time="2024-12-13T14:37:41.762027221Z" level=info msg="StartContainer for \"6cc8f768400564df258c315df05107a0f565ce6a0c17dd2bade9005ece0ad0d6\" returns successfully"
Dec 13 14:37:41.777632 env[1141]: time="2024-12-13T14:37:41.777571587Z" level=info msg="StartContainer for \"25a36630ce2637b41fea8d11f231ac0c945b126e231e126134226613b0ee2148\" returns successfully"
Dec 13 14:37:42.137374 kubelet[1633]: E1213 14:37:42.137330 1633 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.236:6443: connect: connection refused
Dec 13 14:37:43.161050 kubelet[1633]: I1213 14:37:43.160977 1633 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:44.471695 kubelet[1633]: E1213 14:37:44.471623 1633 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-c-262737d7bc.novalocal\" not found" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:44.564406 kubelet[1633]: I1213 14:37:44.564367 1633 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-c-262737d7bc.novalocal"
Dec 13 14:37:44.978074 kubelet[1633]: I1213 14:37:44.978004 1633 apiserver.go:52] "Watching apiserver"
Dec 13 14:37:45.042475 kubelet[1633]: I1213 14:37:45.042416 1633 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:37:46.242620 kubelet[1633]: W1213 14:37:46.242542 1633 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Dec 13 14:37:48.573688 systemd[1]: Reloading.
Dec 13 14:37:48.714758 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-12-13T14:37:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:37:48.714790 /usr/lib/systemd/system-generators/torcx-generator[1921]: time="2024-12-13T14:37:48Z" level=info msg="torcx already run"
Dec 13 14:37:48.779686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:37:48.779708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:37:48.805492 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:37:48.949419 systemd[1]: Stopping kubelet.service...
Dec 13 14:37:48.965356 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:37:48.965661 systemd[1]: Stopped kubelet.service.
Dec 13 14:37:48.965715 systemd[1]: kubelet.service: Consumed 1.472s CPU time.
Dec 13 14:37:48.968084 systemd[1]: Starting kubelet.service...
Dec 13 14:37:51.574282 systemd[1]: Started kubelet.service.
Dec 13 14:37:51.737330 kubelet[1972]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:37:51.737694 kubelet[1972]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:37:51.737753 kubelet[1972]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:37:51.737904 kubelet[1972]: I1213 14:37:51.737874 1972 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:37:51.741681 sudo[1984]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:37:51.741917 sudo[1984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:37:51.744431 kubelet[1972]: I1213 14:37:51.744398 1972 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:37:51.744431 kubelet[1972]: I1213 14:37:51.744426 1972 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:37:51.744690 kubelet[1972]: I1213 14:37:51.744674 1972 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:37:51.749093 kubelet[1972]: I1213 14:37:51.749045 1972 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
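The certificate_store line above points at kubelet-client-current.pem, which (on nodes like this one) holds the client certificate and its key concatenated in a single PEM file, so both halves can be loaded from the same path. A sketch in Go of inspecting that pair; the combined-PEM layout and readability of the path are assumptions about the node, not something this log states directly:

    // certload.go - load the kubelet's rotated client cert/key pair and print
    // its subject and expiry; rotation replaces the file before NotAfter.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        const pair = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        // Passing the same file twice works when cert and key share one PEM.
        cert, err := tls.LoadX509KeyPair(pair, pair)
        if err != nil {
            log.Fatal(err)
        }
        leaf, err := x509.ParseCertificate(cert.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("client cert subject:", leaf.Subject)
        fmt.Println("valid until:", leaf.NotAfter)
    }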
Dec 13 14:37:51.751140 kubelet[1972]: I1213 14:37:51.751117 1972 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:37:51.764077 kubelet[1972]: I1213 14:37:51.764019 1972 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:37:51.764279 kubelet[1972]: I1213 14:37:51.764254 1972 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:37:51.764541 kubelet[1972]: I1213 14:37:51.764515 1972 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:37:51.764646 kubelet[1972]: I1213 14:37:51.764547 1972 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:37:51.764646 kubelet[1972]: I1213 14:37:51.764560 1972 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:37:51.764646 kubelet[1972]: I1213 14:37:51.764593 1972 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:37:51.764751 kubelet[1972]: I1213 14:37:51.764683 1972 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:37:51.764751 kubelet[1972]: I1213 14:37:51.764700 1972 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:37:51.772444 kubelet[1972]: I1213 14:37:51.769680 1972 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:37:51.772444 kubelet[1972]: I1213 14:37:51.769707 1972 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:37:51.772444 kubelet[1972]: I1213 14:37:51.770440 1972 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:37:51.772444 kubelet[1972]: I1213 14:37:51.770618 1972 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:37:51.772444 kubelet[1972]: I1213 14:37:51.770960 1972 server.go:1256] "Started kubelet" Dec 13 14:37:51.776099 kubelet[1972]: I1213 14:37:51.776076 1972 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:37:51.780431 kubelet[1972]: I1213 14:37:51.780410 1972 
server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:37:51.781119 kubelet[1972]: I1213 14:37:51.781100 1972 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:37:51.785601 kubelet[1972]: I1213 14:37:51.785569 1972 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:37:51.785888 kubelet[1972]: I1213 14:37:51.785868 1972 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:37:51.786216 kubelet[1972]: I1213 14:37:51.786193 1972 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:37:51.786316 kubelet[1972]: I1213 14:37:51.786303 1972 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:37:51.786413 kubelet[1972]: I1213 14:37:51.786333 1972 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:37:51.789339 kubelet[1972]: I1213 14:37:51.788735 1972 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:37:51.789339 kubelet[1972]: I1213 14:37:51.788812 1972 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:37:51.799590 kubelet[1972]: I1213 14:37:51.799550 1972 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:37:51.804003 kubelet[1972]: E1213 14:37:51.801532 1972 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:37:51.853862 kubelet[1972]: I1213 14:37:51.853768 1972 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:37:51.858335 kubelet[1972]: I1213 14:37:51.858318 1972 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:37:51.858495 kubelet[1972]: I1213 14:37:51.858456 1972 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:37:51.858585 kubelet[1972]: I1213 14:37:51.858573 1972 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:37:51.858693 kubelet[1972]: E1213 14:37:51.858682 1972 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:37:51.864083 kubelet[1972]: I1213 14:37:51.864064 1972 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:37:51.864207 kubelet[1972]: I1213 14:37:51.864198 1972 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:37:51.864270 kubelet[1972]: I1213 14:37:51.864262 1972 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:37:51.865303 kubelet[1972]: I1213 14:37:51.865289 1972 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:37:51.865408 kubelet[1972]: I1213 14:37:51.865396 1972 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:37:51.865540 kubelet[1972]: I1213 14:37:51.865528 1972 policy_none.go:49] "None policy: Start" Dec 13 14:37:51.868186 kubelet[1972]: I1213 14:37:51.868167 1972 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:37:51.868297 kubelet[1972]: I1213 14:37:51.868288 1972 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:37:51.868535 kubelet[1972]: I1213 14:37:51.868523 1972 state_mem.go:75] "Updated machine memory state" Dec 13 14:37:51.880035 kubelet[1972]: I1213 14:37:51.880015 1972 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:37:51.880380 kubelet[1972]: I1213 14:37:51.880368 1972 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:37:51.900618 kubelet[1972]: I1213 14:37:51.900598 1972 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.910381 kubelet[1972]: I1213 14:37:51.910334 1972 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.910598 kubelet[1972]: I1213 14:37:51.910571 1972 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.959357 kubelet[1972]: I1213 14:37:51.959292 1972 topology_manager.go:215] "Topology Admit Handler" podUID="6816be5f37c3a449b117df344778015b" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.959515 kubelet[1972]: I1213 14:37:51.959386 1972 topology_manager.go:215] "Topology Admit Handler" podUID="99b0121a4502ff72be62cd3d1e6c6472" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.959515 kubelet[1972]: I1213 14:37:51.959441 1972 topology_manager.go:215] "Topology Admit Handler" podUID="41f448f398e5248ad6e04262b6da4f88" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.973969 kubelet[1972]: W1213 14:37:51.972910 1972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:37:51.973969 kubelet[1972]: E1213 14:37:51.972979 1972 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal\" already exists" 
pod="kube-system/kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.975208 kubelet[1972]: W1213 14:37:51.975184 1972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:37:51.975769 kubelet[1972]: W1213 14:37:51.975747 1972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:37:51.994410 kubelet[1972]: I1213 14:37:51.994358 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994410 kubelet[1972]: I1213 14:37:51.994411 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994624 kubelet[1972]: I1213 14:37:51.994439 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41f448f398e5248ad6e04262b6da4f88-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"41f448f398e5248ad6e04262b6da4f88\") " pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994624 kubelet[1972]: I1213 14:37:51.994479 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994624 kubelet[1972]: I1213 14:37:51.994507 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994624 kubelet[1972]: I1213 14:37:51.994531 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/99b0121a4502ff72be62cd3d1e6c6472-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"99b0121a4502ff72be62cd3d1e6c6472\") " pod="kube-system/kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994830 kubelet[1972]: I1213 14:37:51.994811 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: 
\"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994878 kubelet[1972]: I1213 14:37:51.994849 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:51.994878 kubelet[1972]: I1213 14:37:51.994874 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6816be5f37c3a449b117df344778015b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal\" (UID: \"6816be5f37c3a449b117df344778015b\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:52.691550 sudo[1984]: pam_unix(sudo:session): session closed for user root Dec 13 14:37:52.770529 kubelet[1972]: I1213 14:37:52.770493 1972 apiserver.go:52] "Watching apiserver" Dec 13 14:37:52.786976 kubelet[1972]: I1213 14:37:52.786942 1972 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:37:52.920814 kubelet[1972]: W1213 14:37:52.914253 1972 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 14:37:52.920814 kubelet[1972]: E1213 14:37:52.914314 1972 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" Dec 13 14:37:52.964771 kubelet[1972]: I1213 14:37:52.964652 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-c-262737d7bc.novalocal" podStartSLOduration=1.964586851 podStartE2EDuration="1.964586851s" podCreationTimestamp="2024-12-13 14:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:37:52.952864738 +0000 UTC m=+1.333774227" watchObservedRunningTime="2024-12-13 14:37:52.964586851 +0000 UTC m=+1.345496350" Dec 13 14:37:52.978872 kubelet[1972]: I1213 14:37:52.978845 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-c-262737d7bc.novalocal" podStartSLOduration=1.978785174 podStartE2EDuration="1.978785174s" podCreationTimestamp="2024-12-13 14:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:37:52.965283099 +0000 UTC m=+1.346192588" watchObservedRunningTime="2024-12-13 14:37:52.978785174 +0000 UTC m=+1.359694683" Dec 13 14:37:53.022880 kubelet[1972]: I1213 14:37:53.022855 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-c-262737d7bc.novalocal" podStartSLOduration=7.022791079 podStartE2EDuration="7.022791079s" podCreationTimestamp="2024-12-13 14:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:37:52.980136391 +0000 UTC m=+1.361045891" 
watchObservedRunningTime="2024-12-13 14:37:53.022791079 +0000 UTC m=+1.403700568" Dec 13 14:37:56.354277 sudo[1283]: pam_unix(sudo:session): session closed for user root Dec 13 14:37:56.580203 sshd[1270]: pam_unix(sshd:session): session closed for user core Dec 13 14:37:56.587305 systemd[1]: sshd@6-172.24.4.236:22-172.24.4.1:49622.service: Deactivated successfully. Dec 13 14:37:56.588940 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:37:56.589250 systemd[1]: session-7.scope: Consumed 9.124s CPU time. Dec 13 14:37:56.590915 systemd-logind[1130]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:37:56.593623 systemd-logind[1130]: Removed session 7. Dec 13 14:38:00.773089 kubelet[1972]: I1213 14:38:00.773029 1972 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:38:00.773819 env[1141]: time="2024-12-13T14:38:00.773678065Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:38:00.774546 kubelet[1972]: I1213 14:38:00.773897 1972 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:38:00.999550 kubelet[1972]: I1213 14:38:00.999496 1972 topology_manager.go:215] "Topology Admit Handler" podUID="0bd0b456-ea68-4a37-9b75-4d5ee08695c7" podNamespace="kube-system" podName="kube-proxy-5jcx6" Dec 13 14:38:01.016155 systemd[1]: Created slice kubepods-besteffort-pod0bd0b456_ea68_4a37_9b75_4d5ee08695c7.slice. Dec 13 14:38:01.017970 kubelet[1972]: W1213 14:38:01.017936 1972 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-c-262737d7bc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-c-262737d7bc.novalocal' and this object Dec 13 14:38:01.018374 kubelet[1972]: W1213 14:38:01.018324 1972 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-c-262737d7bc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-c-262737d7bc.novalocal' and this object Dec 13 14:38:01.018721 kubelet[1972]: E1213 14:38:01.018690 1972 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-6-c-262737d7bc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-c-262737d7bc.novalocal' and this object Dec 13 14:38:01.018930 kubelet[1972]: E1213 14:38:01.018357 1972 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-6-c-262737d7bc.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-c-262737d7bc.novalocal' and this object Dec 13 14:38:01.033724 kubelet[1972]: I1213 14:38:01.033626 1972 topology_manager.go:215] "Topology Admit Handler" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" podNamespace="kube-system" podName="cilium-k7dkv" Dec 13 14:38:01.039233 systemd[1]: Created slice 
kubepods-burstable-podb0f4caef_c401_4811_af0e_59ec927cc320.slice. Dec 13 14:38:01.064779 kubelet[1972]: I1213 14:38:01.064737 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-hubble-tls\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.065029 kubelet[1972]: I1213 14:38:01.065016 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6qd\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.065155 kubelet[1972]: I1213 14:38:01.065143 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-xtables-lock\") pod \"kube-proxy-5jcx6\" (UID: \"0bd0b456-ea68-4a37-9b75-4d5ee08695c7\") " pod="kube-system/kube-proxy-5jcx6" Dec 13 14:38:01.065282 kubelet[1972]: I1213 14:38:01.065269 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-hostproc\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.065418 kubelet[1972]: I1213 14:38:01.065407 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-etc-cni-netd\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.065584 kubelet[1972]: I1213 14:38:01.065572 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-net\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.065734 kubelet[1972]: I1213 14:38:01.065721 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-kube-proxy\") pod \"kube-proxy-5jcx6\" (UID: \"0bd0b456-ea68-4a37-9b75-4d5ee08695c7\") " pod="kube-system/kube-proxy-5jcx6" Dec 13 14:38:01.065888 kubelet[1972]: I1213 14:38:01.065876 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0f4caef-c401-4811-af0e-59ec927cc320-clustermesh-secrets\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066021 kubelet[1972]: I1213 14:38:01.066010 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-config-path\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066145 kubelet[1972]: I1213 14:38:01.066135 1972 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-run\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066450 kubelet[1972]: I1213 14:38:01.066423 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-bpf-maps\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066590 kubelet[1972]: I1213 14:38:01.066578 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-lib-modules\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066702 kubelet[1972]: I1213 14:38:01.066691 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g488w\" (UniqueName: \"kubernetes.io/projected/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-kube-api-access-g488w\") pod \"kube-proxy-5jcx6\" (UID: \"0bd0b456-ea68-4a37-9b75-4d5ee08695c7\") " pod="kube-system/kube-proxy-5jcx6" Dec 13 14:38:01.066802 kubelet[1972]: I1213 14:38:01.066791 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-cgroup\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.066916 kubelet[1972]: I1213 14:38:01.066905 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-lib-modules\") pod \"kube-proxy-5jcx6\" (UID: \"0bd0b456-ea68-4a37-9b75-4d5ee08695c7\") " pod="kube-system/kube-proxy-5jcx6" Dec 13 14:38:01.067053 kubelet[1972]: I1213 14:38:01.067040 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cni-path\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.067151 kubelet[1972]: I1213 14:38:01.067140 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-xtables-lock\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.067250 kubelet[1972]: I1213 14:38:01.067238 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-kernel\") pod \"cilium-k7dkv\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " pod="kube-system/cilium-k7dkv" Dec 13 14:38:01.846305 kubelet[1972]: I1213 14:38:01.846249 1972 topology_manager.go:215] "Topology Admit Handler" podUID="3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" podNamespace="kube-system" podName="cilium-operator-5cc964979-nlb95" Dec 13 14:38:01.856445 systemd[1]: Created slice 
kubepods-besteffort-pod3b7c4a47_bc9b_44c0_8dd0_788ddefcdda4.slice. Dec 13 14:38:01.872997 kubelet[1972]: I1213 14:38:01.872950 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hp4\" (UniqueName: \"kubernetes.io/projected/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-kube-api-access-h5hp4\") pod \"cilium-operator-5cc964979-nlb95\" (UID: \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\") " pod="kube-system/cilium-operator-5cc964979-nlb95" Dec 13 14:38:01.873143 kubelet[1972]: I1213 14:38:01.873087 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-cilium-config-path\") pod \"cilium-operator-5cc964979-nlb95\" (UID: \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\") " pod="kube-system/cilium-operator-5cc964979-nlb95" Dec 13 14:38:02.213457 kubelet[1972]: E1213 14:38:02.212948 1972 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.213778 kubelet[1972]: E1213 14:38:02.213745 1972 projected.go:200] Error preparing data for projected volume kube-api-access-8p6qd for pod kube-system/cilium-k7dkv: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.214620 kubelet[1972]: E1213 14:38:02.214570 1972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd podName:b0f4caef-c401-4811-af0e-59ec927cc320 nodeName:}" failed. No retries permitted until 2024-12-13 14:38:02.714057481 +0000 UTC m=+11.094967020 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8p6qd" (UniqueName: "kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd") pod "cilium-k7dkv" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.218797 kubelet[1972]: E1213 14:38:02.218726 1972 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.218797 kubelet[1972]: E1213 14:38:02.218793 1972 projected.go:200] Error preparing data for projected volume kube-api-access-g488w for pod kube-system/kube-proxy-5jcx6: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.219020 kubelet[1972]: E1213 14:38:02.218922 1972 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-kube-api-access-g488w podName:0bd0b456-ea68-4a37-9b75-4d5ee08695c7 nodeName:}" failed. No retries permitted until 2024-12-13 14:38:02.718882504 +0000 UTC m=+11.099792043 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g488w" (UniqueName: "kubernetes.io/projected/0bd0b456-ea68-4a37-9b75-4d5ee08695c7-kube-api-access-g488w") pod "kube-proxy-5jcx6" (UID: "0bd0b456-ea68-4a37-9b75-4d5ee08695c7") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:38:02.767743 env[1141]: time="2024-12-13T14:38:02.767045427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nlb95,Uid:3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4,Namespace:kube-system,Attempt:0,}" Dec 13 14:38:02.833538 env[1141]: time="2024-12-13T14:38:02.833256604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcx6,Uid:0bd0b456-ea68-4a37-9b75-4d5ee08695c7,Namespace:kube-system,Attempt:0,}" Dec 13 14:38:02.846229 env[1141]: time="2024-12-13T14:38:02.846111385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7dkv,Uid:b0f4caef-c401-4811-af0e-59ec927cc320,Namespace:kube-system,Attempt:0,}" Dec 13 14:38:03.240014 env[1141]: time="2024-12-13T14:38:03.239435827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:38:03.240245 env[1141]: time="2024-12-13T14:38:03.239570561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:38:03.240245 env[1141]: time="2024-12-13T14:38:03.239618992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:38:03.240245 env[1141]: time="2024-12-13T14:38:03.239917541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba pid=2055 runtime=io.containerd.runc.v2 Dec 13 14:38:03.257107 env[1141]: time="2024-12-13T14:38:03.256909494Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:38:03.257379 env[1141]: time="2024-12-13T14:38:03.257007027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:38:03.257379 env[1141]: time="2024-12-13T14:38:03.257039197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:38:03.258183 env[1141]: time="2024-12-13T14:38:03.258032351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203 pid=2066 runtime=io.containerd.runc.v2 Dec 13 14:38:03.261047 env[1141]: time="2024-12-13T14:38:03.260961286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:38:03.261149 env[1141]: time="2024-12-13T14:38:03.261045895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:38:03.261149 env[1141]: time="2024-12-13T14:38:03.261088424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:38:03.261419 env[1141]: time="2024-12-13T14:38:03.261340739Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd93c50a9eacc99a7ae24f4d44f04762e8764f36c72cdb4134493c5c95b42f09 pid=2085 runtime=io.containerd.runc.v2 Dec 13 14:38:03.287269 systemd[1]: Started cri-containerd-8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba.scope. Dec 13 14:38:03.302272 systemd[1]: Started cri-containerd-cd93c50a9eacc99a7ae24f4d44f04762e8764f36c72cdb4134493c5c95b42f09.scope. Dec 13 14:38:03.307305 systemd[1]: Started cri-containerd-39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203.scope. Dec 13 14:38:03.340695 env[1141]: time="2024-12-13T14:38:03.340640119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k7dkv,Uid:b0f4caef-c401-4811-af0e-59ec927cc320,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\"" Dec 13 14:38:03.354936 env[1141]: time="2024-12-13T14:38:03.354877444Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:38:03.380507 env[1141]: time="2024-12-13T14:38:03.380370867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5jcx6,Uid:0bd0b456-ea68-4a37-9b75-4d5ee08695c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd93c50a9eacc99a7ae24f4d44f04762e8764f36c72cdb4134493c5c95b42f09\"" Dec 13 14:38:03.388569 env[1141]: time="2024-12-13T14:38:03.388490332Z" level=info msg="CreateContainer within sandbox \"cd93c50a9eacc99a7ae24f4d44f04762e8764f36c72cdb4134493c5c95b42f09\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:38:03.403864 env[1141]: time="2024-12-13T14:38:03.403795049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-nlb95,Uid:3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4,Namespace:kube-system,Attempt:0,} returns sandbox id \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\"" Dec 13 14:38:03.426797 env[1141]: time="2024-12-13T14:38:03.426722277Z" level=info msg="CreateContainer within sandbox \"cd93c50a9eacc99a7ae24f4d44f04762e8764f36c72cdb4134493c5c95b42f09\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aacfc04de9dcaab1d074effaacb923a7f56dc4c6c7a461d18cca72855492332c\"" Dec 13 14:38:03.427529 env[1141]: time="2024-12-13T14:38:03.427440385Z" level=info msg="StartContainer for \"aacfc04de9dcaab1d074effaacb923a7f56dc4c6c7a461d18cca72855492332c\"" Dec 13 14:38:03.453056 systemd[1]: Started cri-containerd-aacfc04de9dcaab1d074effaacb923a7f56dc4c6c7a461d18cca72855492332c.scope. 
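[Note on the MountVolume.SetUp failures a few entries above: the kubelet retries failed volume operations on a doubling schedule, and the log's "(durationBeforeRetry 500ms)" is the first step of it. A minimal Go sketch of that schedule follows; the 500ms start comes straight from the log, while the 2m2s ceiling is an assumed kubelet default, not something these lines show.]

```go
package main

import (
	"fmt"
	"time"
)

// Doubling retry schedule suggested by the kubelet's
// "(durationBeforeRetry 500ms)" message above. The 500ms start is taken
// from the log; the 2m2s ceiling is an assumed default, not shown here.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second // assumption
)

func nextDelay(current time.Duration) time.Duration {
	if current == 0 {
		return initialDelay
	}
	if doubled := current * 2; doubled < maxDelay {
		return doubled
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 1; i <= 10; i++ {
		d = nextDelay(d)
		fmt.Printf("retry %d gated for %v\n", i, d)
	}
}
```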
Dec 13 14:38:03.502752 env[1141]: time="2024-12-13T14:38:03.502698894Z" level=info msg="StartContainer for \"aacfc04de9dcaab1d074effaacb923a7f56dc4c6c7a461d18cca72855492332c\" returns successfully" Dec 13 14:38:03.955195 kubelet[1972]: I1213 14:38:03.954734 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5jcx6" podStartSLOduration=3.954613984 podStartE2EDuration="3.954613984s" podCreationTimestamp="2024-12-13 14:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:38:03.944523301 +0000 UTC m=+12.325432840" watchObservedRunningTime="2024-12-13 14:38:03.954613984 +0000 UTC m=+12.335523523" Dec 13 14:38:16.810968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018713917.mount: Deactivated successfully. Dec 13 14:38:22.183364 env[1141]: time="2024-12-13T14:38:22.183255957Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:22.192652 env[1141]: time="2024-12-13T14:38:22.192586347Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:22.197866 env[1141]: time="2024-12-13T14:38:22.197784546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:22.201145 env[1141]: time="2024-12-13T14:38:22.199375947Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:38:22.207526 env[1141]: time="2024-12-13T14:38:22.204925374Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:38:22.210260 env[1141]: time="2024-12-13T14:38:22.210192191Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:38:22.242685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919632696.mount: Deactivated successfully. Dec 13 14:38:22.260725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337910348.mount: Deactivated successfully. Dec 13 14:38:22.264111 env[1141]: time="2024-12-13T14:38:22.264036065Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\"" Dec 13 14:38:22.266551 env[1141]: time="2024-12-13T14:38:22.265790172Z" level=info msg="StartContainer for \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\"" Dec 13 14:38:22.304218 systemd[1]: Started cri-containerd-0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3.scope. 
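[The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window, which is zero here because both pull timestamps are the zero time. A quick Go cross-check using the kube-proxy-5jcx6 values from the log:]

```go
package main

import (
	"fmt"
	"time"
)

// Cross-check of the pod_startup_latency_tracker arithmetic using the
// kube-proxy-5jcx6 values above. With no image pull recorded (both pull
// timestamps are the zero time "0001-01-01"), podStartSLOduration equals
// podStartE2EDuration: observed running time minus creation time.
func main() {
	creation, _ := time.Parse(time.RFC3339Nano, "2024-12-13T14:38:00Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2024-12-13T14:38:03.954613984Z")

	e2e := observed.Sub(creation)
	pullWindow := time.Duration(0) // no pull: both pull timestamps are zero
	slo := e2e - pullWindow

	fmt.Println(e2e, slo) // both print 3.954613984s, matching the log
}
```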
Dec 13 14:38:22.351344 env[1141]: time="2024-12-13T14:38:22.351257583Z" level=info msg="StartContainer for \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\" returns successfully" Dec 13 14:38:22.364593 systemd[1]: cri-containerd-0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3.scope: Deactivated successfully. Dec 13 14:38:23.234871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3-rootfs.mount: Deactivated successfully. Dec 13 14:38:23.307895 env[1141]: time="2024-12-13T14:38:23.307797738Z" level=info msg="shim disconnected" id=0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3 Dec 13 14:38:23.307895 env[1141]: time="2024-12-13T14:38:23.307891203Z" level=warning msg="cleaning up after shim disconnected" id=0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3 namespace=k8s.io Dec 13 14:38:23.307895 env[1141]: time="2024-12-13T14:38:23.307917483Z" level=info msg="cleaning up dead shim" Dec 13 14:38:23.336977 env[1141]: time="2024-12-13T14:38:23.336852453Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:38:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2376 runtime=io.containerd.runc.v2\n" Dec 13 14:38:24.335223 env[1141]: time="2024-12-13T14:38:24.335145969Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:38:24.379358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200172535.mount: Deactivated successfully. Dec 13 14:38:24.400070 env[1141]: time="2024-12-13T14:38:24.399970743Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\"" Dec 13 14:38:24.401613 env[1141]: time="2024-12-13T14:38:24.401526238Z" level=info msg="StartContainer for \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\"" Dec 13 14:38:24.436413 systemd[1]: Started cri-containerd-f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a.scope. Dec 13 14:38:24.469670 env[1141]: time="2024-12-13T14:38:24.469604408Z" level=info msg="StartContainer for \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\" returns successfully" Dec 13 14:38:24.480099 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:38:24.480389 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:38:24.480992 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:38:24.482752 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:38:24.486679 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:38:24.490305 systemd[1]: cri-containerd-f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a.scope: Deactivated successfully. Dec 13 14:38:24.515854 systemd[1]: Finished systemd-sysctl.service. 
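[From here on, each Cilium init container (mount-cgroup above, then apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) leaves the same trace: CreateContainer, StartContainer, the cri-containerd-<id>.scope deactivating, a "shim disconnected" cleanup, and a rootfs unmount. A small Go sketch that splits journal lines of this shape into timestamp, unit, pid, and message so those events can be paired; the field layout is an assumption read off these lines, not a journald API:]

```go
package main

import (
	"fmt"
	"regexp"
)

// Minimal parser for journal lines of the shape seen above, e.g.
//   Dec 13 14:38:22.364593 systemd[1]: cri-containerd-....scope: Deactivated successfully.
// The layout is inferred from these lines only; it is not a journald API.
var lineRe = regexp.MustCompile(`^(\w{3} \d{1,2} [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

type entry struct {
	stamp, unit, pid, msg string
}

func parse(line string) (entry, bool) {
	m := lineRe.FindStringSubmatch(line)
	if m == nil {
		return entry{}, false
	}
	return entry{stamp: m[1], unit: m[2], pid: m[3], msg: m[4]}, true
}

func main() {
	e, ok := parse(`Dec 13 14:38:22.364593 systemd[1]: cri-containerd-0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3.scope: Deactivated successfully.`)
	fmt.Println(ok, e.unit, e.msg)
}
```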
Dec 13 14:38:24.536990 env[1141]: time="2024-12-13T14:38:24.536932251Z" level=info msg="shim disconnected" id=f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a Dec 13 14:38:24.537178 env[1141]: time="2024-12-13T14:38:24.536998586Z" level=warning msg="cleaning up after shim disconnected" id=f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a namespace=k8s.io Dec 13 14:38:24.537178 env[1141]: time="2024-12-13T14:38:24.537014535Z" level=info msg="cleaning up dead shim" Dec 13 14:38:24.544483 env[1141]: time="2024-12-13T14:38:24.544419770Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:38:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2441 runtime=io.containerd.runc.v2\n" Dec 13 14:38:25.364543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a-rootfs.mount: Deactivated successfully. Dec 13 14:38:25.374568 env[1141]: time="2024-12-13T14:38:25.374531170Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:38:25.408730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918880027.mount: Deactivated successfully. Dec 13 14:38:25.440269 env[1141]: time="2024-12-13T14:38:25.440224956Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\"" Dec 13 14:38:25.442568 env[1141]: time="2024-12-13T14:38:25.440969852Z" level=info msg="StartContainer for \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\"" Dec 13 14:38:25.504380 systemd[1]: Started cri-containerd-bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb.scope. Dec 13 14:38:25.659987 env[1141]: time="2024-12-13T14:38:25.659876241Z" level=info msg="StartContainer for \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\" returns successfully" Dec 13 14:38:25.664481 systemd[1]: cri-containerd-bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb.scope: Deactivated successfully. Dec 13 14:38:25.753451 env[1141]: time="2024-12-13T14:38:25.753335049Z" level=info msg="shim disconnected" id=bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb Dec 13 14:38:25.753955 env[1141]: time="2024-12-13T14:38:25.753910827Z" level=warning msg="cleaning up after shim disconnected" id=bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb namespace=k8s.io Dec 13 14:38:25.754201 env[1141]: time="2024-12-13T14:38:25.754130899Z" level=info msg="cleaning up dead shim" Dec 13 14:38:25.779176 env[1141]: time="2024-12-13T14:38:25.779096486Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2501 runtime=io.containerd.runc.v2\n" Dec 13 14:38:26.365170 systemd[1]: run-containerd-runc-k8s.io-bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb-runc.IOL9Y3.mount: Deactivated successfully. Dec 13 14:38:26.365446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb-rootfs.mount: Deactivated successfully. 
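[The recurring var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units above illustrate systemd's unit-name escaping: the leading '/' is dropped, path separators become '-', and a literal '-' inside a component is hex-escaped as \x2d. A rough Go rendition of just those two rules (real systemd-escape handles more cases):]

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified sketch of systemd mount-unit naming as seen above:
//   /var/lib/containerd/tmpmounts/containerd-mount1918880027
//     -> var-lib-containerd-tmpmounts-containerd\x2dmount1918880027.mount
// Real systemd-escape also hex-escapes other bytes; this handles only
// '/' and '-', which is all these log lines exercise.
func mountUnit(path string) string {
	p := strings.TrimPrefix(path, "/")
	var b strings.Builder
	for _, r := range p {
		switch r {
		case '/':
			b.WriteByte('-')
		case '-':
			b.WriteString(`\x2d`)
		default:
			b.WriteRune(r)
		}
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(mountUnit("/var/lib/containerd/tmpmounts/containerd-mount1918880027"))
}
```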
Dec 13 14:38:26.392821 env[1141]: time="2024-12-13T14:38:26.392746650Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:38:26.861575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154702216.mount: Deactivated successfully. Dec 13 14:38:26.879678 env[1141]: time="2024-12-13T14:38:26.879386749Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\"" Dec 13 14:38:26.882038 env[1141]: time="2024-12-13T14:38:26.881845166Z" level=info msg="StartContainer for \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\"" Dec 13 14:38:26.939105 systemd[1]: Started cri-containerd-751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6.scope. Dec 13 14:38:26.993566 env[1141]: time="2024-12-13T14:38:26.992628589Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:26.993798 systemd[1]: cri-containerd-751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6.scope: Deactivated successfully. Dec 13 14:38:27.000041 env[1141]: time="2024-12-13T14:38:26.999976886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:27.000266 env[1141]: time="2024-12-13T14:38:26.999898920Z" level=info msg="StartContainer for \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\" returns successfully" Dec 13 14:38:27.002390 env[1141]: time="2024-12-13T14:38:26.999627532Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0f4caef_c401_4811_af0e_59ec927cc320.slice/cri-containerd-751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6.scope/memory.events\": no such file or directory" Dec 13 14:38:27.004973 env[1141]: time="2024-12-13T14:38:27.004933434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:38:27.005610 env[1141]: time="2024-12-13T14:38:27.005571550Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:38:27.009600 env[1141]: time="2024-12-13T14:38:27.009560854Z" level=info msg="CreateContainer within sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:38:27.035442 env[1141]: time="2024-12-13T14:38:27.035394390Z" level=info msg="CreateContainer within sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\"" Dec 13 14:38:27.038787 env[1141]: time="2024-12-13T14:38:27.037313976Z" level=info msg="StartContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\"" Dec 13 14:38:27.061634 env[1141]: time="2024-12-13T14:38:27.061585525Z" level=info msg="shim disconnected" id=751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6 Dec 13 14:38:27.061634 env[1141]: time="2024-12-13T14:38:27.061633575Z" level=warning msg="cleaning up after shim disconnected" id=751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6 namespace=k8s.io Dec 13 14:38:27.061882 env[1141]: time="2024-12-13T14:38:27.061644675Z" level=info msg="cleaning up dead shim" Dec 13 14:38:27.069378 systemd[1]: Started cri-containerd-ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5.scope. Dec 13 14:38:27.076679 env[1141]: time="2024-12-13T14:38:27.076642528Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2573 runtime=io.containerd.runc.v2\n" Dec 13 14:38:27.114822 env[1141]: time="2024-12-13T14:38:27.114697673Z" level=info msg="StartContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" returns successfully" Dec 13 14:38:27.363522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6-rootfs.mount: Deactivated successfully. Dec 13 14:38:27.386506 env[1141]: time="2024-12-13T14:38:27.386374693Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:38:27.433681 env[1141]: time="2024-12-13T14:38:27.433622672Z" level=info msg="CreateContainer within sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\"" Dec 13 14:38:27.434341 env[1141]: time="2024-12-13T14:38:27.434307446Z" level=info msg="StartContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\"" Dec 13 14:38:27.461579 systemd[1]: Started cri-containerd-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f.scope. 
Dec 13 14:38:27.519926 env[1141]: time="2024-12-13T14:38:27.519864045Z" level=info msg="StartContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" returns successfully" Dec 13 14:38:27.809864 kubelet[1972]: I1213 14:38:27.809838 1972 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:38:27.976897 kubelet[1972]: I1213 14:38:27.976851 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-nlb95" podStartSLOduration=3.374690673 podStartE2EDuration="26.97465895s" podCreationTimestamp="2024-12-13 14:38:01 +0000 UTC" firstStartedPulling="2024-12-13 14:38:03.405905088 +0000 UTC m=+11.786814577" lastFinishedPulling="2024-12-13 14:38:27.005873365 +0000 UTC m=+35.386782854" observedRunningTime="2024-12-13 14:38:27.43939568 +0000 UTC m=+35.820305179" watchObservedRunningTime="2024-12-13 14:38:27.97465895 +0000 UTC m=+36.355568459" Dec 13 14:38:27.978369 kubelet[1972]: I1213 14:38:27.978346 1972 topology_manager.go:215] "Topology Admit Handler" podUID="5b37c311-6148-437a-8675-b95f6926ff0e" podNamespace="kube-system" podName="coredns-76f75df574-q5kkj" Dec 13 14:38:27.983714 systemd[1]: Created slice kubepods-burstable-pod5b37c311_6148_437a_8675_b95f6926ff0e.slice. Dec 13 14:38:27.994161 kubelet[1972]: I1213 14:38:27.994129 1972 topology_manager.go:215] "Topology Admit Handler" podUID="31af9f32-db11-46d2-abb1-e27ec82111c9" podNamespace="kube-system" podName="coredns-76f75df574-gvs7g" Dec 13 14:38:27.999205 systemd[1]: Created slice kubepods-burstable-pod31af9f32_db11_46d2_abb1_e27ec82111c9.slice. Dec 13 14:38:28.057721 kubelet[1972]: I1213 14:38:28.057687 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78dxw\" (UniqueName: \"kubernetes.io/projected/5b37c311-6148-437a-8675-b95f6926ff0e-kube-api-access-78dxw\") pod \"coredns-76f75df574-q5kkj\" (UID: \"5b37c311-6148-437a-8675-b95f6926ff0e\") " pod="kube-system/coredns-76f75df574-q5kkj" Dec 13 14:38:28.057885 kubelet[1972]: I1213 14:38:28.057749 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b37c311-6148-437a-8675-b95f6926ff0e-config-volume\") pod \"coredns-76f75df574-q5kkj\" (UID: \"5b37c311-6148-437a-8675-b95f6926ff0e\") " pod="kube-system/coredns-76f75df574-q5kkj" Dec 13 14:38:28.159082 kubelet[1972]: I1213 14:38:28.158986 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6skw\" (UniqueName: \"kubernetes.io/projected/31af9f32-db11-46d2-abb1-e27ec82111c9-kube-api-access-q6skw\") pod \"coredns-76f75df574-gvs7g\" (UID: \"31af9f32-db11-46d2-abb1-e27ec82111c9\") " pod="kube-system/coredns-76f75df574-gvs7g" Dec 13 14:38:28.159602 kubelet[1972]: I1213 14:38:28.159343 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31af9f32-db11-46d2-abb1-e27ec82111c9-config-volume\") pod \"coredns-76f75df574-gvs7g\" (UID: \"31af9f32-db11-46d2-abb1-e27ec82111c9\") " pod="kube-system/coredns-76f75df574-gvs7g" Dec 13 14:38:28.288339 env[1141]: time="2024-12-13T14:38:28.287992175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q5kkj,Uid:5b37c311-6148-437a-8675-b95f6926ff0e,Namespace:kube-system,Attempt:0,}" Dec 13 14:38:28.302655 env[1141]: 
time="2024-12-13T14:38:28.302611609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gvs7g,Uid:31af9f32-db11-46d2-abb1-e27ec82111c9,Namespace:kube-system,Attempt:0,}" Dec 13 14:38:28.391189 systemd[1]: run-containerd-runc-k8s.io-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f-runc.znHKXw.mount: Deactivated successfully. Dec 13 14:38:30.902713 systemd-networkd[970]: cilium_host: Link UP Dec 13 14:38:30.904126 systemd-networkd[970]: cilium_net: Link UP Dec 13 14:38:30.907614 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:38:30.910939 systemd-networkd[970]: cilium_net: Gained carrier Dec 13 14:38:30.913128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:38:30.911599 systemd-networkd[970]: cilium_host: Gained carrier Dec 13 14:38:31.148078 systemd-networkd[970]: cilium_vxlan: Link UP Dec 13 14:38:31.148092 systemd-networkd[970]: cilium_vxlan: Gained carrier Dec 13 14:38:31.722925 systemd-networkd[970]: cilium_net: Gained IPv6LL Dec 13 14:38:31.723537 systemd-networkd[970]: cilium_host: Gained IPv6LL Dec 13 14:38:32.151567 kernel: NET: Registered PF_ALG protocol family Dec 13 14:38:33.119139 systemd-networkd[970]: lxc_health: Link UP Dec 13 14:38:33.130610 systemd-networkd[970]: cilium_vxlan: Gained IPv6LL Dec 13 14:38:33.155629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:38:33.138505 systemd-networkd[970]: lxc_health: Gained carrier Dec 13 14:38:33.496587 systemd-networkd[970]: lxceae12e827338: Link UP Dec 13 14:38:33.503507 kernel: eth0: renamed from tmp8992d Dec 13 14:38:33.515503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceae12e827338: link becomes ready Dec 13 14:38:33.515832 systemd-networkd[970]: lxc810e5cc48d63: Link UP Dec 13 14:38:33.516180 systemd-networkd[970]: lxceae12e827338: Gained carrier Dec 13 14:38:33.518688 kernel: eth0: renamed from tmpbebd9 Dec 13 14:38:33.525594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc810e5cc48d63: link becomes ready Dec 13 14:38:33.525558 systemd-networkd[970]: lxc810e5cc48d63: Gained carrier Dec 13 14:38:34.410756 systemd-networkd[970]: lxc_health: Gained IPv6LL Dec 13 14:38:34.858661 systemd-networkd[970]: lxceae12e827338: Gained IPv6LL Dec 13 14:38:34.894581 kubelet[1972]: I1213 14:38:34.894520 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-k7dkv" podStartSLOduration=16.032911067 podStartE2EDuration="34.894406358s" podCreationTimestamp="2024-12-13 14:38:00 +0000 UTC" firstStartedPulling="2024-12-13 14:38:03.342538071 +0000 UTC m=+11.723447570" lastFinishedPulling="2024-12-13 14:38:22.204033322 +0000 UTC m=+30.584942861" observedRunningTime="2024-12-13 14:38:28.481408485 +0000 UTC m=+36.862317984" watchObservedRunningTime="2024-12-13 14:38:34.894406358 +0000 UTC m=+43.275315897" Dec 13 14:38:35.562673 systemd-networkd[970]: lxc810e5cc48d63: Gained IPv6LL Dec 13 14:38:38.107386 env[1141]: time="2024-12-13T14:38:38.107261559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:38:38.107386 env[1141]: time="2024-12-13T14:38:38.107376665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:38:38.108010 env[1141]: time="2024-12-13T14:38:38.107413224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:38:38.108010 env[1141]: time="2024-12-13T14:38:38.107907379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a pid=3154 runtime=io.containerd.runc.v2 Dec 13 14:38:38.108572 env[1141]: time="2024-12-13T14:38:38.108494601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:38:38.108704 env[1141]: time="2024-12-13T14:38:38.108544504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:38:38.108704 env[1141]: time="2024-12-13T14:38:38.108679457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:38:38.114246 env[1141]: time="2024-12-13T14:38:38.113519919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bebd9c24472b7c347eea0207119a4afa4c38b27da3ff20c5dd283c0f959af438 pid=3155 runtime=io.containerd.runc.v2 Dec 13 14:38:38.142120 systemd[1]: run-containerd-runc-k8s.io-8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a-runc.lkexwe.mount: Deactivated successfully. Dec 13 14:38:38.143960 systemd[1]: Started cri-containerd-8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a.scope. Dec 13 14:38:38.161750 systemd[1]: Started cri-containerd-bebd9c24472b7c347eea0207119a4afa4c38b27da3ff20c5dd283c0f959af438.scope. Dec 13 14:38:38.248958 env[1141]: time="2024-12-13T14:38:38.248907850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gvs7g,Uid:31af9f32-db11-46d2-abb1-e27ec82111c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a\"" Dec 13 14:38:38.254303 env[1141]: time="2024-12-13T14:38:38.254255603Z" level=info msg="CreateContainer within sandbox \"8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:38:38.257561 env[1141]: time="2024-12-13T14:38:38.257521023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q5kkj,Uid:5b37c311-6148-437a-8675-b95f6926ff0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bebd9c24472b7c347eea0207119a4afa4c38b27da3ff20c5dd283c0f959af438\"" Dec 13 14:38:38.260697 env[1141]: time="2024-12-13T14:38:38.260569136Z" level=info msg="CreateContainer within sandbox \"bebd9c24472b7c347eea0207119a4afa4c38b27da3ff20c5dd283c0f959af438\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:38:38.291605 env[1141]: time="2024-12-13T14:38:38.291544239Z" level=info msg="CreateContainer within sandbox \"bebd9c24472b7c347eea0207119a4afa4c38b27da3ff20c5dd283c0f959af438\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91473da964179d0fb1a1b1d55e6f192a424b2a439688e712e823db53f9b4bc75\"" Dec 13 14:38:38.292437 env[1141]: time="2024-12-13T14:38:38.292237398Z" level=info msg="StartContainer for \"91473da964179d0fb1a1b1d55e6f192a424b2a439688e712e823db53f9b4bc75\"" Dec 13 14:38:38.303219 env[1141]: time="2024-12-13T14:38:38.303161321Z" level=info msg="CreateContainer within sandbox \"8992dbee49e2195ef34e8bd5a371ce976ad03626131b3753ffb70698e89c6f5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} 
returns container id \"6b864f64e310cdeccf21e57bf58576ef4b0d67093ea2720b7fbd2c507143e2c1\"" Dec 13 14:38:38.304141 env[1141]: time="2024-12-13T14:38:38.304120188Z" level=info msg="StartContainer for \"6b864f64e310cdeccf21e57bf58576ef4b0d67093ea2720b7fbd2c507143e2c1\"" Dec 13 14:38:38.323290 systemd[1]: Started cri-containerd-91473da964179d0fb1a1b1d55e6f192a424b2a439688e712e823db53f9b4bc75.scope. Dec 13 14:38:38.340244 systemd[1]: Started cri-containerd-6b864f64e310cdeccf21e57bf58576ef4b0d67093ea2720b7fbd2c507143e2c1.scope. Dec 13 14:38:38.403136 env[1141]: time="2024-12-13T14:38:38.402035369Z" level=info msg="StartContainer for \"91473da964179d0fb1a1b1d55e6f192a424b2a439688e712e823db53f9b4bc75\" returns successfully" Dec 13 14:38:38.403343 env[1141]: time="2024-12-13T14:38:38.403301332Z" level=info msg="StartContainer for \"6b864f64e310cdeccf21e57bf58576ef4b0d67093ea2720b7fbd2c507143e2c1\" returns successfully" Dec 13 14:38:38.452439 kubelet[1972]: I1213 14:38:38.452398 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gvs7g" podStartSLOduration=37.452349837 podStartE2EDuration="37.452349837s" podCreationTimestamp="2024-12-13 14:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:38:38.451768217 +0000 UTC m=+46.832677706" watchObservedRunningTime="2024-12-13 14:38:38.452349837 +0000 UTC m=+46.833259336" Dec 13 14:38:39.472653 kubelet[1972]: I1213 14:38:39.472603 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q5kkj" podStartSLOduration=38.472521388 podStartE2EDuration="38.472521388s" podCreationTimestamp="2024-12-13 14:38:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:38:38.466025217 +0000 UTC m=+46.846934716" watchObservedRunningTime="2024-12-13 14:38:39.472521388 +0000 UTC m=+47.853430987" Dec 13 14:39:13.096071 systemd[1]: Started sshd@7-172.24.4.236:22-172.24.4.1:60334.service. Dec 13 14:39:14.583314 sshd[3312]: Accepted publickey for core from 172.24.4.1 port 60334 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:14.587509 sshd[3312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:14.598930 systemd-logind[1130]: New session 8 of user core. Dec 13 14:39:14.600631 systemd[1]: Started session-8.scope. Dec 13 14:39:15.752535 sshd[3312]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:15.758623 systemd[1]: sshd@7-172.24.4.236:22-172.24.4.1:60334.service: Deactivated successfully. Dec 13 14:39:15.760289 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:39:15.761819 systemd-logind[1130]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:39:15.763835 systemd-logind[1130]: Removed session 8. Dec 13 14:39:20.761882 systemd[1]: Started sshd@8-172.24.4.236:22-172.24.4.1:44086.service. Dec 13 14:39:22.644650 sshd[3325]: Accepted publickey for core from 172.24.4.1 port 44086 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:22.648752 sshd[3325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:22.658571 systemd-logind[1130]: New session 9 of user core. Dec 13 14:39:22.664353 systemd[1]: Started session-9.scope. 
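[On the systemd-networkd block earlier in this run: cilium_host/cilium_net and cilium_vxlan are Cilium's own devices, while each lxc* interface is the host side of a per-pod veth pair whose peer appears in the kernel lines as "eth0: renamed from tmpXXX" once it is moved into the pod namespace. A sketch of that plumbing with github.com/vishvananda/netlink, a library commonly used for this (Cilium among its users); the names here are illustrative and the program needs root:]

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// Sketch of the veth plumbing behind the lxc810e5cc48d63 / "eth0: renamed
// from tmpbebd9" lines above: the CNI plugin creates a veth pair, keeps
// the lxc* end on the host, and moves/renames the tmp* peer into the
// pod's network namespace as eth0. Names are illustrative; requires root.
func main() {
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "lxc_example"},
		PeerName:  "tmp_example",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		fmt.Println("LinkAdd:", err)
		return
	}
	if err := netlink.LinkSetUp(veth); err != nil {
		fmt.Println("LinkSetUp:", err)
		return
	}
	fmt.Println("created veth pair lxc_example <-> tmp_example")
	// The real plugin would next call LinkSetNsFd on the peer and rename
	// it to eth0 inside the namespace, producing the kernel lines above.
}
```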
Dec 13 14:39:23.425787 sshd[3325]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:23.433548 systemd-logind[1130]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:39:23.433986 systemd[1]: sshd@8-172.24.4.236:22-172.24.4.1:44086.service: Deactivated successfully. Dec 13 14:39:23.436672 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:39:23.440323 systemd-logind[1130]: Removed session 9. Dec 13 14:39:28.433553 systemd[1]: Started sshd@9-172.24.4.236:22-172.24.4.1:44472.service. Dec 13 14:39:29.939841 sshd[3338]: Accepted publickey for core from 172.24.4.1 port 44472 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:29.944424 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:29.956570 systemd[1]: Started session-10.scope. Dec 13 14:39:29.958015 systemd-logind[1130]: New session 10 of user core. Dec 13 14:39:31.560633 sshd[3338]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:31.563789 systemd[1]: sshd@9-172.24.4.236:22-172.24.4.1:44472.service: Deactivated successfully. Dec 13 14:39:31.565219 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:39:31.566087 systemd-logind[1130]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:39:31.568055 systemd-logind[1130]: Removed session 10. Dec 13 14:39:36.569059 systemd[1]: Started sshd@10-172.24.4.236:22-172.24.4.1:44646.service. Dec 13 14:39:37.960749 sshd[3353]: Accepted publickey for core from 172.24.4.1 port 44646 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:37.964310 sshd[3353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:37.976613 systemd-logind[1130]: New session 11 of user core. Dec 13 14:39:37.980621 systemd[1]: Started session-11.scope. Dec 13 14:39:38.758589 sshd[3353]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:38.767046 systemd[1]: Started sshd@11-172.24.4.236:22-172.24.4.1:44652.service. Dec 13 14:39:38.768232 systemd[1]: sshd@10-172.24.4.236:22-172.24.4.1:44646.service: Deactivated successfully. Dec 13 14:39:38.770422 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:39:38.776080 systemd-logind[1130]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:39:38.778846 systemd-logind[1130]: Removed session 11. Dec 13 14:39:40.190174 sshd[3366]: Accepted publickey for core from 172.24.4.1 port 44652 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:40.192964 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:40.202609 systemd-logind[1130]: New session 12 of user core. Dec 13 14:39:40.204711 systemd[1]: Started session-12.scope. Dec 13 14:39:41.026172 sshd[3366]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:41.033869 systemd[1]: Started sshd@12-172.24.4.236:22-172.24.4.1:44654.service. Dec 13 14:39:41.035091 systemd[1]: sshd@11-172.24.4.236:22-172.24.4.1:44652.service: Deactivated successfully. Dec 13 14:39:41.036717 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:39:41.041045 systemd-logind[1130]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:39:41.042772 systemd-logind[1130]: Removed session 12. 
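[The remainder of the section is a steady cycle of SSH sessions from 172.24.4.1: Accepted publickey, pam_unix session opened, session-N.scope started, then the reverse on logout. A small Go sketch that pairs the Started/Removed lines (using session 8's real timestamps from above) to estimate per-session duration; the parsing is illustrative only, and the year is assumed since journal timestamps omit it:]

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Pairs "Started session-N.scope" with "Removed session N." lines of the
// shape seen in this log to estimate per-session duration. The two sample
// lines are taken from session 8 above; parsing is illustrative only.
var (
	startRe = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*Started session-(\d+)\.scope`)
	endRe   = regexp.MustCompile(`^(\S+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func stamp(s string) time.Time {
	// Journal lines carry no year; assume 2024 like the containerd entries.
	t, _ := time.Parse("Jan 2 15:04:05.000000 2006", s+" 2024")
	return t
}

func main() {
	open := map[string]time.Time{}
	lines := []string{
		"Dec 13 14:39:14.600631 systemd[1]: Started session-8.scope.",
		"Dec 13 14:39:15.763835 systemd-logind[1130]: Removed session 8.",
	}
	for _, l := range lines {
		if m := startRe.FindStringSubmatch(l); m != nil {
			open[m[2]] = stamp(m[1])
		} else if m := endRe.FindStringSubmatch(l); m != nil {
			fmt.Printf("session %s lasted %v\n", m[2], stamp(m[1]).Sub(open[m[2]]))
		}
	}
}
```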
Dec 13 14:39:42.424630 sshd[3376]: Accepted publickey for core from 172.24.4.1 port 44654 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:42.428229 sshd[3376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:42.442224 systemd-logind[1130]: New session 13 of user core. Dec 13 14:39:42.444192 systemd[1]: Started session-13.scope. Dec 13 14:39:43.363099 sshd[3376]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:43.370114 systemd-logind[1130]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:39:43.370772 systemd[1]: sshd@12-172.24.4.236:22-172.24.4.1:44654.service: Deactivated successfully. Dec 13 14:39:43.372311 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:39:43.375008 systemd-logind[1130]: Removed session 13. Dec 13 14:39:48.372074 systemd[1]: Started sshd@13-172.24.4.236:22-172.24.4.1:54990.service. Dec 13 14:39:49.656792 sshd[3388]: Accepted publickey for core from 172.24.4.1 port 54990 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:49.660021 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:49.672614 systemd-logind[1130]: New session 14 of user core. Dec 13 14:39:49.675357 systemd[1]: Started session-14.scope. Dec 13 14:39:50.430739 sshd[3388]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:50.436718 systemd-logind[1130]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:39:50.437002 systemd[1]: sshd@13-172.24.4.236:22-172.24.4.1:54990.service: Deactivated successfully. Dec 13 14:39:50.438573 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:39:50.440382 systemd-logind[1130]: Removed session 14. Dec 13 14:39:55.439570 systemd[1]: Started sshd@14-172.24.4.236:22-172.24.4.1:40806.service. Dec 13 14:39:56.659680 sshd[3402]: Accepted publickey for core from 172.24.4.1 port 40806 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:56.662559 sshd[3402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:56.674180 systemd-logind[1130]: New session 15 of user core. Dec 13 14:39:56.674507 systemd[1]: Started session-15.scope. Dec 13 14:39:57.432058 sshd[3402]: pam_unix(sshd:session): session closed for user core Dec 13 14:39:57.438318 systemd[1]: sshd@14-172.24.4.236:22-172.24.4.1:40806.service: Deactivated successfully. Dec 13 14:39:57.440735 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:39:57.442577 systemd-logind[1130]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:39:57.445035 systemd[1]: Started sshd@15-172.24.4.236:22-172.24.4.1:40810.service. Dec 13 14:39:57.450302 systemd-logind[1130]: Removed session 15. Dec 13 14:39:58.846933 sshd[3414]: Accepted publickey for core from 172.24.4.1 port 40810 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:39:58.850187 sshd[3414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:39:58.861605 systemd-logind[1130]: New session 16 of user core. Dec 13 14:39:58.862778 systemd[1]: Started session-16.scope. Dec 13 14:40:01.003855 sshd[3414]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:01.017635 systemd[1]: Started sshd@16-172.24.4.236:22-172.24.4.1:40816.service. Dec 13 14:40:01.019630 systemd[1]: sshd@15-172.24.4.236:22-172.24.4.1:40810.service: Deactivated successfully. Dec 13 14:40:01.026954 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 13 14:40:01.033446 systemd-logind[1130]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:40:01.037570 systemd-logind[1130]: Removed session 16. Dec 13 14:40:02.227787 sshd[3423]: Accepted publickey for core from 172.24.4.1 port 40816 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:02.230564 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:02.241393 systemd-logind[1130]: New session 17 of user core. Dec 13 14:40:02.242157 systemd[1]: Started session-17.scope. Dec 13 14:40:06.934132 sshd[3423]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:06.947581 systemd[1]: Started sshd@17-172.24.4.236:22-172.24.4.1:55664.service. Dec 13 14:40:06.948927 systemd[1]: sshd@16-172.24.4.236:22-172.24.4.1:40816.service: Deactivated successfully. Dec 13 14:40:06.951140 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:40:06.957894 systemd-logind[1130]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:40:06.960567 systemd-logind[1130]: Removed session 17. Dec 13 14:40:08.101291 sshd[3442]: Accepted publickey for core from 172.24.4.1 port 55664 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:08.104003 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:08.116430 systemd-logind[1130]: New session 18 of user core. Dec 13 14:40:08.118421 systemd[1]: Started session-18.scope. Dec 13 14:40:09.365641 sshd[3442]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:09.371018 systemd[1]: sshd@17-172.24.4.236:22-172.24.4.1:55664.service: Deactivated successfully. Dec 13 14:40:09.372650 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:40:09.376246 systemd-logind[1130]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:40:09.379268 systemd[1]: Started sshd@18-172.24.4.236:22-172.24.4.1:55666.service. Dec 13 14:40:09.383112 systemd-logind[1130]: Removed session 18. Dec 13 14:40:10.870370 sshd[3453]: Accepted publickey for core from 172.24.4.1 port 55666 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:10.872912 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:10.883888 systemd[1]: Started session-19.scope. Dec 13 14:40:10.884748 systemd-logind[1130]: New session 19 of user core. Dec 13 14:40:11.798628 sshd[3453]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:11.804422 systemd[1]: sshd@18-172.24.4.236:22-172.24.4.1:55666.service: Deactivated successfully. Dec 13 14:40:11.806234 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:40:11.810043 systemd-logind[1130]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:40:11.813517 systemd-logind[1130]: Removed session 19. Dec 13 14:40:16.812275 systemd[1]: Started sshd@19-172.24.4.236:22-172.24.4.1:41398.service. Dec 13 14:40:18.153962 sshd[3464]: Accepted publickey for core from 172.24.4.1 port 41398 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:18.156673 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:18.172241 systemd-logind[1130]: New session 20 of user core. Dec 13 14:40:18.172328 systemd[1]: Started session-20.scope. Dec 13 14:40:19.043002 sshd[3464]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:19.048228 systemd[1]: sshd@19-172.24.4.236:22-172.24.4.1:41398.service: Deactivated successfully. 
Dec 13 14:40:19.049937 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:40:19.051314 systemd-logind[1130]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:40:19.054129 systemd-logind[1130]: Removed session 20. Dec 13 14:40:24.055686 systemd[1]: Started sshd@20-172.24.4.236:22-172.24.4.1:41406.service. Dec 13 14:40:25.664200 sshd[3479]: Accepted publickey for core from 172.24.4.1 port 41406 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:25.667766 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:25.681323 systemd-logind[1130]: New session 21 of user core. Dec 13 14:40:25.681745 systemd[1]: Started session-21.scope. Dec 13 14:40:26.552811 sshd[3479]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:26.559939 systemd-logind[1130]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:40:26.560795 systemd[1]: sshd@20-172.24.4.236:22-172.24.4.1:41406.service: Deactivated successfully. Dec 13 14:40:26.562831 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:40:26.568195 systemd-logind[1130]: Removed session 21. Dec 13 14:40:31.561907 systemd[1]: Started sshd@21-172.24.4.236:22-172.24.4.1:51054.service. Dec 13 14:40:33.188062 sshd[3491]: Accepted publickey for core from 172.24.4.1 port 51054 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:33.189794 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:33.196901 systemd-logind[1130]: New session 22 of user core. Dec 13 14:40:33.197286 systemd[1]: Started session-22.scope. Dec 13 14:40:33.833675 sshd[3491]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:33.842013 systemd[1]: Started sshd@22-172.24.4.236:22-172.24.4.1:51058.service. Dec 13 14:40:33.843278 systemd[1]: sshd@21-172.24.4.236:22-172.24.4.1:51054.service: Deactivated successfully. Dec 13 14:40:33.844752 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:40:33.848667 systemd-logind[1130]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:40:33.852064 systemd-logind[1130]: Removed session 22. Dec 13 14:40:35.264025 sshd[3502]: Accepted publickey for core from 172.24.4.1 port 51058 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:35.267119 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:35.279217 systemd-logind[1130]: New session 23 of user core. Dec 13 14:40:35.279804 systemd[1]: Started session-23.scope. Dec 13 14:40:37.667637 systemd[1]: run-containerd-runc-k8s.io-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f-runc.e6ewxw.mount: Deactivated successfully. 
Dec 13 14:40:37.714200 env[1141]: time="2024-12-13T14:40:37.714124976Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:40:37.768267 env[1141]: time="2024-12-13T14:40:37.768204477Z" level=info msg="StopContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" with timeout 2 (s)" Dec 13 14:40:37.768684 env[1141]: time="2024-12-13T14:40:37.768298103Z" level=info msg="StopContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" with timeout 30 (s)" Dec 13 14:40:37.769526 env[1141]: time="2024-12-13T14:40:37.769439565Z" level=info msg="Stop container \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" with signal terminated" Dec 13 14:40:37.770199 env[1141]: time="2024-12-13T14:40:37.769443993Z" level=info msg="Stop container \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" with signal terminated" Dec 13 14:40:37.787241 systemd-networkd[970]: lxc_health: Link DOWN Dec 13 14:40:37.787257 systemd-networkd[970]: lxc_health: Lost carrier Dec 13 14:40:37.789227 systemd[1]: cri-containerd-ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5.scope: Deactivated successfully. Dec 13 14:40:37.831850 systemd[1]: cri-containerd-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f.scope: Deactivated successfully. Dec 13 14:40:37.832141 systemd[1]: cri-containerd-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f.scope: Consumed 9.159s CPU time. Dec 13 14:40:37.860480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5-rootfs.mount: Deactivated successfully. Dec 13 14:40:37.866818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f-rootfs.mount: Deactivated successfully. 
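[The teardown above follows the standard CRI graceful-stop contract: cilium-agent gets SIGTERM with a 2-second grace period ("with timeout 2 (s)") and cilium-operator 30 seconds, after which the runtime would escalate to SIGKILL. A generic Go sketch of that terminate-then-kill pattern against a plain child process, not the CRI API itself:]

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// Generic terminate-then-kill pattern matching the "StopContainer ... with
// timeout 2 (s)" / "signal terminated" lines above. This drives a plain
// child process; containerd performs the same dance through the shim.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		fmt.Println("grace period expired, escalating to SIGKILL")
		_ = cmd.Process.Kill()
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 2*time.Second))
}
```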
Dec 13 14:40:37.947742 env[1141]: time="2024-12-13T14:40:37.947236669Z" level=info msg="shim disconnected" id=ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5 Dec 13 14:40:37.947987 env[1141]: time="2024-12-13T14:40:37.947943120Z" level=warning msg="cleaning up after shim disconnected" id=ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5 namespace=k8s.io Dec 13 14:40:37.947987 env[1141]: time="2024-12-13T14:40:37.947965403Z" level=info msg="cleaning up dead shim" Dec 13 14:40:37.948198 env[1141]: time="2024-12-13T14:40:37.947335685Z" level=info msg="shim disconnected" id=3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f Dec 13 14:40:37.948289 env[1141]: time="2024-12-13T14:40:37.948270158Z" level=warning msg="cleaning up after shim disconnected" id=3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f namespace=k8s.io Dec 13 14:40:37.948382 env[1141]: time="2024-12-13T14:40:37.948366047Z" level=info msg="cleaning up dead shim" Dec 13 14:40:37.956934 env[1141]: time="2024-12-13T14:40:37.956889626Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3574 runtime=io.containerd.runc.v2\n" Dec 13 14:40:37.957782 env[1141]: time="2024-12-13T14:40:37.957744378Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3573 runtime=io.containerd.runc.v2\n" Dec 13 14:40:37.984202 env[1141]: time="2024-12-13T14:40:37.984143545Z" level=info msg="StopContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" returns successfully" Dec 13 14:40:37.993376 env[1141]: time="2024-12-13T14:40:37.993321649Z" level=info msg="StopContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" returns successfully" Dec 13 14:40:38.002412 env[1141]: time="2024-12-13T14:40:38.002359336Z" level=info msg="StopPodSandbox for \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.002721920Z" level=info msg="Container to stop \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.002594369Z" level=info msg="StopPodSandbox for \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.002870610Z" level=info msg="Container to stop \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.003036643Z" level=info msg="Container to stop \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.003075035Z" level=info msg="Container to stop \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.003106394Z" level=info msg="Container to stop \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.006730 env[1141]: time="2024-12-13T14:40:38.003135800Z" level=info 
msg="Container to stop \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:38.016097 systemd[1]: cri-containerd-8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba.scope: Deactivated successfully. Dec 13 14:40:38.021439 systemd[1]: cri-containerd-39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203.scope: Deactivated successfully. Dec 13 14:40:38.361977 env[1141]: time="2024-12-13T14:40:38.361823870Z" level=info msg="shim disconnected" id=8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba Dec 13 14:40:38.362301 env[1141]: time="2024-12-13T14:40:38.361974344Z" level=warning msg="cleaning up after shim disconnected" id=8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba namespace=k8s.io Dec 13 14:40:38.362301 env[1141]: time="2024-12-13T14:40:38.362005242Z" level=info msg="cleaning up dead shim" Dec 13 14:40:38.363021 env[1141]: time="2024-12-13T14:40:38.362946116Z" level=info msg="shim disconnected" id=39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203 Dec 13 14:40:38.363167 env[1141]: time="2024-12-13T14:40:38.363029162Z" level=warning msg="cleaning up after shim disconnected" id=39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203 namespace=k8s.io Dec 13 14:40:38.363167 env[1141]: time="2024-12-13T14:40:38.363052867Z" level=info msg="cleaning up dead shim" Dec 13 14:40:38.383502 env[1141]: time="2024-12-13T14:40:38.383386587Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3638 runtime=io.containerd.runc.v2\n" Dec 13 14:40:38.384152 env[1141]: time="2024-12-13T14:40:38.384080135Z" level=info msg="TearDown network for sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" successfully" Dec 13 14:40:38.384265 env[1141]: time="2024-12-13T14:40:38.384143033Z" level=info msg="StopPodSandbox for \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" returns successfully" Dec 13 14:40:38.392419 env[1141]: time="2024-12-13T14:40:38.392349032Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n" Dec 13 14:40:38.393742 env[1141]: time="2024-12-13T14:40:38.393665254Z" level=info msg="TearDown network for sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" successfully" Dec 13 14:40:38.394103 env[1141]: time="2024-12-13T14:40:38.394019451Z" level=info msg="StopPodSandbox for \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" returns successfully" Dec 13 14:40:38.545585 kubelet[1972]: I1213 14:40:38.545429 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0f4caef-c401-4811-af0e-59ec927cc320-clustermesh-secrets\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545628 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-run\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545689 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-bpf-maps\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545770 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-lib-modules\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545820 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-xtables-lock\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545928 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cni-path\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.546601 kubelet[1972]: I1213 14:40:38.545996 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-cilium-config-path\") pod \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\" (UID: \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546081 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5hp4\" (UniqueName: \"kubernetes.io/projected/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-kube-api-access-h5hp4\") pod \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\" (UID: \"3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546171 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-config-path\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546221 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-hostproc\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546274 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-net\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546323 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-cgroup\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547019 kubelet[1972]: I1213 14:40:38.546374 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-kernel\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547396 kubelet[1972]: I1213 14:40:38.546421 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-etc-cni-netd\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547396 kubelet[1972]: I1213 14:40:38.546526 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-hubble-tls\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.547396 kubelet[1972]: I1213 14:40:38.546594 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6qd\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd\") pod \"b0f4caef-c401-4811-af0e-59ec927cc320\" (UID: \"b0f4caef-c401-4811-af0e-59ec927cc320\") " Dec 13 14:40:38.586792 kubelet[1972]: I1213 14:40:38.584774 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:40:38.587042 kubelet[1972]: I1213 14:40:38.586996 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.587146 kubelet[1972]: I1213 14:40:38.587111 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.587371 kubelet[1972]: I1213 14:40:38.587300 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.587518 kubelet[1972]: I1213 14:40:38.587415 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cni-path" (OuterVolumeSpecName: "cni-path") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.595085 kubelet[1972]: I1213 14:40:38.595032 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-hostproc" (OuterVolumeSpecName: "hostproc") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.595337 kubelet[1972]: I1213 14:40:38.595301 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.595588 kubelet[1972]: I1213 14:40:38.595550 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.595792 kubelet[1972]: I1213 14:40:38.595752 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.596061 kubelet[1972]: I1213 14:40:38.596027 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.603205 kubelet[1972]: I1213 14:40:38.603119 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:40:38.603605 kubelet[1972]: I1213 14:40:38.603235 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-kube-api-access-h5hp4" (OuterVolumeSpecName: "kube-api-access-h5hp4") pod "3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" (UID: "3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4"). InnerVolumeSpecName "kube-api-access-h5hp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:40:38.603946 kubelet[1972]: I1213 14:40:38.578337 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:38.604175 kubelet[1972]: I1213 14:40:38.603346 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" (UID: "3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:40:38.604622 kubelet[1972]: I1213 14:40:38.604557 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd" (OuterVolumeSpecName: "kube-api-access-8p6qd") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "kube-api-access-8p6qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:40:38.605058 kubelet[1972]: I1213 14:40:38.604939 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0f4caef-c401-4811-af0e-59ec927cc320-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b0f4caef-c401-4811-af0e-59ec927cc320" (UID: "b0f4caef-c401-4811-af0e-59ec927cc320"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:40:38.647794 kubelet[1972]: I1213 14:40:38.647593 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-net\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.647794 kubelet[1972]: I1213 14:40:38.647685 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-cgroup\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.647794 kubelet[1972]: I1213 14:40:38.647720 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-host-proc-sys-kernel\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.647794 kubelet[1972]: I1213 14:40:38.647749 1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-etc-cni-netd\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.647794 kubelet[1972]: I1213 14:40:38.647798 1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-hubble-tls\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.647833 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8p6qd\" (UniqueName: \"kubernetes.io/projected/b0f4caef-c401-4811-af0e-59ec927cc320-kube-api-access-8p6qd\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.647863 1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-xtables-lock\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 
14:40:38.647895 1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b0f4caef-c401-4811-af0e-59ec927cc320-clustermesh-secrets\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.647924 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-run\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.647952 1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-bpf-maps\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.647980 1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-lib-modules\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648262 kubelet[1972]: I1213 14:40:38.648007 1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-cni-path\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648912 kubelet[1972]: I1213 14:40:38.648035 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-cilium-config-path\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648912 kubelet[1972]: I1213 14:40:38.648064 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h5hp4\" (UniqueName: \"kubernetes.io/projected/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4-kube-api-access-h5hp4\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648912 kubelet[1972]: I1213 14:40:38.648108 1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b0f4caef-c401-4811-af0e-59ec927cc320-hostproc\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.648912 kubelet[1972]: I1213 14:40:38.648140 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b0f4caef-c401-4811-af0e-59ec927cc320-cilium-config-path\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:38.658930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203-rootfs.mount: Deactivated successfully. Dec 13 14:40:38.659563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba-rootfs.mount: Deactivated successfully. Dec 13 14:40:38.659933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203-shm.mount: Deactivated successfully. Dec 13 14:40:38.660283 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba-shm.mount: Deactivated successfully. 
Dec 13 14:40:38.660814 systemd[1]: var-lib-kubelet-pods-b0f4caef\x2dc401\x2d4811\x2daf0e\x2d59ec927cc320-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p6qd.mount: Deactivated successfully. Dec 13 14:40:38.661187 systemd[1]: var-lib-kubelet-pods-3b7c4a47\x2dbc9b\x2d44c0\x2d8dd0\x2d788ddefcdda4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh5hp4.mount: Deactivated successfully. Dec 13 14:40:38.661715 systemd[1]: var-lib-kubelet-pods-b0f4caef\x2dc401\x2d4811\x2daf0e\x2d59ec927cc320-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:40:38.662078 systemd[1]: var-lib-kubelet-pods-b0f4caef\x2dc401\x2d4811\x2daf0e\x2d59ec927cc320-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:40:38.864454 kubelet[1972]: I1213 14:40:38.864419 1972 scope.go:117] "RemoveContainer" containerID="3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f" Dec 13 14:40:38.893607 env[1141]: time="2024-12-13T14:40:38.892946482Z" level=info msg="RemoveContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\"" Dec 13 14:40:38.900997 systemd[1]: Removed slice kubepods-burstable-podb0f4caef_c401_4811_af0e_59ec927cc320.slice. Dec 13 14:40:38.901148 systemd[1]: kubepods-burstable-podb0f4caef_c401_4811_af0e_59ec927cc320.slice: Consumed 9.274s CPU time. Dec 13 14:40:38.905360 env[1141]: time="2024-12-13T14:40:38.905205641Z" level=info msg="RemoveContainer for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" returns successfully" Dec 13 14:40:38.910901 kubelet[1972]: I1213 14:40:38.910877 1972 scope.go:117] "RemoveContainer" containerID="751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6" Dec 13 14:40:38.912095 systemd[1]: Removed slice kubepods-besteffort-pod3b7c4a47_bc9b_44c0_8dd0_788ddefcdda4.slice. 
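
The var-lib-kubelet-pods-…\x2d….mount units deactivated above are systemd's escaped form of the kubelet volume paths: '/' becomes '-', and any byte outside [a-zA-Z0-9:_.] (the dashes inside the pod UID, the '~' in kubernetes.io~projected) becomes a \xNN hex escape. A minimal Go sketch of that mapping, approximating `systemd-escape --path --suffix=mount` and ignoring edge cases such as empty paths or a leading dot:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // isAllowed reports whether systemd keeps byte b literally in a unit name.
    func isAllowed(b byte) bool {
    	return b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' ||
    		b >= '0' && b <= '9' || b == ':' || b == '_' || b == '.'
    }

    // escapePath approximates `systemd-escape --path --suffix=mount`.
    func escapePath(p string) string {
    	p = strings.Trim(p, "/")
    	var sb strings.Builder
    	for i := 0; i < len(p); i++ {
    		b := p[i]
    		switch {
    		case b == '/':
    			sb.WriteByte('-') // path separators turn into dashes
    		case isAllowed(b):
    			sb.WriteByte(b)
    		default:
    			fmt.Fprintf(&sb, `\x%02x`, b) // '-' => \x2d, '~' => \x7e, etc.
    		}
    	}
    	return sb.String() + ".mount"
    }

    func main() {
    	fmt.Println(escapePath("/var/lib/kubelet/pods/b0f4caef-c401-4811-af0e-59ec927cc320" +
    		"/volumes/kubernetes.io~projected/kube-api-access-8p6qd"))
    }

Fed the kube-api-access-8p6qd volume path, this reproduces the unit name in the log, which is how these mount units can be matched back against systemctl list-units output.
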
Dec 13 14:40:38.920321 env[1141]: time="2024-12-13T14:40:38.920025358Z" level=info msg="RemoveContainer for \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\"" Dec 13 14:40:38.944526 env[1141]: time="2024-12-13T14:40:38.944453967Z" level=info msg="RemoveContainer for \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\" returns successfully" Dec 13 14:40:38.944855 kubelet[1972]: I1213 14:40:38.944831 1972 scope.go:117] "RemoveContainer" containerID="bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb" Dec 13 14:40:38.946712 env[1141]: time="2024-12-13T14:40:38.946680063Z" level=info msg="RemoveContainer for \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\"" Dec 13 14:40:38.951108 env[1141]: time="2024-12-13T14:40:38.951055521Z" level=info msg="RemoveContainer for \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\" returns successfully" Dec 13 14:40:38.951772 kubelet[1972]: I1213 14:40:38.951729 1972 scope.go:117] "RemoveContainer" containerID="f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a" Dec 13 14:40:38.953201 env[1141]: time="2024-12-13T14:40:38.953149098Z" level=info msg="RemoveContainer for \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\"" Dec 13 14:40:38.958338 env[1141]: time="2024-12-13T14:40:38.958279919Z" level=info msg="RemoveContainer for \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\" returns successfully" Dec 13 14:40:38.958719 kubelet[1972]: I1213 14:40:38.958697 1972 scope.go:117] "RemoveContainer" containerID="0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3" Dec 13 14:40:38.959913 env[1141]: time="2024-12-13T14:40:38.959856962Z" level=info msg="RemoveContainer for \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\"" Dec 13 14:40:38.968189 env[1141]: time="2024-12-13T14:40:38.968146629Z" level=info msg="RemoveContainer for \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\" returns successfully" Dec 13 14:40:38.968629 kubelet[1972]: I1213 14:40:38.968596 1972 scope.go:117] "RemoveContainer" containerID="3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f" Dec 13 14:40:38.968936 env[1141]: time="2024-12-13T14:40:38.968858000Z" level=error msg="ContainerStatus for \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\": not found" Dec 13 14:40:38.973972 kubelet[1972]: E1213 14:40:38.973924 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\": not found" containerID="3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f" Dec 13 14:40:38.987232 kubelet[1972]: I1213 14:40:38.987187 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f"} err="failed to get container status \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b452925c94be359925dd959517d0e81f2a9bf62ce6ae2fae66a19ab267fd39f\": not found" Dec 13 14:40:38.987232 kubelet[1972]: I1213 14:40:38.987238 1972 scope.go:117] "RemoveContainer" 
containerID="751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6" Dec 13 14:40:38.987701 env[1141]: time="2024-12-13T14:40:38.987636850Z" level=error msg="ContainerStatus for \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\": not found" Dec 13 14:40:38.987922 kubelet[1972]: E1213 14:40:38.987900 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\": not found" containerID="751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6" Dec 13 14:40:38.987980 kubelet[1972]: I1213 14:40:38.987936 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6"} err="failed to get container status \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"751d6df1314a609755baff06ea02494f49feaf2d34ba80d9482e147f16307fb6\": not found" Dec 13 14:40:38.987980 kubelet[1972]: I1213 14:40:38.987949 1972 scope.go:117] "RemoveContainer" containerID="bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb" Dec 13 14:40:38.988213 env[1141]: time="2024-12-13T14:40:38.988167440Z" level=error msg="ContainerStatus for \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\": not found" Dec 13 14:40:38.988388 kubelet[1972]: E1213 14:40:38.988372 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\": not found" containerID="bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb" Dec 13 14:40:38.988440 kubelet[1972]: I1213 14:40:38.988400 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb"} err="failed to get container status \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf8853bf9eb4ca0fedfde3665d95c7984cd65af7986832538a77225388bd36bb\": not found" Dec 13 14:40:38.988440 kubelet[1972]: I1213 14:40:38.988411 1972 scope.go:117] "RemoveContainer" containerID="f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a" Dec 13 14:40:38.988752 env[1141]: time="2024-12-13T14:40:38.988697118Z" level=error msg="ContainerStatus for \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\": not found" Dec 13 14:40:38.988937 kubelet[1972]: E1213 14:40:38.988920 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\": not found" 
containerID="f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a" Dec 13 14:40:38.988986 kubelet[1972]: I1213 14:40:38.988948 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a"} err="failed to get container status \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f71a6933de6fc2deae865e1f05f83f7441ab361e1796175f02e528a892ec036a\": not found" Dec 13 14:40:38.988986 kubelet[1972]: I1213 14:40:38.988958 1972 scope.go:117] "RemoveContainer" containerID="0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3" Dec 13 14:40:38.989174 env[1141]: time="2024-12-13T14:40:38.989126167Z" level=error msg="ContainerStatus for \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\": not found" Dec 13 14:40:38.989279 kubelet[1972]: E1213 14:40:38.989264 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\": not found" containerID="0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3" Dec 13 14:40:38.989341 kubelet[1972]: I1213 14:40:38.989291 1972 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3"} err="failed to get container status \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0217f53257be9aa3a45550e04b1fdc12ce4b7ff955cd21c8c0d64642e72332e3\": not found" Dec 13 14:40:38.989341 kubelet[1972]: I1213 14:40:38.989300 1972 scope.go:117] "RemoveContainer" containerID="ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5" Dec 13 14:40:38.992076 env[1141]: time="2024-12-13T14:40:38.992045029Z" level=info msg="RemoveContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\"" Dec 13 14:40:39.003695 env[1141]: time="2024-12-13T14:40:39.003656618Z" level=info msg="RemoveContainer for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" returns successfully" Dec 13 14:40:39.004034 kubelet[1972]: I1213 14:40:39.004014 1972 scope.go:117] "RemoveContainer" containerID="ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5" Dec 13 14:40:39.004383 env[1141]: time="2024-12-13T14:40:39.004330548Z" level=error msg="ContainerStatus for \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\": not found" Dec 13 14:40:39.004592 kubelet[1972]: E1213 14:40:39.004579 1972 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\": not found" containerID="ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5" Dec 13 14:40:39.004647 kubelet[1972]: I1213 14:40:39.004613 1972 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5"} err="failed to get container status \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\": rpc error: code = NotFound desc = an error occurred when try to find container \"ced147dacf3df9bb6968181c9af10f049d2d1d8adf7e4dfbbe53efb9f34abed5\": not found" Dec 13 14:40:39.685814 sshd[3502]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:39.698121 systemd[1]: Started sshd@23-172.24.4.236:22-172.24.4.1:42268.service. Dec 13 14:40:39.701756 systemd[1]: sshd@22-172.24.4.236:22-172.24.4.1:51058.service: Deactivated successfully. Dec 13 14:40:39.705862 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:40:39.706357 systemd[1]: session-23.scope: Consumed 1.005s CPU time. Dec 13 14:40:39.709176 systemd-logind[1130]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:40:39.713426 systemd-logind[1130]: Removed session 23. Dec 13 14:40:39.863713 kubelet[1972]: I1213 14:40:39.863687 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" path="/var/lib/kubelet/pods/3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4/volumes" Dec 13 14:40:39.864544 kubelet[1972]: I1213 14:40:39.864531 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" path="/var/lib/kubelet/pods/b0f4caef-c401-4811-af0e-59ec927cc320/volumes" Dec 13 14:40:40.952706 sshd[3669]: Accepted publickey for core from 172.24.4.1 port 42268 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:40.958027 sshd[3669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:40.970547 systemd-logind[1130]: New session 24 of user core. Dec 13 14:40:40.975193 systemd[1]: Started session-24.scope. 
Dec 13 14:40:41.934498 kubelet[1972]: E1213 14:40:41.934470 1972 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:40:42.403648 kubelet[1972]: I1213 14:40:42.403592 1972 topology_manager.go:215] "Topology Admit Handler" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" podNamespace="kube-system" podName="cilium-69s68" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403740 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="mount-cgroup" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403776 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="clean-cilium-state" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403791 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" containerName="cilium-operator" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403806 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="apply-sysctl-overwrites" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403821 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="mount-bpf-fs" Dec 13 14:40:42.403840 kubelet[1972]: E1213 14:40:42.403834 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="cilium-agent" Dec 13 14:40:42.404690 kubelet[1972]: I1213 14:40:42.403883 1972 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4" containerName="cilium-operator" Dec 13 14:40:42.404690 kubelet[1972]: I1213 14:40:42.403897 1972 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f4caef-c401-4811-af0e-59ec927cc320" containerName="cilium-agent" Dec 13 14:40:42.439029 systemd[1]: Created slice kubepods-burstable-podf5767551_9288_4b2b_b921_c2f81404bae5.slice. Dec 13 14:40:42.536768 sshd[3669]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:42.540367 systemd[1]: Started sshd@24-172.24.4.236:22-172.24.4.1:42278.service. Dec 13 14:40:42.541456 systemd[1]: sshd@23-172.24.4.236:22-172.24.4.1:42268.service: Deactivated successfully. Dec 13 14:40:42.542546 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:40:42.543733 systemd-logind[1130]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:40:42.545616 systemd-logind[1130]: Removed session 24. 
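
The Created slice line above also shows the systemd cgroup driver's naming scheme: QoS class plus pod UID, with the UID's dashes swapped for underscores so the slice stays a valid systemd unit name. A small sketch of the mapping (podSlice is our helper, not a kubelet symbol):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice builds the slice name kubelet's systemd cgroup driver uses for
    // a pod, as seen in the "Created slice" / "Removed slice" lines above.
    func podSlice(qos, uid string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("burstable", "f5767551-9288-4b2b-b921-c2f81404bae5"))
    	// kubepods-burstable-podf5767551_9288_4b2b_b921_c2f81404bae5.slice
    	fmt.Println(podSlice("besteffort", "3b7c4a47-bc9b-44c0-8dd0-788ddefcdda4"))
    	// kubepods-besteffort-pod3b7c4a47_bc9b_44c0_8dd0_788ddefcdda4.slice
    }
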
Dec 13 14:40:42.580047 kubelet[1972]: I1213 14:40:42.580015 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-run\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580254 kubelet[1972]: I1213 14:40:42.580241 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-kernel\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580358 kubelet[1972]: I1213 14:40:42.580346 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-bpf-maps\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580482 kubelet[1972]: I1213 14:40:42.580448 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-hostproc\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580590 kubelet[1972]: I1213 14:40:42.580578 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-xtables-lock\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580728 kubelet[1972]: I1213 14:40:42.580693 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-ipsec-secrets\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580791 kubelet[1972]: I1213 14:40:42.580774 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2wf8\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-kube-api-access-m2wf8\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580831 kubelet[1972]: I1213 14:40:42.580807 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cni-path\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580864 kubelet[1972]: I1213 14:40:42.580835 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-clustermesh-secrets\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580864 kubelet[1972]: I1213 14:40:42.580859 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-net\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580922 kubelet[1972]: I1213 14:40:42.580883 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-lib-modules\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580922 kubelet[1972]: I1213 14:40:42.580908 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-config-path\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.580985 kubelet[1972]: I1213 14:40:42.580931 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-cgroup\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.581018 kubelet[1972]: I1213 14:40:42.581010 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-etc-cni-netd\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:42.581048 kubelet[1972]: I1213 14:40:42.581038 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-hubble-tls\") pod \"cilium-69s68\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " pod="kube-system/cilium-69s68" Dec 13 14:40:43.044369 env[1141]: time="2024-12-13T14:40:43.044247900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69s68,Uid:f5767551-9288-4b2b-b921-c2f81404bae5,Namespace:kube-system,Attempt:0,}" Dec 13 14:40:43.084601 env[1141]: time="2024-12-13T14:40:43.084408831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:40:43.084601 env[1141]: time="2024-12-13T14:40:43.084540338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:40:43.085334 env[1141]: time="2024-12-13T14:40:43.085022727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:40:43.085932 env[1141]: time="2024-12-13T14:40:43.085741361Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6 pid=3694 runtime=io.containerd.runc.v2 Dec 13 14:40:43.119910 systemd[1]: Started cri-containerd-3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6.scope. 
Dec 13 14:40:43.165126 env[1141]: time="2024-12-13T14:40:43.165063176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-69s68,Uid:f5767551-9288-4b2b-b921-c2f81404bae5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\"" Dec 13 14:40:43.169559 env[1141]: time="2024-12-13T14:40:43.169166509Z" level=info msg="CreateContainer within sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:40:43.187253 env[1141]: time="2024-12-13T14:40:43.187188490Z" level=info msg="CreateContainer within sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\"" Dec 13 14:40:43.189095 env[1141]: time="2024-12-13T14:40:43.188707463Z" level=info msg="StartContainer for \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\"" Dec 13 14:40:43.206117 systemd[1]: Started cri-containerd-4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458.scope. Dec 13 14:40:43.221259 systemd[1]: cri-containerd-4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458.scope: Deactivated successfully. Dec 13 14:40:43.255871 env[1141]: time="2024-12-13T14:40:43.255787519Z" level=info msg="shim disconnected" id=4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458 Dec 13 14:40:43.256156 env[1141]: time="2024-12-13T14:40:43.256135295Z" level=warning msg="cleaning up after shim disconnected" id=4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458 namespace=k8s.io Dec 13 14:40:43.256253 env[1141]: time="2024-12-13T14:40:43.256236776Z" level=info msg="cleaning up dead shim" Dec 13 14:40:43.264928 env[1141]: time="2024-12-13T14:40:43.263745646Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3751 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:40:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:40:43.264928 env[1141]: time="2024-12-13T14:40:43.264011537Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Dec 13 14:40:43.264928 env[1141]: time="2024-12-13T14:40:43.264414165Z" level=error msg="Failed to pipe stdout of container \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\"" error="reading from a closed fifo" Dec 13 14:40:43.265310 env[1141]: time="2024-12-13T14:40:43.265273395Z" level=error msg="Failed to pipe stderr of container \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\"" error="reading from a closed fifo" Dec 13 14:40:43.267064 env[1141]: time="2024-12-13T14:40:43.267014075Z" level=error msg="StartContainer for \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:40:43.267452 kubelet[1972]: E1213 14:40:43.267309 1972 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: 
code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458" Dec 13 14:40:43.268831 kubelet[1972]: E1213 14:40:43.268801 1972 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:40:43.268831 kubelet[1972]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:40:43.268831 kubelet[1972]: rm /hostbin/cilium-mount Dec 13 14:40:43.268971 kubelet[1972]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m2wf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-69s68_kube-system(f5767551-9288-4b2b-b921-c2f81404bae5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:40:43.269171 kubelet[1972]: E1213 14:40:43.269145 1972 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-69s68" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" Dec 13 14:40:43.855865 sshd[3679]: Accepted publickey for core from 172.24.4.1 port 42278 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:43.860933 sshd[3679]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:43.876606 systemd-logind[1130]: New session 25 of user core. Dec 13 14:40:43.876658 systemd[1]: Started session-25.scope. 
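
The failure mode here is runc's container init dying on write /proc/self/attr/keycreate: invalid argument: the dumped container spec requests SELinuxOptions with Type:spc_t, and the kernel rejects that label write with EINVAL either when SELinux is not enabled or when the loaded policy does not define the requested type. A stdlib-only sketch that checks the usual enablement signal and reproduces the failing write; these are standard kernel interfaces, nothing host-specific, but run it only in a throwaway VM, since with enforcing SELinux and a valid context the write would actually relabel the task's future keys:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// SELinux is generally considered enabled when selinuxfs is mounted here
    	// (the same signal runc's go-selinux dependency keys off).
    	if _, err := os.Stat("/sys/fs/selinux"); err != nil {
    		fmt.Println("selinuxfs not mounted: SELinux disabled on this kernel")
    	}

    	// Reproduce the failing step from the runc error above: writing the
    	// desired key-creation label for the current task. With SELinux off,
    	// or with a type unknown to the loaded policy, this fails with EINVAL.
    	err := os.WriteFile("/proc/self/attr/keycreate",
    		[]byte("system_u:system_r:spc_t:s0"), 0)
    	fmt.Println("keycreate write:", err) // e.g. "... invalid argument"
    }
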
Dec 13 14:40:43.953613 env[1141]: time="2024-12-13T14:40:43.952156250Z" level=info msg="CreateContainer within sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 14:40:43.993863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407355908.mount: Deactivated successfully. Dec 13 14:40:44.000198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267527736.mount: Deactivated successfully. Dec 13 14:40:44.006949 env[1141]: time="2024-12-13T14:40:44.006840397Z" level=info msg="CreateContainer within sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\"" Dec 13 14:40:44.007759 env[1141]: time="2024-12-13T14:40:44.007433545Z" level=info msg="StartContainer for \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\"" Dec 13 14:40:44.025885 systemd[1]: Started cri-containerd-64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e.scope. Dec 13 14:40:44.040057 systemd[1]: cri-containerd-64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e.scope: Deactivated successfully. Dec 13 14:40:44.053240 env[1141]: time="2024-12-13T14:40:44.053177920Z" level=info msg="shim disconnected" id=64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e Dec 13 14:40:44.053761 env[1141]: time="2024-12-13T14:40:44.053739509Z" level=warning msg="cleaning up after shim disconnected" id=64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e namespace=k8s.io Dec 13 14:40:44.053865 env[1141]: time="2024-12-13T14:40:44.053849205Z" level=info msg="cleaning up dead shim" Dec 13 14:40:44.063673 env[1141]: time="2024-12-13T14:40:44.063611661Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3789 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:40:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:40:44.064186 env[1141]: time="2024-12-13T14:40:44.064123756Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Dec 13 14:40:44.064764 env[1141]: time="2024-12-13T14:40:44.064414974Z" level=error msg="Failed to pipe stdout of container \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\"" error="reading from a closed fifo" Dec 13 14:40:44.064875 env[1141]: time="2024-12-13T14:40:44.064517518Z" level=error msg="Failed to pipe stderr of container \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\"" error="reading from a closed fifo" Dec 13 14:40:44.068542 env[1141]: time="2024-12-13T14:40:44.068493079Z" level=error msg="StartContainer for \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:40:44.069513 kubelet[1972]: E1213 14:40:44.068986 1972 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e" Dec 13 14:40:44.069513 kubelet[1972]: E1213 14:40:44.069142 1972 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:40:44.069513 kubelet[1972]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:40:44.069513 kubelet[1972]: rm /hostbin/cilium-mount Dec 13 14:40:44.069513 kubelet[1972]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m2wf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-69s68_kube-system(f5767551-9288-4b2b-b921-c2f81404bae5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:40:44.069513 kubelet[1972]: E1213 14:40:44.069194 1972 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-69s68" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" Dec 13 14:40:44.694173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e-rootfs.mount: Deactivated successfully. Dec 13 14:40:44.850718 sshd[3679]: pam_unix(sshd:session): session closed for user core Dec 13 14:40:44.860820 systemd[1]: Started sshd@25-172.24.4.236:22-172.24.4.1:44230.service. Dec 13 14:40:44.862197 systemd[1]: sshd@24-172.24.4.236:22-172.24.4.1:42278.service: Deactivated successfully. 
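
After this second, byte-for-byte identical failure, pod_workers again reports "Error syncing pod, skipping" and the kubelet will keep retrying with backoff; the same RunContainerError also surfaces as events on the pod object. A hedged client-go sketch for pulling those events for cilium-69s68, assuming in-cluster credentials:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Events for the failing pod; the RunContainerError from the log shows
    	// up here with the same keycreate message.
    	events, err := clientset.CoreV1().Events("kube-system").List(context.Background(),
    		metav1.ListOptions{FieldSelector: "involvedObject.name=cilium-69s68"})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, e := range events.Items {
    		fmt.Printf("%s\t%s\t%s\n", e.LastTimestamp, e.Reason, e.Message)
    	}
    }
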
Dec 13 14:40:44.865310 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:40:44.871620 systemd-logind[1130]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:40:44.874579 systemd-logind[1130]: Removed session 25. Dec 13 14:40:44.950369 env[1141]: time="2024-12-13T14:40:44.937703388Z" level=info msg="StopPodSandbox for \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\"" Dec 13 14:40:44.950369 env[1141]: time="2024-12-13T14:40:44.937901321Z" level=info msg="Container to stop \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:44.950369 env[1141]: time="2024-12-13T14:40:44.937994837Z" level=info msg="Container to stop \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:40:44.950369 env[1141]: time="2024-12-13T14:40:44.940992366Z" level=info msg="RemoveContainer for \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\"" Dec 13 14:40:44.943154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6-shm.mount: Deactivated successfully. Dec 13 14:40:44.954303 kubelet[1972]: I1213 14:40:44.936426 1972 scope.go:117] "RemoveContainer" containerID="4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458" Dec 13 14:40:44.970381 env[1141]: time="2024-12-13T14:40:44.970288278Z" level=info msg="RemoveContainer for \"4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458\" returns successfully" Dec 13 14:40:44.982934 systemd[1]: cri-containerd-3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6.scope: Deactivated successfully. Dec 13 14:40:45.023882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6-rootfs.mount: Deactivated successfully. Dec 13 14:40:45.034100 env[1141]: time="2024-12-13T14:40:45.034036073Z" level=info msg="shim disconnected" id=3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6 Dec 13 14:40:45.034425 env[1141]: time="2024-12-13T14:40:45.034403516Z" level=warning msg="cleaning up after shim disconnected" id=3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6 namespace=k8s.io Dec 13 14:40:45.034627 env[1141]: time="2024-12-13T14:40:45.034609374Z" level=info msg="cleaning up dead shim" Dec 13 14:40:45.043314 env[1141]: time="2024-12-13T14:40:45.043273879Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3830 runtime=io.containerd.runc.v2\n" Dec 13 14:40:45.043815 env[1141]: time="2024-12-13T14:40:45.043785532Z" level=info msg="TearDown network for sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" successfully" Dec 13 14:40:45.043921 env[1141]: time="2024-12-13T14:40:45.043900549Z" level=info msg="StopPodSandbox for \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" returns successfully" Dec 13 14:40:45.107000 kubelet[1972]: I1213 14:40:45.106897 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.107345 kubelet[1972]: I1213 14:40:45.106765 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cni-path\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.107601 kubelet[1972]: I1213 14:40:45.107558 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-hostproc\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.107859 kubelet[1972]: I1213 14:40:45.107730 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.108018 kubelet[1972]: I1213 14:40:45.107996 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-ipsec-secrets\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.108230 kubelet[1972]: I1213 14:40:45.108207 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2wf8\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-kube-api-access-m2wf8\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.108391 kubelet[1972]: I1213 14:40:45.108373 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-config-path\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.108585 kubelet[1972]: I1213 14:40:45.108564 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-etc-cni-netd\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.108747 kubelet[1972]: I1213 14:40:45.108727 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-cgroup\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.108946 kubelet[1972]: I1213 14:40:45.108925 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-lib-modules\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.109168 kubelet[1972]: I1213 14:40:45.109142 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-clustermesh-secrets\") pod 
\"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.110710 kubelet[1972]: I1213 14:40:45.110636 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-net\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.110936 kubelet[1972]: I1213 14:40:45.110885 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-run\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.111141 kubelet[1972]: I1213 14:40:45.111091 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-kernel\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.111347 kubelet[1972]: I1213 14:40:45.111243 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-bpf-maps\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.111522 kubelet[1972]: I1213 14:40:45.111437 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-xtables-lock\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.111755 kubelet[1972]: I1213 14:40:45.111706 1972 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-hubble-tls\") pod \"f5767551-9288-4b2b-b921-c2f81404bae5\" (UID: \"f5767551-9288-4b2b-b921-c2f81404bae5\") " Dec 13 14:40:45.115902 kubelet[1972]: I1213 14:40:45.111958 1972 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cni-path\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.115902 kubelet[1972]: I1213 14:40:45.112222 1972 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-hostproc\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.127429 kubelet[1972]: I1213 14:40:45.127347 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-kube-api-access-m2wf8" (OuterVolumeSpecName: "kube-api-access-m2wf8") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "kube-api-access-m2wf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:40:45.128927 systemd[1]: var-lib-kubelet-pods-f5767551\x2d9288\x2d4b2b\x2db921\x2dc2f81404bae5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm2wf8.mount: Deactivated successfully. 
Dec 13 14:40:45.133315 kubelet[1972]: I1213 14:40:45.133247 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:40:45.133735 kubelet[1972]: I1213 14:40:45.133701 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.133978 kubelet[1972]: I1213 14:40:45.133942 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.134271 kubelet[1972]: I1213 14:40:45.134234 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.134967 kubelet[1972]: I1213 14:40:45.134926 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.135286 kubelet[1972]: I1213 14:40:45.135175 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.135404 kubelet[1972]: I1213 14:40:45.135307 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.135404 kubelet[1972]: I1213 14:40:45.135365 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.135770 kubelet[1972]: I1213 14:40:45.135409 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:40:45.141697 systemd[1]: var-lib-kubelet-pods-f5767551\x2d9288\x2d4b2b\x2db921\x2dc2f81404bae5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:40:45.146407 kubelet[1972]: I1213 14:40:45.146281 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:40:45.148693 kubelet[1972]: I1213 14:40:45.148607 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:40:45.151271 kubelet[1972]: I1213 14:40:45.151190 1972 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5767551-9288-4b2b-b921-c2f81404bae5" (UID: "f5767551-9288-4b2b-b921-c2f81404bae5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:40:45.215274 kubelet[1972]: I1213 14:40:45.212599 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-cgroup\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.215621 kubelet[1972]: I1213 14:40:45.215594 1972 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-lib-modules\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.215773 kubelet[1972]: I1213 14:40:45.215754 1972 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-clustermesh-secrets\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.215915 kubelet[1972]: I1213 14:40:45.215896 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-net\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216043 kubelet[1972]: I1213 14:40:45.216025 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-run\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216169 kubelet[1972]: I1213 14:40:45.216153 1972 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-host-proc-sys-kernel\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216296 kubelet[1972]: I1213 14:40:45.216279 1972 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-bpf-maps\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216425 kubelet[1972]: I1213 14:40:45.216408 1972 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-xtables-lock\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216606 kubelet[1972]: I1213 14:40:45.216586 1972 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-hubble-tls\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216744 kubelet[1972]: I1213 14:40:45.216727 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-ipsec-secrets\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216873 kubelet[1972]: I1213 14:40:45.216856 1972 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m2wf8\" (UniqueName: \"kubernetes.io/projected/f5767551-9288-4b2b-b921-c2f81404bae5-kube-api-access-m2wf8\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.216999 kubelet[1972]: I1213 14:40:45.216982 1972 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f5767551-9288-4b2b-b921-c2f81404bae5-cilium-config-path\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.217125 kubelet[1972]: I1213 14:40:45.217108 1972 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5767551-9288-4b2b-b921-c2f81404bae5-etc-cni-netd\") on node \"ci-3510-3-6-c-262737d7bc.novalocal\" DevicePath \"\"" Dec 13 14:40:45.393679 kubelet[1972]: I1213 14:40:45.393623 1972 setters.go:568] "Node became not ready" node="ci-3510-3-6-c-262737d7bc.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:40:45Z","lastTransitionTime":"2024-12-13T14:40:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:40:45.696533 systemd[1]: var-lib-kubelet-pods-f5767551\x2d9288\x2d4b2b\x2db921\x2dc2f81404bae5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:40:45.696780 systemd[1]: var-lib-kubelet-pods-f5767551\x2d9288\x2d4b2b\x2db921\x2dc2f81404bae5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:40:45.874127 systemd[1]: Removed slice kubepods-burstable-podf5767551_9288_4b2b_b921_c2f81404bae5.slice. Dec 13 14:40:45.942385 kubelet[1972]: I1213 14:40:45.942336 1972 scope.go:117] "RemoveContainer" containerID="64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e" Dec 13 14:40:45.954012 env[1141]: time="2024-12-13T14:40:45.951749785Z" level=info msg="RemoveContainer for \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\"" Dec 13 14:40:45.966257 env[1141]: time="2024-12-13T14:40:45.966037235Z" level=info msg="RemoveContainer for \"64cf463cb06cb2e9121e7a78bf25ae7a770a72826eb6da871fdc47304bf9466e\" returns successfully" Dec 13 14:40:46.046746 kubelet[1972]: I1213 14:40:46.046606 1972 topology_manager.go:215] "Topology Admit Handler" podUID="4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a" podNamespace="kube-system" podName="cilium-vrxkp" Dec 13 14:40:46.047681 kubelet[1972]: E1213 14:40:46.046799 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" containerName="mount-cgroup" Dec 13 14:40:46.047681 kubelet[1972]: E1213 14:40:46.046835 1972 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" containerName="mount-cgroup" Dec 13 14:40:46.047681 kubelet[1972]: I1213 14:40:46.046936 1972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" containerName="mount-cgroup" Dec 13 14:40:46.047681 kubelet[1972]: I1213 14:40:46.046999 1972 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" containerName="mount-cgroup" Dec 13 14:40:46.062276 systemd[1]: Created slice kubepods-burstable-pod4ecc870a_d54a_4a3b_b0f2_4a7e6e27fa8a.slice. Dec 13 14:40:46.102537 sshd[3809]: Accepted publickey for core from 172.24.4.1 port 44230 ssh2: RSA SHA256:2ngTm68CMx36X1xnKPqUJq9w0RJJht3bhOuOq01A7tI Dec 13 14:40:46.103880 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:40:46.109176 systemd-logind[1130]: New session 26 of user core. Dec 13 14:40:46.109847 systemd[1]: Started session-26.scope. 
Dec 13 14:40:46.123125 kubelet[1972]: I1213 14:40:46.123083 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-cni-path\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123289 kubelet[1972]: I1213 14:40:46.123230 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-cilium-cgroup\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123289 kubelet[1972]: I1213 14:40:46.123270 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-host-proc-sys-kernel\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123478 kubelet[1972]: I1213 14:40:46.123340 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-hubble-tls\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123526 kubelet[1972]: I1213 14:40:46.123432 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-hostproc\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123526 kubelet[1972]: I1213 14:40:46.123523 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-bpf-maps\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123612 kubelet[1972]: I1213 14:40:46.123594 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-etc-cni-netd\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123679 kubelet[1972]: I1213 14:40:46.123661 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-cilium-config-path\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123714 kubelet[1972]: I1213 14:40:46.123698 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjd6\" (UniqueName: \"kubernetes.io/projected/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-kube-api-access-7xjd6\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123792 kubelet[1972]: I1213 14:40:46.123768 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-lib-modules\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123857 kubelet[1972]: I1213 14:40:46.123837 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-cilium-ipsec-secrets\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123898 kubelet[1972]: I1213 14:40:46.123871 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-host-proc-sys-net\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.123958 kubelet[1972]: I1213 14:40:46.123933 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-cilium-run\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.124034 kubelet[1972]: I1213 14:40:46.124017 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-xtables-lock\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.124107 kubelet[1972]: I1213 14:40:46.124091 1972 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a-clustermesh-secrets\") pod \"cilium-vrxkp\" (UID: \"4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a\") " pod="kube-system/cilium-vrxkp" Dec 13 14:40:46.367821 env[1141]: time="2024-12-13T14:40:46.367686257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrxkp,Uid:4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a,Namespace:kube-system,Attempt:0,}" Dec 13 14:40:46.409305 kubelet[1972]: W1213 14:40:46.409002 1972 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5767551_9288_4b2b_b921_c2f81404bae5.slice/cri-containerd-4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458.scope WatchSource:0}: container "4c79dbc5dd64bd25423377181b20fbc148e8364682a076e7402ce9f6ed899458" in namespace "k8s.io": not found Dec 13 14:40:46.420720 env[1141]: time="2024-12-13T14:40:46.420591845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:40:46.421172 env[1141]: time="2024-12-13T14:40:46.421108558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:40:46.421526 env[1141]: time="2024-12-13T14:40:46.421402993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:40:46.422127 env[1141]: time="2024-12-13T14:40:46.422059179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084 pid=3862 runtime=io.containerd.runc.v2 Dec 13 14:40:46.462289 systemd[1]: Started cri-containerd-ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084.scope. Dec 13 14:40:46.528386 env[1141]: time="2024-12-13T14:40:46.528322465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vrxkp,Uid:4ecc870a-d54a-4a3b-b0f2-4a7e6e27fa8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\"" Dec 13 14:40:46.533993 env[1141]: time="2024-12-13T14:40:46.533947964Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:40:46.558196 env[1141]: time="2024-12-13T14:40:46.558139502Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa\"" Dec 13 14:40:46.559030 env[1141]: time="2024-12-13T14:40:46.558851263Z" level=info msg="StartContainer for \"2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa\"" Dec 13 14:40:46.581695 systemd[1]: Started cri-containerd-2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa.scope. Dec 13 14:40:46.639195 env[1141]: time="2024-12-13T14:40:46.639038049Z" level=info msg="StartContainer for \"2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa\" returns successfully" Dec 13 14:40:46.679139 systemd[1]: cri-containerd-2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa.scope: Deactivated successfully. Dec 13 14:40:46.727939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa-rootfs.mount: Deactivated successfully. 
Dec 13 14:40:46.750949 env[1141]: time="2024-12-13T14:40:46.750886879Z" level=info msg="shim disconnected" id=2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa Dec 13 14:40:46.750949 env[1141]: time="2024-12-13T14:40:46.750941792Z" level=warning msg="cleaning up after shim disconnected" id=2af626f50416fb3436d86d55abf9ab101417abff1a9f8cd5706f1a8a3e4895aa namespace=k8s.io Dec 13 14:40:46.751269 env[1141]: time="2024-12-13T14:40:46.750954436Z" level=info msg="cleaning up dead shim" Dec 13 14:40:46.766573 env[1141]: time="2024-12-13T14:40:46.766404966Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3948 runtime=io.containerd.runc.v2\n" Dec 13 14:40:46.935870 kubelet[1972]: E1213 14:40:46.935737 1972 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:40:46.948243 env[1141]: time="2024-12-13T14:40:46.948198034Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:40:46.963129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789456168.mount: Deactivated successfully. Dec 13 14:40:46.977175 env[1141]: time="2024-12-13T14:40:46.977107992Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08\"" Dec 13 14:40:46.986041 env[1141]: time="2024-12-13T14:40:46.985995035Z" level=info msg="StartContainer for \"42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08\"" Dec 13 14:40:47.007363 systemd[1]: Started cri-containerd-42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08.scope. Dec 13 14:40:47.059016 env[1141]: time="2024-12-13T14:40:47.058960509Z" level=info msg="StartContainer for \"42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08\" returns successfully" Dec 13 14:40:47.067377 systemd[1]: cri-containerd-42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08.scope: Deactivated successfully. Dec 13 14:40:47.097068 env[1141]: time="2024-12-13T14:40:47.096996347Z" level=info msg="shim disconnected" id=42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08 Dec 13 14:40:47.097406 env[1141]: time="2024-12-13T14:40:47.097386000Z" level=warning msg="cleaning up after shim disconnected" id=42384cabe800f5ac8345745c011a70995ef1779e19d28bfb89a45a99275f9e08 namespace=k8s.io Dec 13 14:40:47.097504 env[1141]: time="2024-12-13T14:40:47.097488874Z" level=info msg="cleaning up dead shim" Dec 13 14:40:47.110955 env[1141]: time="2024-12-13T14:40:47.110902725Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" Dec 13 14:40:47.697654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3083239672.mount: Deactivated successfully. 
Dec 13 14:40:47.864967 kubelet[1972]: I1213 14:40:47.864882 1972 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f5767551-9288-4b2b-b921-c2f81404bae5" path="/var/lib/kubelet/pods/f5767551-9288-4b2b-b921-c2f81404bae5/volumes" Dec 13 14:40:47.966974 env[1141]: time="2024-12-13T14:40:47.964697851Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:40:48.002868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491138572.mount: Deactivated successfully. Dec 13 14:40:48.023677 env[1141]: time="2024-12-13T14:40:48.023558789Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb\"" Dec 13 14:40:48.025167 env[1141]: time="2024-12-13T14:40:48.025106675Z" level=info msg="StartContainer for \"e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb\"" Dec 13 14:40:48.064270 systemd[1]: Started cri-containerd-e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb.scope. Dec 13 14:40:48.111576 env[1141]: time="2024-12-13T14:40:48.111493121Z" level=info msg="StartContainer for \"e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb\" returns successfully" Dec 13 14:40:48.115333 systemd[1]: cri-containerd-e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb.scope: Deactivated successfully. Dec 13 14:40:48.151364 env[1141]: time="2024-12-13T14:40:48.151309398Z" level=info msg="shim disconnected" id=e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb Dec 13 14:40:48.151697 env[1141]: time="2024-12-13T14:40:48.151677722Z" level=warning msg="cleaning up after shim disconnected" id=e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb namespace=k8s.io Dec 13 14:40:48.151772 env[1141]: time="2024-12-13T14:40:48.151758273Z" level=info msg="cleaning up dead shim" Dec 13 14:40:48.159580 env[1141]: time="2024-12-13T14:40:48.159535143Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4067 runtime=io.containerd.runc.v2\n" Dec 13 14:40:48.697740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e891e4a1c585179564ac7b06cb122b202f81db513b85f642269053064ba272fb-rootfs.mount: Deactivated successfully. Dec 13 14:40:48.978928 env[1141]: time="2024-12-13T14:40:48.977653090Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:40:49.007026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199383783.mount: Deactivated successfully. 
Dec 13 14:40:49.018013 env[1141]: time="2024-12-13T14:40:49.017756325Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2\"" Dec 13 14:40:49.021144 env[1141]: time="2024-12-13T14:40:49.021071891Z" level=info msg="StartContainer for \"fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2\"" Dec 13 14:40:49.066257 systemd[1]: Started cri-containerd-fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2.scope. Dec 13 14:40:49.107885 systemd[1]: cri-containerd-fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2.scope: Deactivated successfully. Dec 13 14:40:49.109119 env[1141]: time="2024-12-13T14:40:49.109069783Z" level=info msg="StartContainer for \"fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2\" returns successfully" Dec 13 14:40:49.136697 env[1141]: time="2024-12-13T14:40:49.136635870Z" level=info msg="shim disconnected" id=fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2 Dec 13 14:40:49.137078 env[1141]: time="2024-12-13T14:40:49.137056482Z" level=warning msg="cleaning up after shim disconnected" id=fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2 namespace=k8s.io Dec 13 14:40:49.137150 env[1141]: time="2024-12-13T14:40:49.137135180Z" level=info msg="cleaning up dead shim" Dec 13 14:40:49.145687 env[1141]: time="2024-12-13T14:40:49.145629672Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:40:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n" Dec 13 14:40:49.697665 systemd[1]: run-containerd-runc-k8s.io-fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2-runc.oPqcCh.mount: Deactivated successfully. Dec 13 14:40:49.697967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb82531df1bce8044256b96a15c673d494dae5147ef990623f92165b8beaa3c2-rootfs.mount: Deactivated successfully. Dec 13 14:40:49.987233 env[1141]: time="2024-12-13T14:40:49.986751477Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:40:50.025599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689793521.mount: Deactivated successfully. Dec 13 14:40:50.043734 env[1141]: time="2024-12-13T14:40:50.043631809Z" level=info msg="CreateContainer within sandbox \"ddb1896d0e4e5f2ca06e68bd465d190e0f3d2fd75a7fb0897041a569112a3084\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7\"" Dec 13 14:40:50.047008 env[1141]: time="2024-12-13T14:40:50.046700659Z" level=info msg="StartContainer for \"4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7\"" Dec 13 14:40:50.071035 systemd[1]: Started cri-containerd-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7.scope. 
Dec 13 14:40:50.119177 env[1141]: time="2024-12-13T14:40:50.117780394Z" level=info msg="StartContainer for \"4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7\" returns successfully" Dec 13 14:40:51.002256 kubelet[1972]: I1213 14:40:51.002199 1972 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vrxkp" podStartSLOduration=5.002132454 podStartE2EDuration="5.002132454s" podCreationTimestamp="2024-12-13 14:40:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:40:51.000236293 +0000 UTC m=+179.381145802" watchObservedRunningTime="2024-12-13 14:40:51.002132454 +0000 UTC m=+179.383041953" Dec 13 14:40:51.066525 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:40:51.119496 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Dec 13 14:40:51.827242 env[1141]: time="2024-12-13T14:40:51.827150872Z" level=info msg="StopPodSandbox for \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\"" Dec 13 14:40:51.828134 env[1141]: time="2024-12-13T14:40:51.827399881Z" level=info msg="TearDown network for sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" successfully" Dec 13 14:40:51.828134 env[1141]: time="2024-12-13T14:40:51.827523604Z" level=info msg="StopPodSandbox for \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" returns successfully" Dec 13 14:40:51.828450 env[1141]: time="2024-12-13T14:40:51.828361352Z" level=info msg="RemovePodSandbox for \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\"" Dec 13 14:40:51.828667 env[1141]: time="2024-12-13T14:40:51.828454377Z" level=info msg="Forcibly stopping sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\"" Dec 13 14:40:51.828797 env[1141]: time="2024-12-13T14:40:51.828719247Z" level=info msg="TearDown network for sandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" successfully" Dec 13 14:40:51.837825 env[1141]: time="2024-12-13T14:40:51.837661699Z" level=info msg="RemovePodSandbox \"3423df3478a6afec5912a37ead836138b2a0f4f63d9639a9a70353bf979ba0a6\" returns successfully" Dec 13 14:40:51.838973 env[1141]: time="2024-12-13T14:40:51.838843095Z" level=info msg="StopPodSandbox for \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\"" Dec 13 14:40:51.839321 env[1141]: time="2024-12-13T14:40:51.839026279Z" level=info msg="TearDown network for sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" successfully" Dec 13 14:40:51.839321 env[1141]: time="2024-12-13T14:40:51.839105709Z" level=info msg="StopPodSandbox for \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" returns successfully" Dec 13 14:40:51.840358 env[1141]: time="2024-12-13T14:40:51.840282676Z" level=info msg="RemovePodSandbox for \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\"" Dec 13 14:40:51.840833 env[1141]: time="2024-12-13T14:40:51.840705722Z" level=info msg="Forcibly stopping sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\"" Dec 13 14:40:51.841356 env[1141]: time="2024-12-13T14:40:51.841284703Z" level=info msg="TearDown network for sandbox \"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" successfully" Dec 13 14:40:51.851254 env[1141]: time="2024-12-13T14:40:51.851178257Z" level=info msg="RemovePodSandbox 
\"39bff6624f7f476f78a1177bfeeb02c39cc85bdefdfe8682a2b6be7385504203\" returns successfully" Dec 13 14:40:51.852996 env[1141]: time="2024-12-13T14:40:51.852897536Z" level=info msg="StopPodSandbox for \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\"" Dec 13 14:40:51.853267 env[1141]: time="2024-12-13T14:40:51.853162685Z" level=info msg="TearDown network for sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" successfully" Dec 13 14:40:51.853373 env[1141]: time="2024-12-13T14:40:51.853259778Z" level=info msg="StopPodSandbox for \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" returns successfully" Dec 13 14:40:51.870736 env[1141]: time="2024-12-13T14:40:51.870641687Z" level=info msg="RemovePodSandbox for \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\"" Dec 13 14:40:51.871180 env[1141]: time="2024-12-13T14:40:51.870724413Z" level=info msg="Forcibly stopping sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\"" Dec 13 14:40:51.871180 env[1141]: time="2024-12-13T14:40:51.870940078Z" level=info msg="TearDown network for sandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" successfully" Dec 13 14:40:51.883715 env[1141]: time="2024-12-13T14:40:51.883612607Z" level=info msg="RemovePodSandbox \"8a7991e5ccd7e8fd7dabaac0b2749ea28a9800295b8aab0cf055399dd8fa72ba\" returns successfully" Dec 13 14:40:53.067128 systemd[1]: run-containerd-runc-k8s.io-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7-runc.Y0fLb6.mount: Deactivated successfully. Dec 13 14:40:54.446582 systemd-networkd[970]: lxc_health: Link UP Dec 13 14:40:54.465080 systemd-networkd[970]: lxc_health: Gained carrier Dec 13 14:40:54.465532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:40:55.406372 systemd[1]: run-containerd-runc-k8s.io-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7-runc.ADzDzP.mount: Deactivated successfully. Dec 13 14:40:55.931648 systemd-networkd[970]: lxc_health: Gained IPv6LL Dec 13 14:40:57.726362 systemd[1]: run-containerd-runc-k8s.io-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7-runc.RDIRkD.mount: Deactivated successfully. Dec 13 14:40:59.923877 systemd[1]: run-containerd-runc-k8s.io-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7-runc.IOvJPb.mount: Deactivated successfully. Dec 13 14:41:02.132300 systemd[1]: run-containerd-runc-k8s.io-4e4a11f8e4aa9727aa984d73fe95f70ad087258f84ffa3e7a7d5d3206cd93fd7-runc.qribe6.mount: Deactivated successfully. Dec 13 14:41:02.432534 sshd[3809]: pam_unix(sshd:session): session closed for user core Dec 13 14:41:02.438920 systemd[1]: sshd@25-172.24.4.236:22-172.24.4.1:44230.service: Deactivated successfully. Dec 13 14:41:02.440458 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:41:02.442974 systemd-logind[1130]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:41:02.445920 systemd-logind[1130]: Removed session 26.