Dec 13 15:11:45.854887 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 15:11:45.854919 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:11:45.854995 kernel: BIOS-provided physical RAM map:
Dec 13 15:11:45.855003 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 15:11:45.855009 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 15:11:45.855016 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 15:11:45.855024 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 15:11:45.855032 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 15:11:45.855038 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 15:11:45.855045 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 15:11:45.855055 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 15:11:45.855062 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 15:11:45.855069 kernel: NX (Execute Disable) protection: active
Dec 13 15:11:45.855076 kernel: SMBIOS 2.8 present.
Dec 13 15:11:45.855085 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 15:11:45.855093 kernel: Hypervisor detected: KVM
Dec 13 15:11:45.855103 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 15:11:45.855111 kernel: kvm-clock: cpu 0, msr 5b19a001, primary cpu clock
Dec 13 15:11:45.855118 kernel: kvm-clock: using sched offset of 4106272116 cycles
Dec 13 15:11:45.855127 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 15:11:45.855135 kernel: tsc: Detected 2294.608 MHz processor
Dec 13 15:11:45.855143 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 15:11:45.855151 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 15:11:45.855159 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 15:11:45.855167 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 15:11:45.855177 kernel: Using GB pages for direct mapping
Dec 13 15:11:45.855185 kernel: ACPI: Early table checksum verification disabled
Dec 13 15:11:45.855192 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 15:11:45.855200 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855208 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855216 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855224 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 15:11:45.855231 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855239 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855249 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855256 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 15:11:45.855264 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 15:11:45.855272 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 15:11:45.855280 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 15:11:45.855288 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 15:11:45.855299 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 15:11:45.855310 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 15:11:45.855319 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 15:11:45.855327 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 15:11:45.855336 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 15:11:45.855344 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 15:11:45.855352 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 15:11:45.855360 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 15:11:45.855371 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 15:11:45.855379 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 15:11:45.855387 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 15:11:45.855396 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 15:11:45.855404 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 15:11:45.855412 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 15:11:45.855420 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 15:11:45.855428 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 15:11:45.855437 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 15:11:45.855445 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 15:11:45.855455 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 15:11:45.855464 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 15:11:45.855472 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 15:11:45.855481 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 15:11:45.855489 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 15:11:45.855498 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 15:11:45.855506 kernel: Zone ranges:
Dec 13 15:11:45.855515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 15:11:45.855523 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 15:11:45.855534 kernel: Normal empty
Dec 13 15:11:45.855542 kernel: Movable zone start for each node
Dec 13 15:11:45.855551 kernel: Early memory node ranges
Dec 13 15:11:45.855559 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 15:11:45.855567 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 15:11:45.855576 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 15:11:45.855584 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 15:11:45.855592 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 15:11:45.855601 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 15:11:45.855611 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 15:11:45.855620 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 15:11:45.855628 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 15:11:45.855637 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 15:11:45.855645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 15:11:45.855654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 15:11:45.855662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 15:11:45.855670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 15:11:45.855679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 15:11:45.855689 kernel: TSC deadline timer available
Dec 13 15:11:45.855698 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 15:11:45.855706 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 15:11:45.855714 kernel: Booting paravirtualized kernel on KVM
Dec 13 15:11:45.855723 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 15:11:45.855732 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 15:11:45.855740 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 15:11:45.855757 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 15:11:45.855766 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 15:11:45.855776 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Dec 13 15:11:45.855785 kernel: kvm-guest: PV spinlocks enabled
Dec 13 15:11:45.855793 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 15:11:45.855802 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 15:11:45.855810 kernel: Policy zone: DMA32
Dec 13 15:11:45.855820 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 15:11:45.855829 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 15:11:45.855837 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 15:11:45.855848 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 15:11:45.855857 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 15:11:45.855865 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 192524K reserved, 0K cma-reserved)
Dec 13 15:11:45.855874 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 15:11:45.855882 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 15:11:45.855891 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 15:11:45.855899 kernel: rcu: Hierarchical RCU implementation.
Dec 13 15:11:45.855908 kernel: rcu: RCU event tracing is enabled.
Dec 13 15:11:45.855917 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 15:11:45.855942 kernel: Rude variant of Tasks RCU enabled.
Dec 13 15:11:45.855951 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 15:11:45.855960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 15:11:45.855968 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 15:11:45.855977 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 15:11:45.855985 kernel: random: crng init done
Dec 13 15:11:45.855994 kernel: Console: colour VGA+ 80x25
Dec 13 15:11:45.856013 kernel: printk: console [tty0] enabled
Dec 13 15:11:45.856023 kernel: printk: console [ttyS0] enabled
Dec 13 15:11:45.856032 kernel: ACPI: Core revision 20210730
Dec 13 15:11:45.856041 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 15:11:45.856049 kernel: x2apic enabled
Dec 13 15:11:45.856061 kernel: Switched APIC routing to physical x2apic.
Dec 13 15:11:45.856070 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns
Dec 13 15:11:45.856079 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608)
Dec 13 15:11:45.856088 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 15:11:45.856097 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 15:11:45.856109 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 15:11:45.856118 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 15:11:45.856127 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 15:11:45.856136 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 15:11:45.856145 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 15:11:45.856154 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 15:11:45.856163 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 15:11:45.856172 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 15:11:45.856180 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 15:11:45.856189 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 15:11:45.856198 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 15:11:45.856209 kernel: TAA: Mitigation: Clear CPU buffers
Dec 13 15:11:45.856219 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 15:11:45.856227 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 15:11:45.856236 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 15:11:45.856245 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 15:11:45.856254 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 15:11:45.856263 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 15:11:45.856272 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 15:11:45.856281 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 15:11:45.856290 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 15:11:45.856301 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 15:11:45.856310 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 15:11:45.856319 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 15:11:45.856328 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 15:11:45.856336 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Dec 13 15:11:45.856345 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Dec 13 15:11:45.856354 kernel: Freeing SMP alternatives memory: 32K
Dec 13 15:11:45.856363 kernel: pid_max: default: 32768 minimum: 301
Dec 13 15:11:45.856372 kernel: LSM: Security Framework initializing
Dec 13 15:11:45.856381 kernel: SELinux: Initializing.
Dec 13 15:11:45.856389 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:11:45.856399 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 15:11:45.856410 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6)
Dec 13 15:11:45.856419 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 15:11:45.856428 kernel: signal: max sigframe size: 3632
Dec 13 15:11:45.856437 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 15:11:45.856446 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 15:11:45.856455 kernel: smp: Bringing up secondary CPUs ...
Dec 13 15:11:45.856464 kernel: x86: Booting SMP configuration:
Dec 13 15:11:45.856473 kernel: .... node #0, CPUs: #1
Dec 13 15:11:45.856483 kernel: kvm-clock: cpu 1, msr 5b19a041, secondary cpu clock
Dec 13 15:11:45.856494 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 15:11:45.856503 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Dec 13 15:11:45.856512 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 15:11:45.856521 kernel: smpboot: Max logical packages: 16
Dec 13 15:11:45.856530 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS)
Dec 13 15:11:45.856539 kernel: devtmpfs: initialized
Dec 13 15:11:45.856548 kernel: x86/mm: Memory block size: 128MB
Dec 13 15:11:45.856557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 15:11:45.856566 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 15:11:45.856575 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 15:11:45.856586 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 15:11:45.856595 kernel: audit: initializing netlink subsys (disabled)
Dec 13 15:11:45.856604 kernel: audit: type=2000 audit(1734102704.972:1): state=initialized audit_enabled=0 res=1
Dec 13 15:11:45.856613 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 15:11:45.856622 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 15:11:45.856631 kernel: cpuidle: using governor menu
Dec 13 15:11:45.856640 kernel: ACPI: bus type PCI registered
Dec 13 15:11:45.856649 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 15:11:45.856658 kernel: dca service started, version 1.12.1
Dec 13 15:11:45.856669 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 15:11:45.856678 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 15:11:45.856687 kernel: PCI: Using configuration type 1 for base access
Dec 13 15:11:45.856696 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 15:11:45.856705 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 15:11:45.856714 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 15:11:45.856723 kernel: ACPI: Added _OSI(Module Device)
Dec 13 15:11:45.856733 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 15:11:45.856747 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 15:11:45.856758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 15:11:45.856767 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 15:11:45.856776 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 15:11:45.856785 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 15:11:45.856794 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 15:11:45.856803 kernel: ACPI: Interpreter enabled
Dec 13 15:11:45.856812 kernel: ACPI: PM: (supports S0 S5)
Dec 13 15:11:45.856821 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 15:11:45.856830 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 15:11:45.856842 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 15:11:45.856851 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 15:11:45.857003 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 15:11:45.857093 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 15:11:45.857176 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 15:11:45.857188 kernel: PCI host bridge to bus 0000:00
Dec 13 15:11:45.857277 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 15:11:45.857356 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 15:11:45.857429 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 15:11:45.857503 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 15:11:45.857576 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 15:11:45.857649 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:11:45.857722 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 15:11:45.857827 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 15:11:45.857935 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 15:11:45.858023 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 15:11:45.858108 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 15:11:45.858193 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 15:11:45.858281 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 15:11:45.858381 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.858473 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 15:11:45.858585 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.858683 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 15:11:45.858801 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.858904 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 15:11:45.869100 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.869218 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 15:11:45.869314 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.869402 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 15:11:45.869493 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.869576 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 15:11:45.869666 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.869763 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 15:11:45.869853 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 15:11:45.869947 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 15:11:45.870042 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 15:11:45.870127 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 15:11:45.870209 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 15:11:45.870293 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 15:11:45.870378 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 15:11:45.870465 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 15:11:45.870548 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 15:11:45.870631 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 15:11:45.870715 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 15:11:45.870809 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 15:11:45.870894 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 15:11:45.875300 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 15:11:45.875557 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 15:11:45.875776 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 15:11:45.876040 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 15:11:45.876242 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 15:11:45.876460 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 15:11:45.876705 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 15:11:45.877036 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:11:45.877238 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:11:45.877433 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:11:45.877651 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 15:11:45.877946 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 15:11:45.878182 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 15:11:45.878405 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:11:45.878615 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:11:45.878844 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 15:11:45.879095 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 15:11:45.879294 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:11:45.879491 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:11:45.879691 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:11:45.881956 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 15:11:45.882208 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 15:11:45.882409 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:11:45.882603 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:11:45.882869 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:11:45.890039 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:11:45.890151 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:11:45.890251 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:11:45.890340 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:11:45.890425 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:11:45.890509 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:11:45.890598 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:11:45.890681 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:11:45.890773 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:11:45.890863 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:11:45.890979 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:11:45.891065 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:11:45.891153 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:11:45.891236 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:11:45.891321 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:11:45.891334 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 15:11:45.891344 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 15:11:45.891354 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 15:11:45.891368 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 15:11:45.891378 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 15:11:45.891387 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 15:11:45.891397 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 15:11:45.891406 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 15:11:45.891416 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 15:11:45.891425 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 15:11:45.891435 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 15:11:45.891444 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 15:11:45.891457 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 15:11:45.891466 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 15:11:45.891476 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 15:11:45.891485 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 15:11:45.891495 kernel: iommu: Default domain type: Translated
Dec 13 15:11:45.891504 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 15:11:45.891592 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 15:11:45.891679 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 15:11:45.891773 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 15:11:45.891785 kernel: vgaarb: loaded
Dec 13 15:11:45.891795 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 15:11:45.891804 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 15:11:45.891814 kernel: PTP clock support registered
Dec 13 15:11:45.891824 kernel: PCI: Using ACPI for IRQ routing
Dec 13 15:11:45.891834 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 15:11:45.891843 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 15:11:45.891853 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 15:11:45.891866 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 15:11:45.891875 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 15:11:45.891885 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 15:11:45.891895 kernel: pnp: PnP ACPI init
Dec 13 15:11:45.892009 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 15:11:45.892024 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 15:11:45.892034 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 15:11:45.892044 kernel: NET: Registered PF_INET protocol family
Dec 13 15:11:45.892057 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 15:11:45.892067 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 15:11:45.892076 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 15:11:45.892086 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 15:11:45.892095 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 15:11:45.892105 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 15:11:45.892114 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:11:45.892124 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 15:11:45.892133 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 15:11:45.892146 kernel: NET: Registered PF_XDP protocol family
Dec 13 15:11:45.892235 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 15:11:45.892327 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 15:11:45.892414 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 15:11:45.892501 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 15:11:45.892586 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 15:11:45.892672 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 15:11:45.892768 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 15:11:45.892886 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 15:11:45.893030 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 15:11:45.893121 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 15:11:45.893205 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 15:11:45.893288 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 15:11:45.893395 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 15:11:45.893483 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 15:11:45.893568 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 15:11:45.893651 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 15:11:45.893752 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 15:11:45.893842 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 15:11:45.893937 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 15:11:45.894023 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 15:11:45.894133 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 15:11:45.894218 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 15:11:45.894303 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 15:11:45.894389 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 15:11:45.894476 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 15:11:45.894562 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 15:11:45.894653 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 15:11:45.894737 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 15:11:45.894829 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 15:11:45.894913 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 15:11:45.895021 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 15:11:45.895106 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 15:11:45.895189 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 15:11:45.895275 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 15:11:45.895359 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 15:11:45.895455 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 15:11:45.895540 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 15:11:45.895624 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 15:11:45.895707 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 15:11:45.895800 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 15:11:45.895884 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 15:11:45.896012 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 15:11:45.896097 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 15:11:45.896184 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 15:11:45.896266 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 15:11:45.896347 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 15:11:45.896432 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 15:11:45.896611 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 15:11:45.896698 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 15:11:45.896798 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 15:11:45.896887 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 15:11:45.901004 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 15:11:45.901095 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 15:11:45.901172 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 15:11:45.901247 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 15:11:45.901322 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 15:11:45.901417 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 15:11:45.901506 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 15:11:45.901585 kernel:
pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 15:11:45.901677 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 15:11:45.901813 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 13 15:11:45.901900 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 15:11:45.901992 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 15:11:45.902089 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 13 15:11:45.902175 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 15:11:45.902253 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 15:11:45.902342 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 15:11:45.902422 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 15:11:45.902507 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 15:11:45.902598 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 13 15:11:45.902688 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 15:11:45.902774 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 15:11:45.902867 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 13 15:11:45.902957 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 15:11:45.903038 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 15:11:45.903127 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 13 15:11:45.903207 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 15:11:45.903289 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 15:11:45.903377 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 13 15:11:45.903457 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 15:11:45.903540 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 
13 15:11:45.903555 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 15:11:45.903566 kernel: PCI: CLS 0 bytes, default 64 Dec 13 15:11:45.903576 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 15:11:45.903586 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 15:11:45.903600 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 15:11:45.903611 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Dec 13 15:11:45.903622 kernel: Initialise system trusted keyrings Dec 13 15:11:45.903636 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 15:11:45.903647 kernel: Key type asymmetric registered Dec 13 15:11:45.903657 kernel: Asymmetric key parser 'x509' registered Dec 13 15:11:45.903671 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 15:11:45.903681 kernel: io scheduler mq-deadline registered Dec 13 15:11:45.903691 kernel: io scheduler kyber registered Dec 13 15:11:45.903704 kernel: io scheduler bfq registered Dec 13 15:11:45.903804 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 15:11:45.903892 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 15:11:45.903988 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.904077 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 15:11:45.904164 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 15:11:45.904250 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.904342 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 15:11:45.904433 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 15:11:45.904519 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.904607 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 15:11:45.904692 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 15:11:45.904786 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.904877 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 15:11:45.908023 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 15:11:45.908119 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.908210 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 15:11:45.908302 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 15:11:45.908388 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.908485 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 15:11:45.908571 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 15:11:45.908655 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.908748 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 15:11:45.908835 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 15:11:45.908920 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 15:11:45.908985 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 15:11:45.908997 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 15:11:45.909007 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 
13 15:11:45.909018 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 15:11:45.909028 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 15:11:45.909038 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 15:11:45.909048 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 15:11:45.909058 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 15:11:45.909072 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 15:11:45.909176 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 15:11:45.909256 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 15:11:45.909333 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T15:11:45 UTC (1734102705) Dec 13 15:11:45.909409 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 15:11:45.909422 kernel: intel_pstate: CPU model not supported Dec 13 15:11:45.909432 kernel: NET: Registered PF_INET6 protocol family Dec 13 15:11:45.909442 kernel: Segment Routing with IPv6 Dec 13 15:11:45.909456 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 15:11:45.909466 kernel: NET: Registered PF_PACKET protocol family Dec 13 15:11:45.909476 kernel: Key type dns_resolver registered Dec 13 15:11:45.909486 kernel: IPI shorthand broadcast: enabled Dec 13 15:11:45.909496 kernel: sched_clock: Marking stable (727002561, 117916541)->(1033838268, -188919166) Dec 13 15:11:45.909506 kernel: registered taskstats version 1 Dec 13 15:11:45.909516 kernel: Loading compiled-in X.509 certificates Dec 13 15:11:45.909526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 15:11:45.909537 kernel: Key type .fscrypt registered Dec 13 15:11:45.909550 kernel: Key type fscrypt-provisioning registered Dec 13 15:11:45.909560 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 15:11:45.909570 kernel: ima: Allocated hash algorithm: sha1 Dec 13 15:11:45.909580 kernel: ima: No architecture policies found Dec 13 15:11:45.909591 kernel: clk: Disabling unused clocks Dec 13 15:11:45.909601 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 15:11:45.909611 kernel: Write protecting the kernel read-only data: 28672k Dec 13 15:11:45.909621 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 15:11:45.909635 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 15:11:45.909645 kernel: Run /init as init process Dec 13 15:11:45.909655 kernel: with arguments: Dec 13 15:11:45.909666 kernel: /init Dec 13 15:11:45.909676 kernel: with environment: Dec 13 15:11:45.909686 kernel: HOME=/ Dec 13 15:11:45.909696 kernel: TERM=linux Dec 13 15:11:45.909706 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 15:11:45.909720 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 15:11:45.909791 systemd[1]: Detected virtualization kvm. Dec 13 15:11:45.909803 systemd[1]: Detected architecture x86-64. Dec 13 15:11:45.909814 systemd[1]: Running in initrd. Dec 13 15:11:45.909824 systemd[1]: No hostname configured, using default hostname. Dec 13 15:11:45.909834 systemd[1]: Hostname set to . Dec 13 15:11:45.909845 systemd[1]: Initializing machine ID from VM UUID. Dec 13 15:11:45.909855 systemd[1]: Queued start job for default target initrd.target. Dec 13 15:11:45.909866 systemd[1]: Started systemd-ask-password-console.path. Dec 13 15:11:45.909879 systemd[1]: Reached target cryptsetup.target. Dec 13 15:11:45.909889 systemd[1]: Reached target paths.target. Dec 13 15:11:45.909899 systemd[1]: Reached target slices.target. 
Dec 13 15:11:45.909909 systemd[1]: Reached target swap.target. Dec 13 15:11:45.909920 systemd[1]: Reached target timers.target. Dec 13 15:11:45.909968 systemd[1]: Listening on iscsid.socket. Dec 13 15:11:45.909978 systemd[1]: Listening on iscsiuio.socket. Dec 13 15:11:45.909992 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 15:11:45.910002 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 15:11:45.910013 systemd[1]: Listening on systemd-journald.socket. Dec 13 15:11:45.910023 systemd[1]: Listening on systemd-networkd.socket. Dec 13 15:11:45.910034 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 15:11:45.910044 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 15:11:45.910055 systemd[1]: Reached target sockets.target. Dec 13 15:11:45.910065 systemd[1]: Starting kmod-static-nodes.service... Dec 13 15:11:45.910075 systemd[1]: Finished network-cleanup.service. Dec 13 15:11:45.910103 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 15:11:45.910114 systemd[1]: Starting systemd-journald.service... Dec 13 15:11:45.910127 systemd[1]: Starting systemd-modules-load.service... Dec 13 15:11:45.910137 systemd[1]: Starting systemd-resolved.service... Dec 13 15:11:45.910148 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 15:11:45.910159 systemd[1]: Finished kmod-static-nodes.service. Dec 13 15:11:45.910169 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 15:11:45.910181 kernel: audit: type=1130 audit(1734102705.842:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.910192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 15:11:45.910205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 15:11:45.910259 systemd-journald[202]: Journal started Dec 13 15:11:45.910331 systemd-journald[202]: Runtime Journal (/run/log/journal/d36dc4fea9874732b8ebc99935df98a7) is 4.7M, max 38.1M, 33.3M free. Dec 13 15:11:45.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.860245 systemd-modules-load[203]: Inserted module 'overlay' Dec 13 15:11:45.931679 kernel: Bridge firewalling registered Dec 13 15:11:45.931708 systemd[1]: Started systemd-resolved.service. Dec 13 15:11:45.931726 kernel: audit: type=1130 audit(1734102705.928:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.890384 systemd-resolved[204]: Positive Trust Anchors: Dec 13 15:11:45.933636 kernel: SCSI subsystem initialized Dec 13 15:11:45.933657 systemd[1]: Started systemd-journald.service. Dec 13 15:11:45.890397 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:11:45.890458 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 15:11:45.893603 systemd-resolved[204]: Defaulting to hostname 'linux'. 
Dec 13 15:11:45.912969 systemd-modules-load[203]: Inserted module 'br_netfilter' Dec 13 15:11:45.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.940393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 15:11:45.941443 kernel: audit: type=1130 audit(1734102705.936:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.941571 systemd[1]: Reached target nss-lookup.target. Dec 13 15:11:45.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.945998 kernel: audit: type=1130 audit(1734102705.940:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.946038 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 15:11:45.946162 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 15:11:45.953236 kernel: device-mapper: uevent: version 1.0.3 Dec 13 15:11:45.953297 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 15:11:45.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:45.960386 systemd-modules-load[203]: Inserted module 'dm_multipath' Dec 13 15:11:45.961158 kernel: audit: type=1130 audit(1734102705.952:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.961803 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 15:11:45.962465 systemd[1]: Finished systemd-modules-load.service. Dec 13 15:11:45.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.963988 systemd[1]: Starting systemd-sysctl.service... Dec 13 15:11:45.968085 kernel: audit: type=1130 audit(1734102705.962:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.975175 systemd[1]: Finished systemd-sysctl.service. Dec 13 15:11:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.990320 kernel: audit: type=1130 audit(1734102705.983:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.990361 kernel: audit: type=1130 audit(1734102705.986:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:45.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 15:11:45.987310 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 15:11:45.990574 systemd[1]: Starting dracut-cmdline.service... Dec 13 15:11:46.002121 dracut-cmdline[224]: dracut-dracut-053 Dec 13 15:11:46.004431 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 15:11:46.067962 kernel: Loading iSCSI transport class v2.0-870. Dec 13 15:11:46.086953 kernel: iscsi: registered transport (tcp) Dec 13 15:11:46.111072 kernel: iscsi: registered transport (qla4xxx) Dec 13 15:11:46.111162 kernel: QLogic iSCSI HBA Driver Dec 13 15:11:46.175008 systemd[1]: Finished dracut-cmdline.service. Dec 13 15:11:46.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.180948 kernel: audit: type=1130 audit(1734102706.174:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.178248 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 15:11:46.234104 kernel: raid6: avx512x4 gen() 17670 MB/s Dec 13 15:11:46.250988 kernel: raid6: avx512x4 xor() 7437 MB/s Dec 13 15:11:46.268005 kernel: raid6: avx512x2 gen() 17857 MB/s Dec 13 15:11:46.285045 kernel: raid6: avx512x2 xor() 21826 MB/s Dec 13 15:11:46.301995 kernel: raid6: avx512x1 gen() 17745 MB/s Dec 13 15:11:46.318999 kernel: raid6: avx512x1 xor() 19708 MB/s Dec 13 15:11:46.336027 kernel: raid6: avx2x4 gen() 17825 MB/s Dec 13 15:11:46.353008 kernel: raid6: avx2x4 xor() 7053 MB/s Dec 13 15:11:46.370013 kernel: raid6: avx2x2 gen() 17787 MB/s Dec 13 15:11:46.387004 kernel: raid6: avx2x2 xor() 16002 MB/s Dec 13 15:11:46.404086 kernel: raid6: avx2x1 gen() 13393 MB/s Dec 13 15:11:46.420996 kernel: raid6: avx2x1 xor() 13879 MB/s Dec 13 15:11:46.437991 kernel: raid6: sse2x4 gen() 8192 MB/s Dec 13 15:11:46.454990 kernel: raid6: sse2x4 xor() 5529 MB/s Dec 13 15:11:46.472023 kernel: raid6: sse2x2 gen() 9093 MB/s Dec 13 15:11:46.488982 kernel: raid6: sse2x2 xor() 5266 MB/s Dec 13 15:11:46.505993 kernel: raid6: sse2x1 gen() 8230 MB/s Dec 13 15:11:46.523574 kernel: raid6: sse2x1 xor() 4162 MB/s Dec 13 15:11:46.523673 kernel: raid6: using algorithm avx512x2 gen() 17857 MB/s Dec 13 15:11:46.523746 kernel: raid6: .... xor() 21826 MB/s, rmw enabled Dec 13 15:11:46.524306 kernel: raid6: using avx512x2 recovery algorithm Dec 13 15:11:46.538986 kernel: xor: automatically using best checksumming function avx Dec 13 15:11:46.642975 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 15:11:46.656957 systemd[1]: Finished dracut-pre-udev.service. Dec 13 15:11:46.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.656000 audit: BPF prog-id=7 op=LOAD Dec 13 15:11:46.656000 audit: BPF prog-id=8 op=LOAD Dec 13 15:11:46.658304 systemd[1]: Starting systemd-udevd.service... 
Dec 13 15:11:46.673019 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 15:11:46.678924 systemd[1]: Started systemd-udevd.service. Dec 13 15:11:46.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.684850 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 15:11:46.708175 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Dec 13 15:11:46.751166 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 15:11:46.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.752377 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 15:11:46.804291 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 15:11:46.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:46.860161 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 15:11:46.906757 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 15:11:46.906777 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 15:11:46.906790 kernel: AES CTR mode by8 optimization enabled Dec 13 15:11:46.906808 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 15:11:46.906820 kernel: GPT:17805311 != 125829119 Dec 13 15:11:46.906831 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 15:11:46.906842 kernel: GPT:17805311 != 125829119 Dec 13 15:11:46.906853 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 15:11:46.906865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:11:46.927263 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Dec 13 15:11:46.938377 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 15:11:46.941389 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 15:11:46.949739 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 15:11:46.950831 kernel: ACPI: bus type USB registered Dec 13 15:11:46.953937 kernel: usbcore: registered new interface driver usbfs Dec 13 15:11:46.954815 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 15:11:46.957121 kernel: usbcore: registered new interface driver hub Dec 13 15:11:46.957141 kernel: usbcore: registered new device driver usb Dec 13 15:11:46.963042 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 15:11:46.964743 systemd[1]: Starting disk-uuid.service... Dec 13 15:11:46.969844 disk-uuid[476]: Primary Header is updated. Dec 13 15:11:46.969844 disk-uuid[476]: Secondary Entries is updated. Dec 13 15:11:46.969844 disk-uuid[476]: Secondary Header is updated. Dec 13 15:11:46.973944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:11:46.973975 kernel: libata version 3.00 loaded. 
Dec 13 15:11:46.982944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:11:47.001951 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 15:11:47.059260 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 15:11:47.059281 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 15:11:47.059395 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 15:11:47.059489 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 15:11:47.059581 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 15:11:47.059687 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 15:11:47.059781 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 15:11:47.059879 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 15:11:47.059985 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 15:11:47.060077 kernel: hub 1-0:1.0: USB hub found Dec 13 15:11:47.060188 kernel: hub 1-0:1.0: 4 ports detected Dec 13 15:11:47.060289 kernel: scsi host0: ahci Dec 13 15:11:47.060389 kernel: scsi host1: ahci Dec 13 15:11:47.060490 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 15:11:47.060676 kernel: scsi host2: ahci Dec 13 15:11:47.060779 kernel: scsi host3: ahci Dec 13 15:11:47.060875 kernel: hub 2-0:1.0: USB hub found Dec 13 15:11:47.060992 kernel: hub 2-0:1.0: 4 ports detected Dec 13 15:11:47.061094 kernel: scsi host4: ahci Dec 13 15:11:47.061201 kernel: scsi host5: ahci Dec 13 15:11:47.061297 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Dec 13 15:11:47.061311 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Dec 13 15:11:47.061323 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Dec 13 15:11:47.061335 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Dec 13 15:11:47.061346 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Dec 13 15:11:47.061358 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Dec 13 15:11:47.279992 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 15:11:47.364974 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.365096 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.368659 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.368967 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.372972 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.374972 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 15:11:47.424951 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 15:11:47.430946 kernel: usbcore: registered new interface driver usbhid Dec 13 15:11:47.430980 kernel: usbhid: USB HID core driver Dec 13 15:11:47.436755 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Dec 13 15:11:47.436795 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 
15:11:47.985984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 15:11:47.986769 disk-uuid[477]: The operation has completed successfully. Dec 13 15:11:48.023036 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 15:11:48.023124 systemd[1]: Finished disk-uuid.service. Dec 13 15:11:48.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.024378 systemd[1]: Starting verity-setup.service... Dec 13 15:11:48.041947 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 15:11:48.093615 systemd[1]: Found device dev-mapper-usr.device. Dec 13 15:11:48.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.095910 systemd[1]: Mounting sysusr-usr.mount... Dec 13 15:11:48.096481 systemd[1]: Finished verity-setup.service. Dec 13 15:11:48.175984 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 15:11:48.176173 systemd[1]: Mounted sysusr-usr.mount. Dec 13 15:11:48.176717 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 15:11:48.177546 systemd[1]: Starting ignition-setup.service... Dec 13 15:11:48.189423 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 15:11:48.192452 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:11:48.192476 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:11:48.192489 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:11:48.207181 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 15:11:48.212500 systemd[1]: Finished ignition-setup.service. Dec 13 15:11:48.213786 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 15:11:48.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.334512 ignition[626]: Ignition 2.14.0 Dec 13 15:11:48.334525 ignition[626]: Stage: fetch-offline Dec 13 15:11:48.334597 ignition[626]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:48.334633 ignition[626]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:48.335871 ignition[626]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:48.336004 ignition[626]: parsed url from cmdline: "" Dec 13 15:11:48.336008 ignition[626]: no config URL provided Dec 13 15:11:48.336013 ignition[626]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:11:48.336021 ignition[626]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:11:48.336027 ignition[626]: failed to fetch config: resource requires networking Dec 13 15:11:48.336320 ignition[626]: Ignition finished successfully Dec 13 15:11:48.339787 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 15:11:48.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:48.340653 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 15:11:48.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.341000 audit: BPF prog-id=9 op=LOAD Dec 13 15:11:48.343127 systemd[1]: Starting systemd-networkd.service... Dec 13 15:11:48.364291 systemd-networkd[714]: lo: Link UP Dec 13 15:11:48.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.364302 systemd-networkd[714]: lo: Gained carrier Dec 13 15:11:48.364784 systemd-networkd[714]: Enumeration completed Dec 13 15:11:48.365000 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 15:11:48.365150 systemd[1]: Started systemd-networkd.service. Dec 13 15:11:48.366157 systemd-networkd[714]: eth0: Link UP Dec 13 15:11:48.366163 systemd-networkd[714]: eth0: Gained carrier Dec 13 15:11:48.366627 systemd[1]: Reached target network.target. Dec 13 15:11:48.368359 systemd[1]: Starting ignition-fetch.service... Dec 13 15:11:48.370876 systemd[1]: Starting iscsiuio.service... Dec 13 15:11:48.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.380994 systemd[1]: Started iscsiuio.service. Dec 13 15:11:48.382133 systemd[1]: Starting iscsid.service... 
Dec 13 15:11:48.383624 systemd-networkd[714]: eth0: DHCPv4 address 10.244.95.150/30, gateway 10.244.95.149 acquired from 10.244.95.149 Dec 13 15:11:48.385898 ignition[716]: Ignition 2.14.0 Dec 13 15:11:48.388154 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:11:48.388154 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 15:11:48.388154 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 15:11:48.388154 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 15:11:48.388154 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 15:11:48.388154 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 15:11:48.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.385906 ignition[716]: Stage: fetch Dec 13 15:11:48.390913 systemd[1]: Started iscsid.service. Dec 13 15:11:48.386046 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:48.392849 systemd[1]: Starting dracut-initqueue.service... 
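The iscsid warning above includes its own remedy: create /etc/iscsi/initiatorname.iscsi with a single InitiatorName= line. A minimal sketch, staged in a scratch directory so it is safe to run anywhere (on a real host the file belongs at /etc/iscsi/initiatorname.iscsi, and the IQN value below is illustrative; open-iscsi's `iscsi-iname` tool can generate a real one):

```shell
# Stage the InitiatorName file iscsid warns about. The directory and the
# IQN value are illustrative; the real path is /etc/iscsi/.
conf_dir=$(mktemp -d)
printf 'InitiatorName=iqn.2024-12.com.example:srv-jc9g4\n' \
    > "$conf_dir/initiatorname.iscsi"
cat "$conf_dir/initiatorname.iscsi"
```

The warning is harmless for this boot: no iSCSI targets are in use, and iscsid starts anyway, as the SERVICE_START record that follows shows.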
Dec 13 15:11:48.386063 ignition[716]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:48.387223 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:48.387343 ignition[716]: parsed url from cmdline: "" Dec 13 15:11:48.387348 ignition[716]: no config URL provided Dec 13 15:11:48.387354 ignition[716]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 15:11:48.387362 ignition[716]: no config at "/usr/lib/ignition/user.ign" Dec 13 15:11:48.390580 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 15:11:48.390608 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 15:11:48.391214 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 15:11:48.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.404307 systemd[1]: Finished dracut-initqueue.service. Dec 13 15:11:48.404754 systemd[1]: Reached target remote-fs-pre.target. Dec 13 15:11:48.405105 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 15:11:48.405438 systemd[1]: Reached target remote-fs.target. Dec 13 15:11:48.406424 systemd[1]: Starting dracut-pre-mount.service... Dec 13 15:11:48.411149 ignition[716]: GET result: OK Dec 13 15:11:48.411230 ignition[716]: parsing config with SHA512: 2e4642c1203627cd2bb63915c782f43818e2346876b047f7b75716d0d8b3a97099635980bece40ae414d9ef009702443a9e49dc37e60923158b6c6b994c47a15 Dec 13 15:11:48.415402 systemd[1]: Finished dracut-pre-mount.service. Dec 13 15:11:48.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 15:11:48.418336 unknown[716]: fetched base config from "system" Dec 13 15:11:48.418768 unknown[716]: fetched base config from "system" Dec 13 15:11:48.419432 unknown[716]: fetched user config from "openstack" Dec 13 15:11:48.419741 ignition[716]: fetch: fetch complete Dec 13 15:11:48.419746 ignition[716]: fetch: fetch passed Dec 13 15:11:48.419807 ignition[716]: Ignition finished successfully Dec 13 15:11:48.422074 systemd[1]: Finished ignition-fetch.service. Dec 13 15:11:48.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.423227 systemd[1]: Starting ignition-kargs.service... Dec 13 15:11:48.432445 ignition[739]: Ignition 2.14.0 Dec 13 15:11:48.432984 ignition[739]: Stage: kargs Dec 13 15:11:48.433412 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:48.433887 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:48.434876 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:48.436329 ignition[739]: kargs: kargs passed Dec 13 15:11:48.436777 ignition[739]: Ignition finished successfully Dec 13 15:11:48.437916 systemd[1]: Finished ignition-kargs.service. Dec 13 15:11:48.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.439091 systemd[1]: Starting ignition-disks.service... 
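Each Ignition stage above logs the SHA512 digest of the config it parses (the repeated ce918c… value is the digest of the stock base.ign). The same kind of digest can be reproduced with coreutils; the config content below is an illustrative stand-in, not the real base.ign:

```shell
# Reproduce the style of digest Ignition logs when parsing a config.
# The JSON here is a placeholder, so its digest will not match the log's.
cfg=$(mktemp)
printf '{"ignition": {"version": "2.14.0"}}' > "$cfg"
digest=$(sha512sum "$cfg" | cut -d' ' -f1)
echo "parsing config with SHA512: $digest"
```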
Dec 13 15:11:48.446819 ignition[744]: Ignition 2.14.0 Dec 13 15:11:48.446827 ignition[744]: Stage: disks Dec 13 15:11:48.446938 ignition[744]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:48.446954 ignition[744]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:48.447838 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:48.448668 ignition[744]: disks: disks passed Dec 13 15:11:48.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.449356 systemd[1]: Finished ignition-disks.service. Dec 13 15:11:48.448710 ignition[744]: Ignition finished successfully Dec 13 15:11:48.450041 systemd[1]: Reached target initrd-root-device.target. Dec 13 15:11:48.450387 systemd[1]: Reached target local-fs-pre.target. Dec 13 15:11:48.450715 systemd[1]: Reached target local-fs.target. Dec 13 15:11:48.451051 systemd[1]: Reached target sysinit.target. Dec 13 15:11:48.451331 systemd[1]: Reached target basic.target. Dec 13 15:11:48.452343 systemd[1]: Starting systemd-fsck-root.service... Dec 13 15:11:48.466256 systemd-fsck[751]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 15:11:48.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.469008 systemd[1]: Finished systemd-fsck-root.service. Dec 13 15:11:48.470074 systemd[1]: Mounting sysroot.mount... Dec 13 15:11:48.479947 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 15:11:48.480398 systemd[1]: Mounted sysroot.mount. 
Dec 13 15:11:48.480818 systemd[1]: Reached target initrd-root-fs.target. Dec 13 15:11:48.482447 systemd[1]: Mounting sysroot-usr.mount... Dec 13 15:11:48.483312 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 15:11:48.484016 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 15:11:48.484458 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 15:11:48.484507 systemd[1]: Reached target ignition-diskful.target. Dec 13 15:11:48.488734 systemd[1]: Mounted sysroot-usr.mount. Dec 13 15:11:48.489866 systemd[1]: Starting initrd-setup-root.service... Dec 13 15:11:48.496343 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 15:11:48.509202 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Dec 13 15:11:48.521378 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 15:11:48.530821 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 15:11:48.570528 systemd[1]: Finished initrd-setup-root.service. Dec 13 15:11:48.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.571782 systemd[1]: Starting ignition-mount.service... Dec 13 15:11:48.574556 systemd[1]: Starting sysroot-boot.service... Dec 13 15:11:48.580889 bash[805]: umount: /sysroot/usr/share/oem: not mounted. 
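The `cut: /sysroot/etc/passwd: No such file or directory` lines above come from initrd-setup-root seeding the account databases before /sysroot is fully populated, so the files do not exist yet. The underlying operation is a colon-delimited field extraction along these lines (sample passwd content is illustrative):

```shell
# Extract login names (field 1) from a passwd-format file, the kind of
# extraction initrd-setup-root attempts against /sysroot/etc/passwd.
pw=$(mktemp)
printf 'root:x:0:0:root:/root:/bin/bash\ncore:x:500:500::/home/core:/bin/bash\n' > "$pw"
cut -d: -f1 "$pw"
```

When the file is missing, as in the log, cut prints the error shown and the unit continues; initrd-setup-root still finishes successfully a few lines later.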
Dec 13 15:11:48.598808 coreos-metadata[757]: Dec 13 15:11:48.598 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 15:11:48.599592 ignition[807]: INFO : Ignition 2.14.0 Dec 13 15:11:48.599592 ignition[807]: INFO : Stage: mount Dec 13 15:11:48.599592 ignition[807]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:48.599592 ignition[807]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:48.604415 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:48.604415 ignition[807]: INFO : mount: mount passed Dec 13 15:11:48.604415 ignition[807]: INFO : Ignition finished successfully Dec 13 15:11:48.606676 systemd[1]: Finished ignition-mount.service. Dec 13 15:11:48.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.613037 systemd[1]: Finished sysroot-boot.service. Dec 13 15:11:48.615821 coreos-metadata[757]: Dec 13 15:11:48.613 INFO Fetch successful Dec 13 15:11:48.615821 coreos-metadata[757]: Dec 13 15:11:48.613 INFO wrote hostname srv-jc9g4.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 15:11:48.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:48.618547 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 15:11:48.618711 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 15:11:48.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:48.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:49.112698 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 15:11:49.125955 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (814) Dec 13 15:11:49.129694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 15:11:49.129728 kernel: BTRFS info (device vda6): using free space tree Dec 13 15:11:49.129745 kernel: BTRFS info (device vda6): has skinny extents Dec 13 15:11:49.137706 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 15:11:49.140640 systemd[1]: Starting ignition-files.service... Dec 13 15:11:49.164031 ignition[834]: INFO : Ignition 2.14.0 Dec 13 15:11:49.164031 ignition[834]: INFO : Stage: files Dec 13 15:11:49.164031 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:49.164031 ignition[834]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:49.170239 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:49.170239 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Dec 13 15:11:49.170239 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 15:11:49.170239 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 15:11:49.170239 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 15:11:49.175789 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 15:11:49.175789 ignition[834]: INFO : files: ensureUsers: op(2): 
[finished] adding ssh keys to user "core" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:11:49.175789 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 15:11:49.174318 unknown[834]: wrote ssh authorized keys file for user: core Dec 13 15:11:49.739196 systemd-networkd[714]: eth0: Gained IPv6LL Dec 13 15:11:49.798608 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 15:11:51.250241 systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:17e5:24:19ff:fef4:5f96/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:17e5:24:19ff:fef4:5f96/64 assigned by NDisc. 
Dec 13 15:11:51.250263 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 15:11:51.929990 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 15:11:51.932833 ignition[834]: INFO : files: op(7): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:11:51.932833 ignition[834]: INFO : files: op(7): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 15:11:51.932833 ignition[834]: INFO : files: op(8): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:11:51.932833 ignition[834]: INFO : files: op(8): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 15:11:51.944915 ignition[834]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:11:51.948783 ignition[834]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 15:11:51.948783 ignition[834]: INFO : files: files passed Dec 13 15:11:51.948783 ignition[834]: INFO : Ignition finished successfully Dec 13 15:11:51.958036 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 15:11:51.958065 kernel: audit: type=1130 audit(1734102711.949:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.948794 systemd[1]: Finished ignition-files.service. Dec 13 15:11:51.951975 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
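The networkd hint above suggests pinning the NDisc-generated address with IPv6Token=. A sketch of a .network drop-in under the stated assumption that the interface identifier 24:19ff:fef4:5f96 from the logged address is the one to keep (on newer systemd versions this setting has moved to Token= in the [IPv6AcceptRA] section):

```ini
# /etc/systemd/network/10-eth0.network (illustrative fragment)
[Match]
Name=eth0

[Network]
DHCP=yes
IPv6Token=::24:19ff:fef4:5f96
```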
Dec 13 15:11:51.960230 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 15:11:51.961883 systemd[1]: Starting ignition-quench.service... Dec 13 15:11:51.966955 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 15:11:51.968712 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 15:11:51.973041 kernel: audit: type=1130 audit(1734102711.968:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.970447 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 15:11:51.982208 kernel: audit: type=1130 audit(1734102711.973:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.982259 kernel: audit: type=1131 audit(1734102711.973:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:51.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:51.970656 systemd[1]: Finished ignition-quench.service. Dec 13 15:11:51.974259 systemd[1]: Reached target ignition-complete.target. Dec 13 15:11:51.981549 systemd[1]: Starting initrd-parse-etc.service... Dec 13 15:11:52.003193 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 15:11:52.003312 systemd[1]: Finished initrd-parse-etc.service. Dec 13 15:11:52.010132 kernel: audit: type=1130 audit(1734102712.003:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.010169 kernel: audit: type=1131 audit(1734102712.003:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.004286 systemd[1]: Reached target initrd-fs.target. Dec 13 15:11:52.010440 systemd[1]: Reached target initrd.target. Dec 13 15:11:52.012388 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 15:11:52.013486 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 15:11:52.027408 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 15:11:52.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.028813 systemd[1]: Starting initrd-cleanup.service... 
Dec 13 15:11:52.032770 kernel: audit: type=1130 audit(1734102712.026:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.039434 systemd[1]: Stopped target nss-lookup.target. Dec 13 15:11:52.040414 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 15:11:52.041245 systemd[1]: Stopped target timers.target. Dec 13 15:11:52.042028 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 15:11:52.042579 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 15:11:52.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.045979 kernel: audit: type=1131 audit(1734102712.042:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.046095 systemd[1]: Stopped target initrd.target. Dec 13 15:11:52.046575 systemd[1]: Stopped target basic.target. Dec 13 15:11:52.048018 systemd[1]: Stopped target ignition-complete.target. Dec 13 15:11:52.058194 systemd[1]: Stopped target ignition-diskful.target. Dec 13 15:11:52.059409 systemd[1]: Stopped target initrd-root-device.target. Dec 13 15:11:52.060821 systemd[1]: Stopped target remote-fs.target. Dec 13 15:11:52.062110 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 15:11:52.063382 systemd[1]: Stopped target sysinit.target. Dec 13 15:11:52.064608 systemd[1]: Stopped target local-fs.target. Dec 13 15:11:52.065753 systemd[1]: Stopped target local-fs-pre.target. Dec 13 15:11:52.066648 systemd[1]: Stopped target swap.target. Dec 13 15:11:52.067534 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 15:11:52.073406 kernel: audit: type=1131 audit(1734102712.067:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.067699 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 15:11:52.068576 systemd[1]: Stopped target cryptsetup.target. Dec 13 15:11:52.073822 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 15:11:52.078636 kernel: audit: type=1131 audit(1734102712.075:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.073989 systemd[1]: Stopped dracut-initqueue.service. Dec 13 15:11:52.075971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 15:11:52.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.076355 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 15:11:52.080258 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 15:11:52.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.080605 systemd[1]: Stopped ignition-files.service. 
Dec 13 15:11:52.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.084771 systemd[1]: Stopping ignition-mount.service... Dec 13 15:11:52.086852 systemd[1]: Stopping sysroot-boot.service... Dec 13 15:11:52.087502 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 15:11:52.087794 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 15:11:52.088667 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 15:11:52.100323 ignition[872]: INFO : Ignition 2.14.0 Dec 13 15:11:52.100323 ignition[872]: INFO : Stage: umount Dec 13 15:11:52.100323 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 15:11:52.100323 ignition[872]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 15:11:52.100323 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 15:11:52.100323 ignition[872]: INFO : umount: umount passed Dec 13 15:11:52.100323 ignition[872]: INFO : Ignition finished successfully Dec 13 15:11:52.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:52.088864 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 15:11:52.093555 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 15:11:52.093710 systemd[1]: Finished initrd-cleanup.service. Dec 13 15:11:52.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.106279 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 15:11:52.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.106393 systemd[1]: Stopped ignition-mount.service. Dec 13 15:11:52.107862 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 15:11:52.108224 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 15:11:52.108263 systemd[1]: Stopped ignition-disks.service. Dec 13 15:11:52.108652 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 15:11:52.108686 systemd[1]: Stopped ignition-kargs.service. Dec 13 15:11:52.109082 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Dec 13 15:11:52.109117 systemd[1]: Stopped ignition-fetch.service. Dec 13 15:11:52.109484 systemd[1]: Stopped target network.target. Dec 13 15:11:52.109807 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 15:11:52.109844 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 15:11:52.110233 systemd[1]: Stopped target paths.target. Dec 13 15:11:52.111049 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 15:11:52.117986 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 15:11:52.118437 systemd[1]: Stopped target slices.target. Dec 13 15:11:52.119116 systemd[1]: Stopped target sockets.target. Dec 13 15:11:52.119806 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 15:11:52.119836 systemd[1]: Closed iscsid.socket. Dec 13 15:11:52.120450 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 15:11:52.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.120477 systemd[1]: Closed iscsiuio.socket. Dec 13 15:11:52.121105 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 15:11:52.121160 systemd[1]: Stopped ignition-setup.service. Dec 13 15:11:52.121952 systemd[1]: Stopping systemd-networkd.service... Dec 13 15:11:52.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.124833 systemd[1]: Stopping systemd-resolved.service... Dec 13 15:11:52.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:52.127376 systemd-networkd[714]: eth0: DHCPv6 lease lost Dec 13 15:11:52.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.136000 audit: BPF prog-id=6 op=UNLOAD Dec 13 15:11:52.127780 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 15:11:52.137000 audit: BPF prog-id=9 op=UNLOAD Dec 13 15:11:52.130679 systemd[1]: Stopped sysroot-boot.service. Dec 13 15:11:52.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.133370 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 15:11:52.133596 systemd[1]: Stopped systemd-resolved.service. Dec 13 15:11:52.135298 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 15:11:52.135409 systemd[1]: Stopped systemd-networkd.service. Dec 13 15:11:52.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.137177 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 15:11:52.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.137213 systemd[1]: Closed systemd-networkd.socket. Dec 13 15:11:52.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.138122 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 15:11:52.138191 systemd[1]: Stopped initrd-setup-root.service. Dec 13 15:11:52.140518 systemd[1]: Stopping network-cleanup.service... Dec 13 15:11:52.141024 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 15:11:52.141100 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 15:11:52.145656 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 15:11:52.145712 systemd[1]: Stopped systemd-sysctl.service. Dec 13 15:11:52.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.146536 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 15:11:52.146582 systemd[1]: Stopped systemd-modules-load.service. Dec 13 15:11:52.147278 systemd[1]: Stopping systemd-udevd.service... Dec 13 15:11:52.153651 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 15:11:52.156818 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 15:11:52.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.156903 systemd[1]: Stopped network-cleanup.service. Dec 13 15:11:52.160637 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 15:11:52.160755 systemd[1]: Stopped systemd-udevd.service. Dec 13 15:11:52.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.161847 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Dec 13 15:11:52.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.161884 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 15:11:52.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.162340 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 15:11:52.162369 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 15:11:52.163788 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 15:11:52.163900 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 15:11:52.165323 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 15:11:52.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.165410 systemd[1]: Stopped dracut-cmdline.service. Dec 13 15:11:52.166414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 15:11:52.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.166493 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 15:11:52.169352 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 15:11:52.170577 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 15:11:52.170706 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 15:11:52.172237 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 15:11:52.172348 systemd[1]: Stopped kmod-static-nodes.service. 
Dec 13 15:11:52.173433 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 15:11:52.173533 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 15:11:52.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.181546 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 15:11:52.187820 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 15:11:52.188077 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 15:11:52.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:52.197590 systemd[1]: Reached target initrd-switch-root.target. Dec 13 15:11:52.198880 systemd[1]: Starting initrd-switch-root.service... Dec 13 15:11:52.219116 systemd[1]: Switching root. Dec 13 15:11:52.241358 iscsid[725]: iscsid shutting down. Dec 13 15:11:52.242027 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Dec 13 15:11:52.242084 systemd-journald[202]: Journal stopped Dec 13 15:11:55.365751 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 15:11:55.365813 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 15:11:55.365843 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 15:11:55.365857 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 15:11:55.365873 kernel: SELinux: policy capability open_perms=1 Dec 13 15:11:55.365886 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 15:11:55.365906 kernel: SELinux: policy capability always_check_network=0 Dec 13 15:11:55.366965 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 15:11:55.366991 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 15:11:55.367009 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 15:11:55.367023 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 15:11:55.367037 systemd[1]: Successfully loaded SELinux policy in 47.411ms. Dec 13 15:11:55.367059 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.133ms. Dec 13 15:11:55.367074 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 15:11:55.367090 systemd[1]: Detected virtualization kvm. Dec 13 15:11:55.367107 systemd[1]: Detected architecture x86-64. Dec 13 15:11:55.367120 systemd[1]: Detected first boot. Dec 13 15:11:55.367133 systemd[1]: Hostname set to . Dec 13 15:11:55.367147 systemd[1]: Initializing machine ID from VM UUID. Dec 13 15:11:55.367160 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 15:11:55.367173 systemd[1]: Populated /etc with preset unit settings. Dec 13 15:11:55.367188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 15:11:55.367208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 15:11:55.367224 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:11:55.367239 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 15:11:55.367252 systemd[1]: Stopped iscsiuio.service. Dec 13 15:11:55.367265 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 15:11:55.367278 systemd[1]: Stopped iscsid.service. Dec 13 15:11:55.367292 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 15:11:55.367308 systemd[1]: Stopped initrd-switch-root.service. Dec 13 15:11:55.367323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 15:11:55.367337 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 15:11:55.367350 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 15:11:55.367364 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 15:11:55.367380 systemd[1]: Created slice system-getty.slice. Dec 13 15:11:55.367396 systemd[1]: Created slice system-modprobe.slice. Dec 13 15:11:55.367410 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 15:11:55.367423 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 15:11:55.367438 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 15:11:55.367452 systemd[1]: Created slice user.slice. Dec 13 15:11:55.367465 systemd[1]: Started systemd-ask-password-console.path. Dec 13 15:11:55.367481 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 15:11:55.367495 systemd[1]: Set up automount boot.automount. Dec 13 15:11:55.367508 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Dec 13 15:11:55.367521 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 15:11:55.367536 systemd[1]: Stopped target initrd-fs.target. Dec 13 15:11:55.367550 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 15:11:55.367564 systemd[1]: Reached target integritysetup.target. Dec 13 15:11:55.367578 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 15:11:55.367592 systemd[1]: Reached target remote-fs.target. Dec 13 15:11:55.367608 systemd[1]: Reached target slices.target. Dec 13 15:11:55.367621 systemd[1]: Reached target swap.target. Dec 13 15:11:55.367640 systemd[1]: Reached target torcx.target. Dec 13 15:11:55.367653 systemd[1]: Reached target veritysetup.target. Dec 13 15:11:55.367666 systemd[1]: Listening on systemd-coredump.socket. Dec 13 15:11:55.367680 systemd[1]: Listening on systemd-initctl.socket. Dec 13 15:11:55.367693 systemd[1]: Listening on systemd-networkd.socket. Dec 13 15:11:55.367706 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 15:11:55.367721 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 15:11:55.367734 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 15:11:55.367750 systemd[1]: Mounting dev-hugepages.mount... Dec 13 15:11:55.367763 systemd[1]: Mounting dev-mqueue.mount... Dec 13 15:11:55.367777 systemd[1]: Mounting media.mount... Dec 13 15:11:55.367791 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:11:55.367804 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 15:11:55.367818 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 15:11:55.367838 systemd[1]: Mounting tmp.mount... Dec 13 15:11:55.367852 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 15:11:55.367867 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:11:55.367883 systemd[1]: Starting kmod-static-nodes.service... 
Dec 13 15:11:55.367896 systemd[1]: Starting modprobe@configfs.service... Dec 13 15:11:55.367910 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:11:55.371346 systemd[1]: Starting modprobe@drm.service... Dec 13 15:11:55.371382 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:11:55.371397 systemd[1]: Starting modprobe@fuse.service... Dec 13 15:11:55.371410 systemd[1]: Starting modprobe@loop.service... Dec 13 15:11:55.371424 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 15:11:55.371437 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 15:11:55.371455 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 15:11:55.371468 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 15:11:55.371482 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 15:11:55.371496 systemd[1]: Stopped systemd-journald.service. Dec 13 15:11:55.371509 systemd[1]: Starting systemd-journald.service... Dec 13 15:11:55.371523 systemd[1]: Starting systemd-modules-load.service... Dec 13 15:11:55.371537 systemd[1]: Starting systemd-network-generator.service... Dec 13 15:11:55.371550 systemd[1]: Starting systemd-remount-fs.service... Dec 13 15:11:55.371563 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 15:11:55.371580 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 15:11:55.371593 systemd[1]: Stopped verity-setup.service. Dec 13 15:11:55.371607 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:11:55.371620 kernel: fuse: init (API version 7.34) Dec 13 15:11:55.371634 systemd[1]: Mounted dev-hugepages.mount. Dec 13 15:11:55.371648 systemd[1]: Mounted dev-mqueue.mount. Dec 13 15:11:55.371661 kernel: loop: module loaded Dec 13 15:11:55.371674 systemd[1]: Mounted media.mount. Dec 13 15:11:55.371688 systemd[1]: Mounted sys-kernel-debug.mount. 
Dec 13 15:11:55.371703 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 15:11:55.371716 systemd[1]: Mounted tmp.mount. Dec 13 15:11:55.371731 systemd[1]: Finished kmod-static-nodes.service. Dec 13 15:11:55.371744 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 15:11:55.371757 systemd[1]: Finished modprobe@configfs.service. Dec 13 15:11:55.371773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:11:55.371786 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 15:11:55.371799 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 15:11:55.371813 systemd[1]: Finished modprobe@drm.service. Dec 13 15:11:55.371834 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:11:55.371848 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:11:55.371861 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 15:11:55.371875 systemd[1]: Finished modprobe@fuse.service. Dec 13 15:11:55.371889 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:11:55.371904 systemd[1]: Finished modprobe@loop.service. Dec 13 15:11:55.371917 systemd[1]: Finished systemd-modules-load.service. Dec 13 15:11:55.373646 systemd[1]: Finished systemd-network-generator.service. Dec 13 15:11:55.373665 systemd[1]: Finished systemd-remount-fs.service. Dec 13 15:11:55.373679 systemd[1]: Reached target network-pre.target. Dec 13 15:11:55.373693 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 15:11:55.373707 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 15:11:55.373721 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 15:11:55.373744 systemd-journald[978]: Journal started Dec 13 15:11:55.373811 systemd-journald[978]: Runtime Journal (/run/log/journal/d36dc4fea9874732b8ebc99935df98a7) is 4.7M, max 38.1M, 33.3M free. 
Dec 13 15:11:52.402000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 15:11:52.471000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 15:11:52.471000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 15:11:52.471000 audit: BPF prog-id=10 op=LOAD Dec 13 15:11:52.471000 audit: BPF prog-id=10 op=UNLOAD Dec 13 15:11:52.471000 audit: BPF prog-id=11 op=LOAD Dec 13 15:11:52.471000 audit: BPF prog-id=11 op=UNLOAD Dec 13 15:11:52.561000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 15:11:52.561000 audit[904]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:11:52.561000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 15:11:52.564000 audit[904]: AVC avc: denied { associate } for pid=904 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 15:11:52.564000 audit[904]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=887 pid=904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:11:52.564000 audit: CWD cwd="/" Dec 13 15:11:52.564000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:11:52.564000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 15:11:52.564000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 15:11:55.154000 audit: BPF prog-id=12 op=LOAD Dec 13 15:11:55.154000 audit: BPF prog-id=3 op=UNLOAD Dec 13 15:11:55.154000 audit: BPF prog-id=13 op=LOAD Dec 13 15:11:55.154000 audit: BPF prog-id=14 op=LOAD Dec 13 15:11:55.154000 audit: BPF prog-id=4 op=UNLOAD Dec 13 15:11:55.154000 audit: BPF prog-id=5 op=UNLOAD Dec 13 15:11:55.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:55.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.162000 audit: BPF prog-id=12 op=UNLOAD Dec 13 15:11:55.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:55.277000 audit: BPF prog-id=15 op=LOAD Dec 13 15:11:55.277000 audit: BPF prog-id=16 op=LOAD Dec 13 15:11:55.277000 audit: BPF prog-id=17 op=LOAD Dec 13 15:11:55.277000 audit: BPF prog-id=13 op=UNLOAD Dec 13 15:11:55.277000 audit: BPF prog-id=14 op=UNLOAD Dec 13 15:11:55.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:55.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:55.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.360000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 15:11:55.360000 audit[978]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc0b63b610 a2=4000 a3=7ffc0b63b6ac items=0 ppid=1 pid=978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:11:55.360000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 15:11:55.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:55.152013 systemd[1]: Queued start job for default target multi-user.target. Dec 13 15:11:52.560737 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 15:11:55.152027 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 15:11:52.561189 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 15:11:55.156160 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 15:11:52.561213 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 15:11:52.561249 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 15:11:52.561260 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 15:11:52.561296 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 15:11:52.561310 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 15:11:52.561527 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 15:11:52.561565 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 15:11:52.561580 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 15:11:52.562170 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 15:11:52.562206 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 15:11:52.562226 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 15:11:55.380427 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 15:11:55.380452 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:11:52.562241 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 15:11:52.562259 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 15:11:52.562274 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 15:11:54.799363 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 15:11:54.799624 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 15:11:55.383086 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 15:11:54.799734 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:11:54.799968 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 15:11:54.800025 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 15:11:54.800088 /usr/lib/systemd/system-generators/torcx-generator[904]: time="2024-12-13T15:11:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 15:11:55.385940 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 15:11:55.388089 systemd[1]: Starting systemd-sysctl.service...
Dec 13 15:11:55.391978 systemd[1]: Started systemd-journald.service.
Dec 13 15:11:55.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.392818 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 15:11:55.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.393346 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 15:11:55.393774 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 15:11:55.395919 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 15:11:55.400504 systemd[1]: Starting systemd-sysusers.service...
Dec 13 15:11:55.406860 systemd[1]: Finished systemd-sysctl.service.
Dec 13 15:11:55.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.414266 systemd[1]: Finished systemd-random-seed.service.
Dec 13 15:11:55.414680 systemd[1]: Reached target first-boot-complete.target.
Dec 13 15:11:55.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.420403 systemd-journald[978]: Time spent on flushing to /var/log/journal/d36dc4fea9874732b8ebc99935df98a7 is 51.672ms for 1296 entries.
Dec 13 15:11:55.420403 systemd-journald[978]: System Journal (/var/log/journal/d36dc4fea9874732b8ebc99935df98a7) is 8.0M, max 584.8M, 576.8M free.
Dec 13 15:11:55.478156 systemd-journald[978]: Received client request to flush runtime journal.
Dec 13 15:11:55.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.437238 systemd[1]: Finished systemd-sysusers.service.
Dec 13 15:11:55.439002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 15:11:55.460721 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 15:11:55.480083 udevadm[1015]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 15:11:55.462439 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 15:11:55.473764 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 15:11:55.479125 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 15:11:55.950879 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 15:11:55.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:55.952000 audit: BPF prog-id=18 op=LOAD
Dec 13 15:11:55.952000 audit: BPF prog-id=19 op=LOAD
Dec 13 15:11:55.952000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 15:11:55.952000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 15:11:55.955097 systemd[1]: Starting systemd-udevd.service...
Dec 13 15:11:55.975011 systemd-udevd[1017]: Using default interface naming scheme 'v252'.
Dec 13 15:11:55.995954 systemd[1]: Started systemd-udevd.service.
Dec 13 15:11:55.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:56.002000 audit: BPF prog-id=20 op=LOAD
Dec 13 15:11:56.005541 systemd[1]: Starting systemd-networkd.service...
Dec 13 15:11:56.015000 audit: BPF prog-id=21 op=LOAD
Dec 13 15:11:56.015000 audit: BPF prog-id=22 op=LOAD
Dec 13 15:11:56.015000 audit: BPF prog-id=23 op=LOAD
Dec 13 15:11:56.017496 systemd[1]: Starting systemd-userdbd.service...
Dec 13 15:11:56.044782 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 15:11:56.069332 systemd[1]: Started systemd-userdbd.service.
Dec 13 15:11:56.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:56.127810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 15:11:56.130965 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 15:11:56.134940 kernel: ACPI: button: Power Button [PWRF]
Dec 13 15:11:56.144302 systemd-networkd[1032]: lo: Link UP
Dec 13 15:11:56.144311 systemd-networkd[1032]: lo: Gained carrier
Dec 13 15:11:56.144737 systemd-networkd[1032]: Enumeration completed
Dec 13 15:11:56.144837 systemd[1]: Started systemd-networkd.service.
Dec 13 15:11:56.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:56.145452 systemd-networkd[1032]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 15:11:56.146636 systemd-networkd[1032]: eth0: Link UP
Dec 13 15:11:56.146645 systemd-networkd[1032]: eth0: Gained carrier
Dec 13 15:11:56.166060 systemd-networkd[1032]: eth0: DHCPv4 address 10.244.95.150/30, gateway 10.244.95.149 acquired from 10.244.95.149
Dec 13 15:11:56.178918 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 15:11:56.175000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 15:11:56.175000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5592d4518780 a1=337fc a2=7f9107febbc5 a3=5 items=110 ppid=1017 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 15:11:56.175000 audit: CWD cwd="/"
Dec 13 15:11:56.175000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=1 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=2 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=3 name=(null) inode=15278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=4 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=5 name=(null) inode=15279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=6 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=7 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=8 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=9 name=(null) inode=15281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=10 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=11 name=(null) inode=15282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=12 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=13 name=(null) inode=15283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=14 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=15 name=(null) inode=15284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=16 name=(null) inode=15280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=17 name=(null) inode=15285 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=18 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=19 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=20 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=21 name=(null) inode=15287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=22 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=23 name=(null) inode=15288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=24 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=25 name=(null) inode=15289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=26 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=27 name=(null) inode=15290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=28 name=(null) inode=15286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=29 name=(null) inode=15291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=30 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=31 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=32 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=33 name=(null) inode=15293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=34 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=35 name=(null) inode=15294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=36 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=37 name=(null) inode=15295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=38 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=39 name=(null) inode=15296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=40 name=(null) inode=15292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=41 name=(null) inode=15297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=42 name=(null) inode=15277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=43 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=44 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=45 name=(null) inode=15299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=46 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=47 name=(null) inode=15300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=48 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=49 name=(null) inode=15301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=50 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=51 name=(null) inode=15302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=52 name=(null) inode=15298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=53 name=(null) inode=15303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=55 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=56 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=57 name=(null) inode=15305 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=58 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=59 name=(null) inode=15306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=60 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=61 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=62 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=63 name=(null) inode=15308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=64 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=65 name=(null) inode=15309 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=66 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=67 name=(null) inode=15310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=68 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=69 name=(null) inode=15311 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=70 name=(null) inode=15307 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=71 name=(null) inode=15312 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=72 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=73 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=74 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=75 name=(null) inode=15314 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=76 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=77 name=(null) inode=15315 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=78 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=79 name=(null) inode=15316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=80 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=81 name=(null) inode=15317 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=82 name=(null) inode=15313 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=83 name=(null) inode=15318 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=84 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=85 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=86 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=87 name=(null) inode=15320 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=88 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=89 name=(null) inode=15321 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=90 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=91 name=(null) inode=15322 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=92 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=93 name=(null) inode=15323 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=94 name=(null) inode=15319 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=95 name=(null) inode=15324 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=96 name=(null) inode=15304 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=97 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=98 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=99 name=(null) inode=15326 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=100 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=101 name=(null) inode=15327 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=102 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=103 name=(null) inode=15328 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=104 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=105 name=(null) inode=15329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=106 name=(null) inode=15325 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=107 name=(null) inode=15330 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PATH item=109 name=(null) inode=15331 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 15:11:56.175000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 15:11:56.229947 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 15:11:56.234949 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 15:11:56.244111 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 15:11:56.244250 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 15:11:56.374595 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 15:11:56.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 15:11:56.377430 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 15:11:56.408096 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:11:56.442824 systemd[1]: Finished lvm2-activation-early.service. Dec 13 15:11:56.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.444183 systemd[1]: Reached target cryptsetup.target. Dec 13 15:11:56.447687 systemd[1]: Starting lvm2-activation.service... Dec 13 15:11:56.453673 lvm[1047]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 15:11:56.476483 systemd[1]: Finished lvm2-activation.service. Dec 13 15:11:56.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.477794 systemd[1]: Reached target local-fs-pre.target. Dec 13 15:11:56.478825 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 15:11:56.478894 systemd[1]: Reached target local-fs.target. Dec 13 15:11:56.479851 systemd[1]: Reached target machines.target. Dec 13 15:11:56.483370 systemd[1]: Starting ldconfig.service... Dec 13 15:11:56.484952 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:11:56.485040 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:56.487231 systemd[1]: Starting systemd-boot-update.service... Dec 13 15:11:56.490268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 15:11:56.494643 systemd[1]: Starting systemd-machine-id-commit.service... 
Dec 13 15:11:56.496614 systemd[1]: Starting systemd-sysext.service... Dec 13 15:11:56.506123 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1049 (bootctl) Dec 13 15:11:56.507304 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 15:11:56.515683 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 15:11:56.520594 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 15:11:56.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.525321 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 15:11:56.525493 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 15:11:56.542892 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 15:11:56.543488 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 15:11:56.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.548978 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 15:11:56.581014 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 15:11:56.604086 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 15:11:56.614181 (sd-sysext)[1061]: Using extensions 'kubernetes'. Dec 13 15:11:56.615991 systemd-fsck[1058]: fsck.fat 4.2 (2021-01-31) Dec 13 15:11:56.615991 systemd-fsck[1058]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 15:11:56.616724 (sd-sysext)[1061]: Merged extensions into '/usr'. Dec 13 15:11:56.620268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 15:11:56.621911 systemd[1]: Mounting boot.mount... 
Dec 13 15:11:56.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.637755 systemd[1]: Mounted boot.mount. Dec 13 15:11:56.640001 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:11:56.641492 systemd[1]: Mounting usr-share-oem.mount... Dec 13 15:11:56.645213 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:11:56.646573 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:11:56.649068 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:11:56.650791 systemd[1]: Starting modprobe@loop.service... Dec 13 15:11:56.651288 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:11:56.651436 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:56.651615 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 15:11:56.654012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:11:56.654142 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 15:11:56.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:56.655841 systemd[1]: Finished systemd-boot-update.service. Dec 13 15:11:56.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.657233 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:11:56.657351 systemd[1]: Finished modprobe@loop.service. Dec 13 15:11:56.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.660300 systemd[1]: Mounted usr-share-oem.mount. Dec 13 15:11:56.660962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:11:56.661079 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:11:56.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.662410 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:11:56.662456 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 15:11:56.663192 systemd[1]: Finished systemd-sysext.service. 
Dec 13 15:11:56.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.664696 systemd[1]: Starting ensure-sysext.service... Dec 13 15:11:56.666779 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 15:11:56.675181 systemd[1]: Reloading. Dec 13 15:11:56.697250 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 15:11:56.698364 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 15:11:56.701499 systemd-tmpfiles[1069]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 15:11:56.764138 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2024-12-13T15:11:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 15:11:56.764166 /usr/lib/systemd/system-generators/torcx-generator[1088]: time="2024-12-13T15:11:56Z" level=info msg="torcx already run" Dec 13 15:11:56.826844 ldconfig[1048]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 15:11:56.874071 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 15:11:56.874089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 15:11:56.893115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 15:11:56.947000 audit: BPF prog-id=24 op=LOAD Dec 13 15:11:56.947000 audit: BPF prog-id=21 op=UNLOAD Dec 13 15:11:56.948000 audit: BPF prog-id=25 op=LOAD Dec 13 15:11:56.948000 audit: BPF prog-id=26 op=LOAD Dec 13 15:11:56.948000 audit: BPF prog-id=22 op=UNLOAD Dec 13 15:11:56.948000 audit: BPF prog-id=23 op=UNLOAD Dec 13 15:11:56.948000 audit: BPF prog-id=27 op=LOAD Dec 13 15:11:56.948000 audit: BPF prog-id=28 op=LOAD Dec 13 15:11:56.948000 audit: BPF prog-id=18 op=UNLOAD Dec 13 15:11:56.948000 audit: BPF prog-id=19 op=UNLOAD Dec 13 15:11:56.948000 audit: BPF prog-id=29 op=LOAD Dec 13 15:11:56.948000 audit: BPF prog-id=15 op=UNLOAD Dec 13 15:11:56.949000 audit: BPF prog-id=30 op=LOAD Dec 13 15:11:56.949000 audit: BPF prog-id=31 op=LOAD Dec 13 15:11:56.949000 audit: BPF prog-id=16 op=UNLOAD Dec 13 15:11:56.949000 audit: BPF prog-id=17 op=UNLOAD Dec 13 15:11:56.949000 audit: BPF prog-id=32 op=LOAD Dec 13 15:11:56.949000 audit: BPF prog-id=20 op=UNLOAD Dec 13 15:11:56.956646 systemd[1]: Finished ldconfig.service. Dec 13 15:11:56.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.958700 kernel: kauditd_printk_skb: 254 callbacks suppressed Dec 13 15:11:56.958781 kernel: audit: type=1130 audit(1734102716.957:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:56.966016 kernel: audit: type=1130 audit(1734102716.963:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.963028 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 15:11:56.969652 systemd[1]: Starting audit-rules.service... Dec 13 15:11:56.972269 systemd[1]: Starting clean-ca-certificates.service... Dec 13 15:11:56.976354 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 15:11:56.976000 audit: BPF prog-id=33 op=LOAD Dec 13 15:11:56.980871 kernel: audit: type=1334 audit(1734102716.976:182): prog-id=33 op=LOAD Dec 13 15:11:56.978757 systemd[1]: Starting systemd-resolved.service... Dec 13 15:11:56.981000 audit: BPF prog-id=34 op=LOAD Dec 13 15:11:56.988136 kernel: audit: type=1334 audit(1734102716.981:183): prog-id=34 op=LOAD Dec 13 15:11:56.983200 systemd[1]: Starting systemd-timesyncd.service... Dec 13 15:11:56.986106 systemd[1]: Starting systemd-update-utmp.service... Dec 13 15:11:56.999968 kernel: audit: type=1127 audit(1734102716.990:184): pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.000045 kernel: audit: type=1130 audit(1734102716.996:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:56.990000 audit[1142]: SYSTEM_BOOT pid=1142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:56.991781 systemd[1]: Finished clean-ca-certificates.service. Dec 13 15:11:56.999126 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:11:57.006337 systemd[1]: Finished systemd-update-utmp.service. Dec 13 15:11:57.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.009938 kernel: audit: type=1130 audit(1734102717.005:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.012747 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.014166 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:11:57.015982 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:11:57.018599 systemd[1]: Starting modprobe@loop.service... Dec 13 15:11:57.020040 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 15:11:57.020174 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:57.020273 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:11:57.021069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:11:57.021208 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:11:57.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.024966 kernel: audit: type=1130 audit(1734102717.020:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.024995 kernel: audit: type=1131 audit(1734102717.020:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.022006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:11:57.022129 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 15:11:57.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:57.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.028394 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:11:57.028494 systemd[1]: Finished modprobe@loop.service. Dec 13 15:11:57.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.031686 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:11:57.031798 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.031951 kernel: audit: type=1130 audit(1734102717.027:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.033368 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.034718 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:11:57.049485 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:11:57.052817 systemd[1]: Starting modprobe@loop.service... Dec 13 15:11:57.053234 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 15:11:57.053399 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:57.053533 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:11:57.054352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:11:57.054461 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 15:11:57.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.056313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:11:57.056427 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 15:11:57.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.057228 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:11:57.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 15:11:57.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.057338 systemd[1]: Finished modprobe@loop.service. Dec 13 15:11:57.061493 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.062802 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 15:11:57.065564 systemd[1]: Starting modprobe@drm.service... Dec 13 15:11:57.067468 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 15:11:57.069848 systemd[1]: Starting modprobe@loop.service... Dec 13 15:11:57.070868 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.071067 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:57.072703 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 15:11:57.073176 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 15:11:57.074512 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 15:11:57.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.075635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 15:11:57.075830 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 15:11:57.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.077079 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 15:11:57.077626 systemd[1]: Finished modprobe@drm.service. Dec 13 15:11:57.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.080773 systemd[1]: Starting systemd-update-done.service... Dec 13 15:11:57.081639 systemd[1]: Finished ensure-sysext.service. Dec 13 15:11:57.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.090270 systemd[1]: Finished systemd-update-done.service. Dec 13 15:11:57.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.091785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 15:11:57.091893 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 15:11:57.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.092332 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 15:11:57.092584 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 15:11:57.092697 systemd[1]: Finished modprobe@loop.service. Dec 13 15:11:57.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.093154 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.108357 systemd[1]: Started systemd-timesyncd.service. Dec 13 15:11:57.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 15:11:57.108808 systemd[1]: Reached target time-set.target. 
Dec 13 15:11:57.109941 augenrules[1168]: No rules Dec 13 15:11:57.108000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 15:11:57.108000 audit[1168]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc1353a4c0 a2=420 a3=0 items=0 ppid=1136 pid=1168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 15:11:57.108000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 15:11:57.110720 systemd[1]: Finished audit-rules.service. Dec 13 15:11:57.130564 systemd-resolved[1140]: Positive Trust Anchors: Dec 13 15:11:57.130578 systemd-resolved[1140]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 15:11:57.130614 systemd-resolved[1140]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 15:11:57.136559 systemd-resolved[1140]: Using system hostname 'srv-jc9g4.gb1.brightbox.com'. Dec 13 15:11:57.138270 systemd[1]: Started systemd-resolved.service. Dec 13 15:11:57.138738 systemd[1]: Reached target network.target. Dec 13 15:11:57.139098 systemd[1]: Reached target nss-lookup.target. Dec 13 15:11:57.139441 systemd[1]: Reached target sysinit.target. Dec 13 15:11:57.139853 systemd[1]: Started motdgen.path. Dec 13 15:11:57.140226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Dec 13 15:11:57.140770 systemd[1]: Started logrotate.timer. Dec 13 15:11:57.141198 systemd[1]: Started mdadm.timer. Dec 13 15:11:57.141517 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 15:11:57.141866 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 15:11:57.141895 systemd[1]: Reached target paths.target. Dec 13 15:11:57.142229 systemd[1]: Reached target timers.target. Dec 13 15:11:57.142794 systemd[1]: Listening on dbus.socket. Dec 13 15:11:57.144215 systemd[1]: Starting docker.socket... Dec 13 15:11:57.147367 systemd[1]: Listening on sshd.socket. Dec 13 15:11:57.147842 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:57.148373 systemd[1]: Listening on docker.socket. Dec 13 15:11:57.148807 systemd[1]: Reached target sockets.target. Dec 13 15:11:57.149163 systemd[1]: Reached target basic.target. Dec 13 15:11:57.149526 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.149555 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 15:11:57.150693 systemd[1]: Starting containerd.service... Dec 13 15:11:57.152952 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 15:11:57.154603 systemd[1]: Starting dbus.service... Dec 13 15:11:57.157260 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 15:11:57.162000 systemd[1]: Starting extend-filesystems.service... Dec 13 15:11:57.163835 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 15:11:57.165512 systemd[1]: Starting motdgen.service... Dec 13 15:11:57.169964 systemd[1]: Starting ssh-key-proc-cmdline.service... 
Dec 13 15:11:57.171598 systemd[1]: Starting sshd-keygen.service... Dec 13 15:11:57.175711 systemd[1]: Starting systemd-logind.service... Dec 13 15:11:57.176168 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 15:11:57.176265 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 15:11:57.176744 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 15:11:57.177550 systemd[1]: Starting update-engine.service... Dec 13 15:11:57.179436 jq[1179]: false Dec 13 15:11:57.180400 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 15:11:57.184205 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 15:11:57.184409 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 15:11:57.184748 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 15:11:57.184886 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 15:11:57.201684 jq[1189]: true Dec 13 15:11:57.226097 dbus-daemon[1178]: [system] SELinux support is enabled Dec 13 15:11:57.226873 jq[1197]: true Dec 13 15:11:57.227252 systemd[1]: Started dbus.service. Dec 13 15:11:57.229865 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 15:11:57.229908 systemd[1]: Reached target system-config.target. Dec 13 15:11:57.230322 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 15:11:57.230342 systemd[1]: Reached target user-config.target. 
Dec 13 15:11:57.232115 dbus-daemon[1178]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1032 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 15:11:57.235868 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 15:11:57.241405 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found loop1
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda1
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda2
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda3
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found usr
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda4
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda6
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda7
Dec 13 15:11:57.264956 extend-filesystems[1180]: Found vda9
Dec 13 15:11:57.264956 extend-filesystems[1180]: Checking size of /dev/vda9
Dec 13 15:11:57.271761 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 15:11:57.284186 update_engine[1187]: I1213 15:11:57.282570 1187 main.cc:92] Flatcar Update Engine starting
Dec 13 15:11:57.271958 systemd[1]: Finished motdgen.service.
Dec 13 15:11:57.287872 update_engine[1187]: I1213 15:11:57.287447 1187 update_check_scheduler.cc:74] Next update check in 6m37s
Dec 13 15:11:57.287428 systemd[1]: Started update-engine.service.
Dec 13 15:11:57.289621 systemd[1]: Started locksmithd.service.
Dec 13 15:11:57.293772 extend-filesystems[1180]: Resized partition /dev/vda9
Dec 13 15:11:57.298431 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 15:11:57.308955 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Dec 13 15:11:57.341412 bash[1230]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 15:11:57.342158 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 15:11:57.363036 env[1193]: time="2024-12-13T15:11:57.362955974Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 15:11:57.371943 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Dec 13 15:11:57.789896 systemd-resolved[1140]: Clock change detected. Flushing caches.
Dec 13 15:11:57.794643 systemd-timesyncd[1141]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Dec 13 15:11:57.794702 systemd-timesyncd[1141]: Initial clock synchronization to Fri 2024-12-13 15:11:57.789822 UTC.
Dec 13 15:11:57.795112 systemd-logind[1185]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 15:11:57.795134 systemd-logind[1185]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 15:11:57.796346 systemd-logind[1185]: New seat seat0.
Dec 13 15:11:57.797405 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 15:11:57.797405 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 8
Dec 13 15:11:57.797405 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Dec 13 15:11:57.803388 extend-filesystems[1180]: Resized filesystem in /dev/vda9
Dec 13 15:11:57.797720 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 15:11:57.797930 systemd[1]: Finished extend-filesystems.service.
Dec 13 15:11:57.803302 systemd[1]: Started systemd-logind.service.
Dec 13 15:11:57.820728 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 15:11:57.820855 systemd[1]: Started systemd-hostnamed.service.
Dec 13 15:11:57.821673 dbus-daemon[1178]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1210 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 15:11:57.824800 systemd[1]: Starting polkit.service...
Dec 13 15:11:57.836580 polkitd[1235]: Started polkitd version 121
Dec 13 15:11:57.848888 env[1193]: time="2024-12-13T15:11:57.848671034Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 15:11:57.849024 env[1193]: time="2024-12-13T15:11:57.849006849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.851100 env[1193]: time="2024-12-13T15:11:57.850797807Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:11:57.851557 env[1193]: time="2024-12-13T15:11:57.851248030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.851720221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.851742030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.851767340Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.851778160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.851871037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.852107267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.852236655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.852253145Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.852299457Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 15:11:57.852436 env[1193]: time="2024-12-13T15:11:57.852311227Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 15:11:57.852410 polkitd[1235]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 15:11:57.852472 polkitd[1235]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 15:11:57.853670 polkitd[1235]: Finished loading, compiling and executing 2 rules
Dec 13 15:11:57.854214 dbus-daemon[1178]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 15:11:57.854370 systemd[1]: Started polkit.service.
Dec 13 15:11:57.855032 polkitd[1235]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 15:11:57.856191 env[1193]: time="2024-12-13T15:11:57.856169151Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 15:11:57.856285 env[1193]: time="2024-12-13T15:11:57.856269515Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 15:11:57.856347 env[1193]: time="2024-12-13T15:11:57.856334957Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 15:11:57.856486 env[1193]: time="2024-12-13T15:11:57.856471789Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.856593 env[1193]: time="2024-12-13T15:11:57.856579547Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.856670 env[1193]: time="2024-12-13T15:11:57.856656911Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.856735 env[1193]: time="2024-12-13T15:11:57.856723023Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.856862 env[1193]: time="2024-12-13T15:11:57.856848972Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.856943 env[1193]: time="2024-12-13T15:11:57.856928930Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.857027 env[1193]: time="2024-12-13T15:11:57.857012232Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.857103 env[1193]: time="2024-12-13T15:11:57.857090954Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.857163 env[1193]: time="2024-12-13T15:11:57.857150798Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 15:11:57.857443 env[1193]: time="2024-12-13T15:11:57.857429105Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 15:11:57.857612 env[1193]: time="2024-12-13T15:11:57.857597462Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 15:11:57.857952 env[1193]: time="2024-12-13T15:11:57.857934606Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 15:11:57.858090 env[1193]: time="2024-12-13T15:11:57.858075871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858224 env[1193]: time="2024-12-13T15:11:57.858182701Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 15:11:57.858361 env[1193]: time="2024-12-13T15:11:57.858349460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858418 env[1193]: time="2024-12-13T15:11:57.858407247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858531 env[1193]: time="2024-12-13T15:11:57.858518539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858613 env[1193]: time="2024-12-13T15:11:57.858600627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858676 env[1193]: time="2024-12-13T15:11:57.858663747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858739 env[1193]: time="2024-12-13T15:11:57.858727085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858810 env[1193]: time="2024-12-13T15:11:57.858798268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858876 env[1193]: time="2024-12-13T15:11:57.858863995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.858999 env[1193]: time="2024-12-13T15:11:57.858986762Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 15:11:57.859171 env[1193]: time="2024-12-13T15:11:57.859156606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.859244 env[1193]: time="2024-12-13T15:11:57.859231083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.859305 env[1193]: time="2024-12-13T15:11:57.859293073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.859365 env[1193]: time="2024-12-13T15:11:57.859352654Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 15:11:57.859438 env[1193]: time="2024-12-13T15:11:57.859423464Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 15:11:57.859509 env[1193]: time="2024-12-13T15:11:57.859497038Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 15:11:57.859589 env[1193]: time="2024-12-13T15:11:57.859575619Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 15:11:57.859696 env[1193]: time="2024-12-13T15:11:57.859678500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 15:11:57.860011 env[1193]: time="2024-12-13T15:11:57.859962011Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 15:11:57.861401 env[1193]: time="2024-12-13T15:11:57.860144157Z" level=info msg="Connect containerd service"
Dec 13 15:11:57.861401 env[1193]: time="2024-12-13T15:11:57.860197925Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 15:11:57.861401 env[1193]: time="2024-12-13T15:11:57.860952314Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 15:11:57.861735 env[1193]: time="2024-12-13T15:11:57.861715022Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 15:11:57.861865 env[1193]: time="2024-12-13T15:11:57.861850942Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 15:11:57.862050 env[1193]: time="2024-12-13T15:11:57.862028831Z" level=info msg="containerd successfully booted in 0.083027s"
Dec 13 15:11:57.862092 systemd[1]: Started containerd.service.
Dec 13 15:11:57.870475 env[1193]: time="2024-12-13T15:11:57.870428143Z" level=info msg="Start subscribing containerd event"
Dec 13 15:11:57.870589 env[1193]: time="2024-12-13T15:11:57.870574163Z" level=info msg="Start recovering state"
Dec 13 15:11:57.870718 env[1193]: time="2024-12-13T15:11:57.870705531Z" level=info msg="Start event monitor"
Dec 13 15:11:57.870806 env[1193]: time="2024-12-13T15:11:57.870792881Z" level=info msg="Start snapshots syncer"
Dec 13 15:11:57.871098 env[1193]: time="2024-12-13T15:11:57.870873812Z" level=info msg="Start cni network conf syncer for default"
Dec 13 15:11:57.871173 env[1193]: time="2024-12-13T15:11:57.871160994Z" level=info msg="Start streaming server"
Dec 13 15:11:57.871198 systemd-hostnamed[1210]: Hostname set to (static)
Dec 13 15:11:57.931479 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 15:11:57.942974 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:11:57.943041 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 15:11:58.092161 systemd-networkd[1032]: eth0: Gained IPv6LL
Dec 13 15:11:58.095643 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 15:11:58.096362 systemd[1]: Reached target network-online.target.
Dec 13 15:11:58.099093 systemd[1]: Starting kubelet.service...
Dec 13 15:11:58.303876 sshd_keygen[1205]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 15:11:58.330009 systemd[1]: Finished sshd-keygen.service.
Dec 13 15:11:58.332260 systemd[1]: Starting issuegen.service...
Dec 13 15:11:58.339018 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 15:11:58.339166 systemd[1]: Finished issuegen.service.
Dec 13 15:11:58.341143 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 15:11:58.348958 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 15:11:58.350822 systemd[1]: Started getty@tty1.service.
Dec 13 15:11:58.352660 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 15:11:58.353298 systemd[1]: Reached target getty.target.
Dec 13 15:11:58.948303 systemd[1]: Started kubelet.service.
Dec 13 15:11:59.495256 systemd[1]: Created slice system-sshd.slice.
Dec 13 15:11:59.502472 systemd[1]: Started sshd@0-10.244.95.150:22-139.178.68.195:42678.service.
Dec 13 15:11:59.606960 systemd-networkd[1032]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:17e5:24:19ff:fef4:5f96/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:17e5:24:19ff:fef4:5f96/64 assigned by NDisc.
Dec 13 15:11:59.607618 systemd-networkd[1032]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 15:11:59.760303 kubelet[1267]: E1213 15:11:59.760078 1267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:11:59.762847 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:11:59.762992 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:11:59.763327 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
Dec 13 15:12:00.417606 sshd[1273]: Accepted publickey for core from 139.178.68.195 port 42678 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:00.423282 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:00.440634 systemd[1]: Created slice user-500.slice.
Dec 13 15:12:00.444713 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 15:12:00.450263 systemd-logind[1185]: New session 1 of user core.
Dec 13 15:12:00.456913 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 15:12:00.459321 systemd[1]: Starting user@500.service...
Dec 13 15:12:00.463967 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:00.550102 systemd[1278]: Queued start job for default target default.target.
Dec 13 15:12:00.551357 systemd[1278]: Reached target paths.target.
Dec 13 15:12:00.551526 systemd[1278]: Reached target sockets.target.
Dec 13 15:12:00.551620 systemd[1278]: Reached target timers.target.
Dec 13 15:12:00.551694 systemd[1278]: Reached target basic.target.
Dec 13 15:12:00.551813 systemd[1278]: Reached target default.target.
Dec 13 15:12:00.551945 systemd[1]: Started user@500.service.
Dec 13 15:12:00.552035 systemd[1278]: Startup finished in 74ms.
Dec 13 15:12:00.553982 systemd[1]: Started session-1.scope.
Dec 13 15:12:01.187455 systemd[1]: Started sshd@1-10.244.95.150:22-139.178.68.195:42684.service.
Dec 13 15:12:02.091049 sshd[1288]: Accepted publickey for core from 139.178.68.195 port 42684 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:02.095584 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:02.105481 systemd-logind[1185]: New session 2 of user core.
Dec 13 15:12:02.106210 systemd[1]: Started session-2.scope.
Dec 13 15:12:02.716617 sshd[1288]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:02.724634 systemd[1]: sshd@1-10.244.95.150:22-139.178.68.195:42684.service: Deactivated successfully.
Dec 13 15:12:02.726242 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 15:12:02.727280 systemd-logind[1185]: Session 2 logged out. Waiting for processes to exit.
Dec 13 15:12:02.729303 systemd-logind[1185]: Removed session 2.
Dec 13 15:12:02.872665 systemd[1]: Started sshd@2-10.244.95.150:22-139.178.68.195:42690.service.
Dec 13 15:12:03.770853 sshd[1294]: Accepted publickey for core from 139.178.68.195 port 42690 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:03.775592 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:03.785974 systemd[1]: Started session-3.scope.
Dec 13 15:12:03.786714 systemd-logind[1185]: New session 3 of user core.
Dec 13 15:12:04.398901 sshd[1294]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:04.406644 systemd-logind[1185]: Session 3 logged out. Waiting for processes to exit.
Dec 13 15:12:04.407395 systemd[1]: sshd@2-10.244.95.150:22-139.178.68.195:42690.service: Deactivated successfully.
Dec 13 15:12:04.409220 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 15:12:04.411236 systemd-logind[1185]: Removed session 3.
Dec 13 15:12:04.721077 coreos-metadata[1177]: Dec 13 15:12:04.720 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 15:12:04.773661 coreos-metadata[1177]: Dec 13 15:12:04.773 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Dec 13 15:12:04.800278 coreos-metadata[1177]: Dec 13 15:12:04.799 INFO Fetch successful
Dec 13 15:12:04.800858 coreos-metadata[1177]: Dec 13 15:12:04.800 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 15:12:04.830018 coreos-metadata[1177]: Dec 13 15:12:04.829 INFO Fetch successful
Dec 13 15:12:04.831523 unknown[1177]: wrote ssh authorized keys file for user: core
Dec 13 15:12:04.844552 update-ssh-keys[1301]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 15:12:04.845817 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 15:12:04.846674 systemd[1]: Reached target multi-user.target.
Dec 13 15:12:04.850333 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 15:12:04.861427 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 15:12:04.861852 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 15:12:04.862570 systemd[1]: Startup finished in 889ms (kernel) + 6.663s (initrd) + 12.113s (userspace) = 19.665s.
Dec 13 15:12:09.888065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 15:12:09.889502 systemd[1]: Stopped kubelet.service.
Dec 13 15:12:09.889604 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
Dec 13 15:12:09.893689 systemd[1]: Starting kubelet.service...
Dec 13 15:12:10.004132 systemd[1]: Started kubelet.service.
Dec 13 15:12:10.086208 kubelet[1307]: E1213 15:12:10.086139 1307 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:12:10.090513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:12:10.090676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:12:14.547595 systemd[1]: Started sshd@3-10.244.95.150:22-139.178.68.195:46360.service.
Dec 13 15:12:15.452452 sshd[1314]: Accepted publickey for core from 139.178.68.195 port 46360 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:15.455440 sshd[1314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:15.468215 systemd[1]: Started session-4.scope.
Dec 13 15:12:15.468269 systemd-logind[1185]: New session 4 of user core.
Dec 13 15:12:16.078264 sshd[1314]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:16.084657 systemd-logind[1185]: Session 4 logged out. Waiting for processes to exit.
Dec 13 15:12:16.085210 systemd[1]: sshd@3-10.244.95.150:22-139.178.68.195:46360.service: Deactivated successfully.
Dec 13 15:12:16.086691 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 15:12:16.088305 systemd-logind[1185]: Removed session 4.
Dec 13 15:12:16.228854 systemd[1]: Started sshd@4-10.244.95.150:22-139.178.68.195:34214.service.
Dec 13 15:12:17.131461 sshd[1320]: Accepted publickey for core from 139.178.68.195 port 34214 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:17.135318 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:17.146065 systemd[1]: Started session-5.scope.
Dec 13 15:12:17.146833 systemd-logind[1185]: New session 5 of user core.
Dec 13 15:12:17.751464 sshd[1320]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:17.759889 systemd[1]: sshd@4-10.244.95.150:22-139.178.68.195:34214.service: Deactivated successfully.
Dec 13 15:12:17.761804 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 15:12:17.763108 systemd-logind[1185]: Session 5 logged out. Waiting for processes to exit.
Dec 13 15:12:17.765428 systemd-logind[1185]: Removed session 5.
Dec 13 15:12:17.904043 systemd[1]: Started sshd@5-10.244.95.150:22-139.178.68.195:34216.service.
Dec 13 15:12:18.808013 sshd[1326]: Accepted publickey for core from 139.178.68.195 port 34216 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:18.812724 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:18.822360 systemd-logind[1185]: New session 6 of user core.
Dec 13 15:12:18.823317 systemd[1]: Started session-6.scope.
Dec 13 15:12:19.437005 sshd[1326]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:19.443874 systemd-logind[1185]: Session 6 logged out. Waiting for processes to exit.
Dec 13 15:12:19.445100 systemd[1]: sshd@5-10.244.95.150:22-139.178.68.195:34216.service: Deactivated successfully.
Dec 13 15:12:19.446569 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 15:12:19.448058 systemd-logind[1185]: Removed session 6.
Dec 13 15:12:19.587404 systemd[1]: Started sshd@6-10.244.95.150:22-139.178.68.195:34218.service.
Dec 13 15:12:20.137437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 15:12:20.137996 systemd[1]: Stopped kubelet.service.
Dec 13 15:12:20.142024 systemd[1]: Starting kubelet.service...
Dec 13 15:12:20.257572 systemd[1]: Started kubelet.service.
Dec 13 15:12:20.318742 kubelet[1338]: E1213 15:12:20.318684 1338 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 15:12:20.323621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 15:12:20.324008 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 15:12:20.492423 sshd[1332]: Accepted publickey for core from 139.178.68.195 port 34218 ssh2: RSA SHA256:BRWuvX4vngANWcecei9LW91Zd3OWx+vtbErQ53ehsZc
Dec 13 15:12:20.496717 sshd[1332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 15:12:20.506866 systemd-logind[1185]: New session 7 of user core.
Dec 13 15:12:20.508960 systemd[1]: Started session-7.scope.
Dec 13 15:12:20.982086 sudo[1345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 15:12:20.982338 sudo[1345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 15:12:21.006939 systemd[1]: Starting coreos-metadata.service...
Dec 13 15:12:28.061671 coreos-metadata[1349]: Dec 13 15:12:28.061 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 15:12:28.113091 coreos-metadata[1349]: Dec 13 15:12:28.112 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 15:12:28.114402 coreos-metadata[1349]: Dec 13 15:12:28.114 INFO Fetch successful
Dec 13 15:12:28.114900 coreos-metadata[1349]: Dec 13 15:12:28.114 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Dec 13 15:12:28.133617 coreos-metadata[1349]: Dec 13 15:12:28.133 INFO Fetch successful
Dec 13 15:12:28.134113 coreos-metadata[1349]: Dec 13 15:12:28.133 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Dec 13 15:12:28.146187 coreos-metadata[1349]: Dec 13 15:12:28.145 INFO Fetch successful
Dec 13 15:12:28.146641 coreos-metadata[1349]: Dec 13 15:12:28.146 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Dec 13 15:12:28.161342 coreos-metadata[1349]: Dec 13 15:12:28.161 INFO Fetch successful
Dec 13 15:12:28.161811 coreos-metadata[1349]: Dec 13 15:12:28.161 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Dec 13 15:12:28.179270 coreos-metadata[1349]: Dec 13 15:12:28.178 INFO Fetch successful
Dec 13 15:12:28.201733 systemd[1]: Finished coreos-metadata.service.
Dec 13 15:12:28.979388 systemd[1]: Stopped kubelet.service.
Dec 13 15:12:28.982519 systemd[1]: Starting kubelet.service...
Dec 13 15:12:29.003059 systemd[1]: Reloading.
Dec 13 15:12:29.110256 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2024-12-13T15:12:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 15:12:29.111108 /usr/lib/systemd/system-generators/torcx-generator[1413]: time="2024-12-13T15:12:29Z" level=info msg="torcx already run"
Dec 13 15:12:29.194311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 15:12:29.194526 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 15:12:29.213516 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 15:12:29.312023 systemd[1]: Started kubelet.service.
Dec 13 15:12:29.321933 systemd[1]: Stopping kubelet.service...
Dec 13 15:12:29.322719 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 15:12:29.323057 systemd[1]: Stopped kubelet.service.
Dec 13 15:12:29.325605 systemd[1]: Starting kubelet.service...
Dec 13 15:12:29.419419 systemd[1]: Started kubelet.service.
Dec 13 15:12:29.471103 kubelet[1470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:12:29.471103 kubelet[1470]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 15:12:29.471103 kubelet[1470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 15:12:29.471750 kubelet[1470]: I1213 15:12:29.471167 1470 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 15:12:29.646834 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 15:12:30.074044 kubelet[1470]: I1213 15:12:30.073773 1470 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 15:12:30.074183 kubelet[1470]: I1213 15:12:30.074171 1470 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 15:12:30.074682 kubelet[1470]: I1213 15:12:30.074666 1470 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 15:12:30.102781 kubelet[1470]: I1213 15:12:30.102746 1470 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 15:12:30.127804 kubelet[1470]: I1213 15:12:30.127779 1470 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 15:12:30.129493 kubelet[1470]: I1213 15:12:30.129473 1470 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 15:12:30.130002 kubelet[1470]: I1213 15:12:30.129983 1470 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 15:12:30.130788 kubelet[1470]: I1213 15:12:30.130773 1470 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 15:12:30.130890 kubelet[1470]: I1213 15:12:30.130879 1470 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 15:12:30.131089 kubelet[1470]: I1213 15:12:30.131079 1470 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 15:12:30.131301 kubelet[1470]: I1213 15:12:30.131291 1470 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 15:12:30.131373 kubelet[1470]: I1213 15:12:30.131364 1470 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 15:12:30.131506 kubelet[1470]: I1213 15:12:30.131496 1470 kubelet.go:312] "Adding apiserver pod source"
Dec 13 15:12:30.131596 kubelet[1470]: I1213 15:12:30.131587 1470 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 15:12:30.132949 kubelet[1470]: E1213 15:12:30.132916 1470 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:30.133051 kubelet[1470]: E1213 15:12:30.133035 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:30.134854 kubelet[1470]: I1213 15:12:30.134836 1470 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 15:12:30.138435 kubelet[1470]: I1213 15:12:30.138398 1470 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 15:12:30.139750 kubelet[1470]: W1213 15:12:30.139729 1470 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 15:12:30.139846 kubelet[1470]: E1213 15:12:30.139827 1470 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 15:12:30.139884 kubelet[1470]: W1213 15:12:30.139867 1470 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.244.95.150" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 15:12:30.139884 kubelet[1470]: E1213 15:12:30.139876 1470 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.244.95.150" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 15:12:30.140109 kubelet[1470]: W1213 15:12:30.140090 1470 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 15:12:30.142027 kubelet[1470]: I1213 15:12:30.142006 1470 server.go:1256] "Started kubelet"
Dec 13 15:12:30.148889 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 15:12:30.149045 kubelet[1470]: I1213 15:12:30.149030 1470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 15:12:30.153768 kubelet[1470]: E1213 15:12:30.153736 1470 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.244.95.150.1810c5437a3d32ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.244.95.150,UID:10.244.95.150,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.244.95.150,},FirstTimestamp:2024-12-13 15:12:30.141952698 +0000 UTC m=+0.718411911,LastTimestamp:2024-12-13 15:12:30.141952698 +0000 UTC m=+0.718411911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.244.95.150,}"
Dec 13 15:12:30.154811 kubelet[1470]: I1213 15:12:30.154799 1470 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 15:12:30.155843 kubelet[1470]: I1213 15:12:30.155829 1470 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 15:12:30.156957 kubelet[1470]: I1213 15:12:30.156941 1470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 15:12:30.157177 kubelet[1470]: I1213 15:12:30.157166 1470 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 15:12:30.159331 kubelet[1470]: I1213 15:12:30.159312 1470 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 15:12:30.161786 kubelet[1470]: I1213 15:12:30.160698 1470 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 15:12:30.161786 kubelet[1470]: I1213 15:12:30.160778 1470 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 15:12:30.162288 kubelet[1470]: I1213 15:12:30.162271 1470 factory.go:221] Registration of the systemd container factory successfully
Dec 13 15:12:30.162371 kubelet[1470]: I1213 15:12:30.162349 1470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 15:12:30.164045 kubelet[1470]: I1213 15:12:30.164027 1470 factory.go:221] Registration of the containerd container factory successfully
Dec 13 15:12:30.174102 kubelet[1470]: E1213 15:12:30.174085 1470 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 15:12:30.178223 kubelet[1470]: E1213 15:12:30.178203 1470 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.244.95.150\" not found" node="10.244.95.150"
Dec 13 15:12:30.179667 kubelet[1470]: I1213 15:12:30.179638 1470 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 15:12:30.179783 kubelet[1470]: I1213 15:12:30.179768 1470 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 15:12:30.179891 kubelet[1470]: I1213 15:12:30.179880 1470 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 15:12:30.181411 kubelet[1470]: I1213 15:12:30.181390 1470 policy_none.go:49] "None policy: Start"
Dec 13 15:12:30.181981 kubelet[1470]: I1213 15:12:30.181966 1470 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 15:12:30.182055 kubelet[1470]: I1213 15:12:30.181997 1470 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 15:12:30.197683 systemd[1]: Created slice kubepods.slice.
Dec 13 15:12:30.204652 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 15:12:30.207471 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 15:12:30.214568 kubelet[1470]: I1213 15:12:30.214539 1470 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 15:12:30.214813 kubelet[1470]: I1213 15:12:30.214773 1470 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 15:12:30.218945 kubelet[1470]: E1213 15:12:30.218929 1470 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.244.95.150\" not found"
Dec 13 15:12:30.261443 kubelet[1470]: I1213 15:12:30.261411 1470 kubelet_node_status.go:73] "Attempting to register node" node="10.244.95.150"
Dec 13 15:12:30.268090 kubelet[1470]: I1213 15:12:30.267951 1470 kubelet_node_status.go:76] "Successfully registered node" node="10.244.95.150"
Dec 13 15:12:30.276002 kubelet[1470]: I1213 15:12:30.275978 1470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 15:12:30.278312 kubelet[1470]: I1213 15:12:30.278295 1470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 15:12:30.278395 kubelet[1470]: I1213 15:12:30.278332 1470 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 15:12:30.278434 kubelet[1470]: I1213 15:12:30.278396 1470 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 15:12:30.278463 kubelet[1470]: E1213 15:12:30.278455 1470 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 15:12:30.279395 kubelet[1470]: E1213 15:12:30.279369 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.381873 kubelet[1470]: E1213 15:12:30.379544 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.480323 kubelet[1470]: E1213 15:12:30.480260 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.582057 kubelet[1470]: E1213 15:12:30.581883 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.682529 kubelet[1470]: E1213 15:12:30.682352 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.783534 kubelet[1470]: E1213 15:12:30.783364 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.884557 kubelet[1470]: E1213 15:12:30.884374 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:30.965552 sudo[1345]: pam_unix(sudo:session): session closed for user root
Dec 13 15:12:30.985469 kubelet[1470]: E1213 15:12:30.985342 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:31.077740 kubelet[1470]: I1213 15:12:31.077660 1470 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 15:12:31.078087 kubelet[1470]: W1213 15:12:31.078055 1470 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 15:12:31.078191 kubelet[1470]: W1213 15:12:31.078131 1470 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 15:12:31.086068 kubelet[1470]: E1213 15:12:31.086002 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:31.113036 sshd[1332]: pam_unix(sshd:session): session closed for user core
Dec 13 15:12:31.120198 systemd[1]: sshd@6-10.244.95.150:22-139.178.68.195:34218.service: Deactivated successfully.
Dec 13 15:12:31.121746 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 15:12:31.123010 systemd-logind[1185]: Session 7 logged out. Waiting for processes to exit.
Dec 13 15:12:31.124849 systemd-logind[1185]: Removed session 7.
Dec 13 15:12:31.133855 kubelet[1470]: E1213 15:12:31.133809 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:31.187267 kubelet[1470]: E1213 15:12:31.187213 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:31.288803 kubelet[1470]: E1213 15:12:31.288588 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:31.390205 kubelet[1470]: E1213 15:12:31.390136 1470 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.244.95.150\" not found"
Dec 13 15:12:31.492420 kubelet[1470]: I1213 15:12:31.492363 1470 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 15:12:31.493525 env[1193]: time="2024-12-13T15:12:31.493358156Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 15:12:31.494338 kubelet[1470]: I1213 15:12:31.494028 1470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 15:12:32.134780 kubelet[1470]: E1213 15:12:32.134526 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:32.135713 kubelet[1470]: I1213 15:12:32.135669 1470 apiserver.go:52] "Watching apiserver"
Dec 13 15:12:32.152724 kubelet[1470]: I1213 15:12:32.152681 1470 topology_manager.go:215] "Topology Admit Handler" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" podNamespace="kube-system" podName="cilium-wqlsp"
Dec 13 15:12:32.152997 kubelet[1470]: I1213 15:12:32.152978 1470 topology_manager.go:215] "Topology Admit Handler" podUID="f72c5d43-dc70-4691-a00a-c05baf534f28" podNamespace="kube-system" podName="kube-proxy-5vgvl"
Dec 13 15:12:32.161301 kubelet[1470]: I1213 15:12:32.161137 1470 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 15:12:32.161171 systemd[1]: Created slice kubepods-besteffort-podf72c5d43_dc70_4691_a00a_c05baf534f28.slice.
Dec 13 15:12:32.170494 systemd[1]: Created slice kubepods-burstable-podb571842d_a1fa_46ee_8350_c490d5a28eb8.slice.
Dec 13 15:12:32.172670 kubelet[1470]: I1213 15:12:32.172637 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b571842d-a1fa-46ee-8350-c490d5a28eb8-clustermesh-secrets\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172798 kubelet[1470]: I1213 15:12:32.172700 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-config-path\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172798 kubelet[1470]: I1213 15:12:32.172732 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpvh\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-kube-api-access-7tpvh\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172798 kubelet[1470]: I1213 15:12:32.172765 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-hostproc\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172798 kubelet[1470]: I1213 15:12:32.172786 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-cgroup\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172806 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-etc-cni-netd\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172826 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f72c5d43-dc70-4691-a00a-c05baf534f28-kube-proxy\") pod \"kube-proxy-5vgvl\" (UID: \"f72c5d43-dc70-4691-a00a-c05baf534f28\") " pod="kube-system/kube-proxy-5vgvl"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172844 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f72c5d43-dc70-4691-a00a-c05baf534f28-lib-modules\") pod \"kube-proxy-5vgvl\" (UID: \"f72c5d43-dc70-4691-a00a-c05baf534f28\") " pod="kube-system/kube-proxy-5vgvl"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172865 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cni-path\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172891 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-lib-modules\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.172921 kubelet[1470]: I1213 15:12:32.172915 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-xtables-lock\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.173102 kubelet[1470]: I1213 15:12:32.172938 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-run\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.173102 kubelet[1470]: I1213 15:12:32.172957 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-net\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.173102 kubelet[1470]: I1213 15:12:32.172983 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrc4m\" (UniqueName: \"kubernetes.io/projected/f72c5d43-dc70-4691-a00a-c05baf534f28-kube-api-access-qrc4m\") pod \"kube-proxy-5vgvl\" (UID: \"f72c5d43-dc70-4691-a00a-c05baf534f28\") " pod="kube-system/kube-proxy-5vgvl"
Dec 13 15:12:32.173102 kubelet[1470]: I1213 15:12:32.173016 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f72c5d43-dc70-4691-a00a-c05baf534f28-xtables-lock\") pod \"kube-proxy-5vgvl\" (UID: \"f72c5d43-dc70-4691-a00a-c05baf534f28\") " pod="kube-system/kube-proxy-5vgvl"
Dec 13 15:12:32.173102 kubelet[1470]: I1213 15:12:32.173042 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-bpf-maps\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.173247 kubelet[1470]: I1213 15:12:32.173062 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-kernel\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.173247 kubelet[1470]: I1213 15:12:32.173082 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-hubble-tls\") pod \"cilium-wqlsp\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") " pod="kube-system/cilium-wqlsp"
Dec 13 15:12:32.470017 env[1193]: time="2024-12-13T15:12:32.469905128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vgvl,Uid:f72c5d43-dc70-4691-a00a-c05baf534f28,Namespace:kube-system,Attempt:0,}"
Dec 13 15:12:32.479138 env[1193]: time="2024-12-13T15:12:32.478634620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqlsp,Uid:b571842d-a1fa-46ee-8350-c490d5a28eb8,Namespace:kube-system,Attempt:0,}"
Dec 13 15:12:33.135679 kubelet[1470]: E1213 15:12:33.135595 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:33.294241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588688717.mount: Deactivated successfully.
Dec 13 15:12:33.299201 env[1193]: time="2024-12-13T15:12:33.299164016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.300231 env[1193]: time="2024-12-13T15:12:33.300206406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.300951 env[1193]: time="2024-12-13T15:12:33.300931878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.302319 env[1193]: time="2024-12-13T15:12:33.302296113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.303238 env[1193]: time="2024-12-13T15:12:33.303218113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.304777 env[1193]: time="2024-12-13T15:12:33.304739835Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.307334 env[1193]: time="2024-12-13T15:12:33.307310744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.315670 env[1193]: time="2024-12-13T15:12:33.315640558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:33.333183 env[1193]: time="2024-12-13T15:12:33.333113394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:12:33.333325 env[1193]: time="2024-12-13T15:12:33.333185850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:12:33.333325 env[1193]: time="2024-12-13T15:12:33.333208715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:12:33.333869 env[1193]: time="2024-12-13T15:12:33.333493735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:12:33.333869 env[1193]: time="2024-12-13T15:12:33.333541291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:12:33.333869 env[1193]: time="2024-12-13T15:12:33.333563894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:12:33.334181 env[1193]: time="2024-12-13T15:12:33.334134609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdf63f5d487d35fe45a42031857cebdb3b4aecaa79bc61713c1a8635457ef516 pid=1530 runtime=io.containerd.runc.v2
Dec 13 15:12:33.334386 env[1193]: time="2024-12-13T15:12:33.334347651Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2 pid=1533 runtime=io.containerd.runc.v2
Dec 13 15:12:33.359134 systemd[1]: Started cri-containerd-e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2.scope.
Dec 13 15:12:33.377855 systemd[1]: Started cri-containerd-fdf63f5d487d35fe45a42031857cebdb3b4aecaa79bc61713c1a8635457ef516.scope.
Dec 13 15:12:33.410327 env[1193]: time="2024-12-13T15:12:33.410212853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqlsp,Uid:b571842d-a1fa-46ee-8350-c490d5a28eb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\""
Dec 13 15:12:33.413133 env[1193]: time="2024-12-13T15:12:33.413099884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 15:12:33.418739 env[1193]: time="2024-12-13T15:12:33.418709408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5vgvl,Uid:f72c5d43-dc70-4691-a00a-c05baf534f28,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf63f5d487d35fe45a42031857cebdb3b4aecaa79bc61713c1a8635457ef516\""
Dec 13 15:12:34.136968 kubelet[1470]: E1213 15:12:34.136847 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:35.137384 kubelet[1470]: E1213 15:12:35.137296 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:36.138353 kubelet[1470]: E1213 15:12:36.138304 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:37.139654 kubelet[1470]: E1213 15:12:37.139536 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:38.139902 kubelet[1470]: E1213 15:12:38.139808 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:39.140656 kubelet[1470]: E1213 15:12:39.140496 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:40.141347 kubelet[1470]: E1213 15:12:40.141286 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:41.142217 kubelet[1470]: E1213 15:12:41.142040 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:42.143323 kubelet[1470]: E1213 15:12:42.143254 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:43.143544 kubelet[1470]: E1213 15:12:43.143441 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:43.199143 update_engine[1187]: I1213 15:12:43.197880 1187 update_attempter.cc:509] Updating boot flags...
Dec 13 15:12:44.144750 kubelet[1470]: E1213 15:12:44.144620 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:45.145312 kubelet[1470]: E1213 15:12:45.145219 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:45.180663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424412155.mount: Deactivated successfully.
Dec 13 15:12:46.146653 kubelet[1470]: E1213 15:12:46.145663 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:47.146392 kubelet[1470]: E1213 15:12:47.146301 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:48.146687 kubelet[1470]: E1213 15:12:48.146589 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:48.212048 env[1193]: time="2024-12-13T15:12:48.211892374Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:48.214147 env[1193]: time="2024-12-13T15:12:48.214098977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:48.215975 env[1193]: time="2024-12-13T15:12:48.215922743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:48.216555 env[1193]: time="2024-12-13T15:12:48.216475019Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 15:12:48.220376 env[1193]: time="2024-12-13T15:12:48.220332714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 15:12:48.223310 env[1193]: time="2024-12-13T15:12:48.223152341Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 15:12:48.235351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150646811.mount: Deactivated successfully.
Dec 13 15:12:48.240413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336464817.mount: Deactivated successfully.
Dec 13 15:12:48.243830 env[1193]: time="2024-12-13T15:12:48.243795936Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\""
Dec 13 15:12:48.244851 env[1193]: time="2024-12-13T15:12:48.244826434Z" level=info msg="StartContainer for \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\""
Dec 13 15:12:48.274041 systemd[1]: Started cri-containerd-9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe.scope.
Dec 13 15:12:48.309328 env[1193]: time="2024-12-13T15:12:48.306987050Z" level=info msg="StartContainer for \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\" returns successfully"
Dec 13 15:12:48.323850 systemd[1]: cri-containerd-9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe.scope: Deactivated successfully.
Dec 13 15:12:48.435230 env[1193]: time="2024-12-13T15:12:48.433787287Z" level=info msg="shim disconnected" id=9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe
Dec 13 15:12:48.435230 env[1193]: time="2024-12-13T15:12:48.433880916Z" level=warning msg="cleaning up after shim disconnected" id=9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe namespace=k8s.io
Dec 13 15:12:48.435230 env[1193]: time="2024-12-13T15:12:48.433905885Z" level=info msg="cleaning up dead shim"
Dec 13 15:12:48.450834 env[1193]: time="2024-12-13T15:12:48.450786039Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:12:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1665 runtime=io.containerd.runc.v2\n"
Dec 13 15:12:49.147701 kubelet[1470]: E1213 15:12:49.147645 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:49.236680 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe-rootfs.mount: Deactivated successfully.
Dec 13 15:12:49.362370 env[1193]: time="2024-12-13T15:12:49.362286053Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 15:12:49.381282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212851200.mount: Deactivated successfully.
Dec 13 15:12:49.394900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount400629507.mount: Deactivated successfully.
Dec 13 15:12:49.398465 env[1193]: time="2024-12-13T15:12:49.398036009Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\""
Dec 13 15:12:49.398837 env[1193]: time="2024-12-13T15:12:49.398810358Z" level=info msg="StartContainer for \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\""
Dec 13 15:12:49.431895 systemd[1]: Started cri-containerd-e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1.scope.
Dec 13 15:12:49.471024 env[1193]: time="2024-12-13T15:12:49.470980211Z" level=info msg="StartContainer for \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\" returns successfully"
Dec 13 15:12:49.481949 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 15:12:49.482189 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 15:12:49.482638 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 15:12:49.485332 systemd[1]: Starting systemd-sysctl.service...
Dec 13 15:12:49.490253 systemd[1]: cri-containerd-e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1.scope: Deactivated successfully.
Dec 13 15:12:49.497691 systemd[1]: Finished systemd-sysctl.service.
Dec 13 15:12:49.545085 env[1193]: time="2024-12-13T15:12:49.545023287Z" level=info msg="shim disconnected" id=e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1
Dec 13 15:12:49.545085 env[1193]: time="2024-12-13T15:12:49.545069575Z" level=warning msg="cleaning up after shim disconnected" id=e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1 namespace=k8s.io
Dec 13 15:12:49.545085 env[1193]: time="2024-12-13T15:12:49.545079451Z" level=info msg="cleaning up dead shim"
Dec 13 15:12:49.565564 env[1193]: time="2024-12-13T15:12:49.565519490Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:12:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1731 runtime=io.containerd.runc.v2\n"
Dec 13 15:12:50.132019 kubelet[1470]: E1213 15:12:50.131901 1470 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:50.148439 kubelet[1470]: E1213 15:12:50.148044 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:50.234086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330740649.mount: Deactivated successfully.
Dec 13 15:12:50.320743 env[1193]: time="2024-12-13T15:12:50.320633524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:50.322737 env[1193]: time="2024-12-13T15:12:50.322687657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:50.323435 env[1193]: time="2024-12-13T15:12:50.323399725Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:50.325461 env[1193]: time="2024-12-13T15:12:50.325428400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:12:50.326628 env[1193]: time="2024-12-13T15:12:50.326578071Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 15:12:50.329691 env[1193]: time="2024-12-13T15:12:50.329655318Z" level=info msg="CreateContainer within sandbox \"fdf63f5d487d35fe45a42031857cebdb3b4aecaa79bc61713c1a8635457ef516\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 15:12:50.340387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479915225.mount: Deactivated successfully.
Dec 13 15:12:50.346411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162013436.mount: Deactivated successfully.
Dec 13 15:12:50.350020 env[1193]: time="2024-12-13T15:12:50.349979959Z" level=info msg="CreateContainer within sandbox \"fdf63f5d487d35fe45a42031857cebdb3b4aecaa79bc61713c1a8635457ef516\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c53f465d4d7148e095849b65e65f4c616aef14e5a089a6b8d0b459621b65088\""
Dec 13 15:12:50.350584 env[1193]: time="2024-12-13T15:12:50.350556258Z" level=info msg="StartContainer for \"9c53f465d4d7148e095849b65e65f4c616aef14e5a089a6b8d0b459621b65088\""
Dec 13 15:12:50.363681 env[1193]: time="2024-12-13T15:12:50.363649796Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 15:12:50.393418 systemd[1]: Started cri-containerd-9c53f465d4d7148e095849b65e65f4c616aef14e5a089a6b8d0b459621b65088.scope.
Dec 13 15:12:50.428009 env[1193]: time="2024-12-13T15:12:50.427946995Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\""
Dec 13 15:12:50.429506 env[1193]: time="2024-12-13T15:12:50.429475059Z" level=info msg="StartContainer for \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\""
Dec 13 15:12:50.446417 env[1193]: time="2024-12-13T15:12:50.446374597Z" level=info msg="StartContainer for \"9c53f465d4d7148e095849b65e65f4c616aef14e5a089a6b8d0b459621b65088\" returns successfully"
Dec 13 15:12:50.472344 systemd[1]: Started cri-containerd-ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7.scope.
Dec 13 15:12:50.514885 env[1193]: time="2024-12-13T15:12:50.514835163Z" level=info msg="StartContainer for \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\" returns successfully"
Dec 13 15:12:50.522525 systemd[1]: cri-containerd-ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7.scope: Deactivated successfully.
Dec 13 15:12:50.558339 env[1193]: time="2024-12-13T15:12:50.558295473Z" level=info msg="shim disconnected" id=ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7
Dec 13 15:12:50.558556 env[1193]: time="2024-12-13T15:12:50.558538864Z" level=warning msg="cleaning up after shim disconnected" id=ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7 namespace=k8s.io
Dec 13 15:12:50.558619 env[1193]: time="2024-12-13T15:12:50.558607574Z" level=info msg="cleaning up dead shim"
Dec 13 15:12:50.567679 env[1193]: time="2024-12-13T15:12:50.567647099Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:12:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1855 runtime=io.containerd.runc.v2\n"
Dec 13 15:12:51.148571 kubelet[1470]: E1213 15:12:51.148495 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:51.377338 env[1193]: time="2024-12-13T15:12:51.376140774Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 15:12:51.386514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999762899.mount: Deactivated successfully.
Dec 13 15:12:51.392235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690178226.mount: Deactivated successfully.
Dec 13 15:12:51.394369 env[1193]: time="2024-12-13T15:12:51.394333019Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\""
Dec 13 15:12:51.395005 env[1193]: time="2024-12-13T15:12:51.394983026Z" level=info msg="StartContainer for \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\""
Dec 13 15:12:51.408206 kubelet[1470]: I1213 15:12:51.407794 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5vgvl" podStartSLOduration=4.500305058 podStartE2EDuration="21.407699092s" podCreationTimestamp="2024-12-13 15:12:30 +0000 UTC" firstStartedPulling="2024-12-13 15:12:33.419717933 +0000 UTC m=+3.996177147" lastFinishedPulling="2024-12-13 15:12:50.327111935 +0000 UTC m=+20.903571181" observedRunningTime="2024-12-13 15:12:51.406934608 +0000 UTC m=+21.983393836" watchObservedRunningTime="2024-12-13 15:12:51.407699092 +0000 UTC m=+21.984158328"
Dec 13 15:12:51.414199 systemd[1]: Started cri-containerd-089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8.scope.
Dec 13 15:12:51.443980 systemd[1]: cri-containerd-089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8.scope: Deactivated successfully.
Dec 13 15:12:51.446081 env[1193]: time="2024-12-13T15:12:51.445997280Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb571842d_a1fa_46ee_8350_c490d5a28eb8.slice/cri-containerd-089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8.scope/memory.events\": no such file or directory"
Dec 13 15:12:51.446657 env[1193]: time="2024-12-13T15:12:51.446626796Z" level=info msg="StartContainer for \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\" returns successfully"
Dec 13 15:12:51.468279 env[1193]: time="2024-12-13T15:12:51.468234838Z" level=info msg="shim disconnected" id=089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8
Dec 13 15:12:51.468279 env[1193]: time="2024-12-13T15:12:51.468278717Z" level=warning msg="cleaning up after shim disconnected" id=089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8 namespace=k8s.io
Dec 13 15:12:51.468523 env[1193]: time="2024-12-13T15:12:51.468287875Z" level=info msg="cleaning up dead shim"
Dec 13 15:12:51.476248 env[1193]: time="2024-12-13T15:12:51.476211509Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:12:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2005 runtime=io.containerd.runc.v2\n"
Dec 13 15:12:52.149264 kubelet[1470]: E1213 15:12:52.149048 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:52.389326 env[1193]: time="2024-12-13T15:12:52.389267749Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 15:12:52.406583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044006916.mount: Deactivated successfully.
Dec 13 15:12:52.411865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838353210.mount: Deactivated successfully.
Dec 13 15:12:52.414325 env[1193]: time="2024-12-13T15:12:52.414286958Z" level=info msg="CreateContainer within sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\""
Dec 13 15:12:52.414793 env[1193]: time="2024-12-13T15:12:52.414735312Z" level=info msg="StartContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\""
Dec 13 15:12:52.430959 systemd[1]: Started cri-containerd-e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4.scope.
Dec 13 15:12:52.470733 env[1193]: time="2024-12-13T15:12:52.470692410Z" level=info msg="StartContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" returns successfully"
Dec 13 15:12:52.584052 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 15:12:52.615945 kubelet[1470]: I1213 15:12:52.615124 1470 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 15:12:52.919881 kernel: Initializing XFRM netlink socket
Dec 13 15:12:52.924810 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 15:12:53.150983 kubelet[1470]: E1213 15:12:53.150091 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:53.435448 kubelet[1470]: I1213 15:12:53.435368 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-wqlsp" podStartSLOduration=8.629320274 podStartE2EDuration="23.435143281s" podCreationTimestamp="2024-12-13 15:12:30 +0000 UTC" firstStartedPulling="2024-12-13 15:12:33.412389508 +0000 UTC m=+3.988848721" lastFinishedPulling="2024-12-13 15:12:48.218212514 +0000 UTC m=+18.794671728" observedRunningTime="2024-12-13 15:12:53.433956829 +0000 UTC m=+24.010416082" watchObservedRunningTime="2024-12-13 15:12:53.435143281 +0000 UTC m=+24.011602535"
Dec 13 15:12:54.151730 kubelet[1470]: E1213 15:12:54.151658 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:54.649644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 15:12:54.649857 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 15:12:54.647224 systemd-networkd[1032]: cilium_host: Link UP
Dec 13 15:12:54.647599 systemd-networkd[1032]: cilium_net: Link UP
Dec 13 15:12:54.648315 systemd-networkd[1032]: cilium_net: Gained carrier
Dec 13 15:12:54.651237 systemd-networkd[1032]: cilium_host: Gained carrier
Dec 13 15:12:54.764983 systemd-networkd[1032]: cilium_host: Gained IPv6LL
Dec 13 15:12:54.795271 systemd-networkd[1032]: cilium_vxlan: Link UP
Dec 13 15:12:54.795283 systemd-networkd[1032]: cilium_vxlan: Gained carrier
Dec 13 15:12:55.008919 kubelet[1470]: I1213 15:12:55.007266 1470 topology_manager.go:215] "Topology Admit Handler" podUID="0bfaff8c-80ad-4cd2-9615-4099eda2adbe" podNamespace="default" podName="nginx-deployment-6d5f899847-rxm2z"
Dec 13 15:12:55.014667 systemd[1]: Created slice kubepods-besteffort-pod0bfaff8c_80ad_4cd2_9615_4099eda2adbe.slice.
Dec 13 15:12:55.051192 kubelet[1470]: I1213 15:12:55.051156 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lmk\" (UniqueName: \"kubernetes.io/projected/0bfaff8c-80ad-4cd2-9615-4099eda2adbe-kube-api-access-94lmk\") pod \"nginx-deployment-6d5f899847-rxm2z\" (UID: \"0bfaff8c-80ad-4cd2-9615-4099eda2adbe\") " pod="default/nginx-deployment-6d5f899847-rxm2z"
Dec 13 15:12:55.068791 kernel: NET: Registered PF_ALG protocol family
Dec 13 15:12:55.075886 systemd-networkd[1032]: cilium_net: Gained IPv6LL
Dec 13 15:12:55.153706 kubelet[1470]: E1213 15:12:55.153671 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:55.319895 env[1193]: time="2024-12-13T15:12:55.319320259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rxm2z,Uid:0bfaff8c-80ad-4cd2-9615-4099eda2adbe,Namespace:default,Attempt:0,}"
Dec 13 15:12:55.798141 systemd-networkd[1032]: lxc_health: Link UP
Dec 13 15:12:55.819941 systemd-networkd[1032]: cilium_vxlan: Gained IPv6LL
Dec 13 15:12:55.823568 systemd-networkd[1032]: lxc_health: Gained carrier
Dec 13 15:12:55.823804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 15:12:56.154959 kubelet[1470]: E1213 15:12:56.154909 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:56.370769 systemd-networkd[1032]: lxcb6ba8f76cc2d: Link UP
Dec 13 15:12:56.378841 kernel: eth0: renamed from tmp84407
Dec 13 15:12:56.385055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb6ba8f76cc2d: link becomes ready
Dec 13 15:12:56.387359 systemd-networkd[1032]: lxcb6ba8f76cc2d: Gained carrier
Dec 13 15:12:57.155391 kubelet[1470]: E1213 15:12:57.155307 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:57.484068 systemd-networkd[1032]: lxcb6ba8f76cc2d: Gained IPv6LL
Dec 13 15:12:57.548056 systemd-networkd[1032]: lxc_health: Gained IPv6LL
Dec 13 15:12:58.157326 kubelet[1470]: E1213 15:12:58.157179 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:12:59.159082 kubelet[1470]: E1213 15:12:59.158947 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:00.160387 kubelet[1470]: E1213 15:13:00.160338 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:00.238020 env[1193]: time="2024-12-13T15:13:00.237936768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:13:00.238561 env[1193]: time="2024-12-13T15:13:00.237994047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:13:00.238561 env[1193]: time="2024-12-13T15:13:00.238007056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:13:00.238561 env[1193]: time="2024-12-13T15:13:00.238263602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf pid=2525 runtime=io.containerd.runc.v2
Dec 13 15:13:00.262128 systemd[1]: run-containerd-runc-k8s.io-84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf-runc.bfLrVM.mount: Deactivated successfully.
Dec 13 15:13:00.276253 systemd[1]: Started cri-containerd-84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf.scope.
Dec 13 15:13:00.324549 env[1193]: time="2024-12-13T15:13:00.324484017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rxm2z,Uid:0bfaff8c-80ad-4cd2-9615-4099eda2adbe,Namespace:default,Attempt:0,} returns sandbox id \"84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf\""
Dec 13 15:13:00.326729 env[1193]: time="2024-12-13T15:13:00.326697642Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 15:13:01.161084 kubelet[1470]: E1213 15:13:01.160999 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:02.161488 kubelet[1470]: E1213 15:13:02.161414 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:03.162081 kubelet[1470]: E1213 15:13:03.161924 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:03.665415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3314028671.mount: Deactivated successfully.
Dec 13 15:13:04.163336 kubelet[1470]: E1213 15:13:04.163089 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:05.163817 kubelet[1470]: E1213 15:13:05.163618 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:05.264471 env[1193]: time="2024-12-13T15:13:05.264318770Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:05.268178 env[1193]: time="2024-12-13T15:13:05.268097844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:05.270327 env[1193]: time="2024-12-13T15:13:05.270284323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:05.271012 env[1193]: time="2024-12-13T15:13:05.270978023Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:05.272856 env[1193]: time="2024-12-13T15:13:05.272819340Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 15:13:05.276189 env[1193]: time="2024-12-13T15:13:05.276157570Z" level=info msg="CreateContainer within sandbox \"84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 15:13:05.288412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881091877.mount: Deactivated successfully.
Dec 13 15:13:05.293566 env[1193]: time="2024-12-13T15:13:05.293508684Z" level=info msg="CreateContainer within sandbox \"84407a832870d148c552ee027bf809c0fa7af363bf01cd39a3e481990ec2a4cf\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6f8ca4019f432ef9f2a248bfb84e24f3c2f7a4bf9db67c0ccf5eaa4d0a6fc4b4\""
Dec 13 15:13:05.294604 env[1193]: time="2024-12-13T15:13:05.294580882Z" level=info msg="StartContainer for \"6f8ca4019f432ef9f2a248bfb84e24f3c2f7a4bf9db67c0ccf5eaa4d0a6fc4b4\""
Dec 13 15:13:05.318690 systemd[1]: Started cri-containerd-6f8ca4019f432ef9f2a248bfb84e24f3c2f7a4bf9db67c0ccf5eaa4d0a6fc4b4.scope.
Dec 13 15:13:05.367783 env[1193]: time="2024-12-13T15:13:05.367719174Z" level=info msg="StartContainer for \"6f8ca4019f432ef9f2a248bfb84e24f3c2f7a4bf9db67c0ccf5eaa4d0a6fc4b4\" returns successfully"
Dec 13 15:13:05.464500 kubelet[1470]: I1213 15:13:05.463441 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-rxm2z" podStartSLOduration=6.515609184 podStartE2EDuration="11.4632829s" podCreationTimestamp="2024-12-13 15:12:54 +0000 UTC" firstStartedPulling="2024-12-13 15:13:00.325748721 +0000 UTC m=+30.902207939" lastFinishedPulling="2024-12-13 15:13:05.27342244 +0000 UTC m=+35.849881655" observedRunningTime="2024-12-13 15:13:05.461636338 +0000 UTC m=+36.038095609" watchObservedRunningTime="2024-12-13 15:13:05.4632829 +0000 UTC m=+36.039742190"
Dec 13 15:13:06.164021 kubelet[1470]: E1213 15:13:06.163952 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:07.165505 kubelet[1470]: E1213 15:13:07.165421 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:08.167523 kubelet[1470]: E1213 15:13:08.167457 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:09.169328 kubelet[1470]: E1213 15:13:09.169235 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:10.132815 kubelet[1470]: E1213 15:13:10.132686 1470 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:10.169920 kubelet[1470]: E1213 15:13:10.169850 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:11.171141 kubelet[1470]: E1213 15:13:11.171036 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:12.172199 kubelet[1470]: E1213 15:13:12.172130 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:13.173950 kubelet[1470]: E1213 15:13:13.173880 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:14.175145 kubelet[1470]: E1213 15:13:14.175066 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:15.175928 kubelet[1470]: E1213 15:13:15.175823 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:16.176703 kubelet[1470]: E1213 15:13:16.176606 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:16.273380 kubelet[1470]: I1213 15:13:16.273323 1470 topology_manager.go:215] "Topology Admit Handler" podUID="0855aaf0-6a87-49f2-b478-8137188c944c" podNamespace="default" podName="nfs-server-provisioner-0"
Dec 13 15:13:16.285015 systemd[1]: Created slice kubepods-besteffort-pod0855aaf0_6a87_49f2_b478_8137188c944c.slice.
Dec 13 15:13:16.308652 kubelet[1470]: I1213 15:13:16.308602 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0855aaf0-6a87-49f2-b478-8137188c944c-data\") pod \"nfs-server-provisioner-0\" (UID: \"0855aaf0-6a87-49f2-b478-8137188c944c\") " pod="default/nfs-server-provisioner-0"
Dec 13 15:13:16.308652 kubelet[1470]: I1213 15:13:16.308661 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpm2\" (UniqueName: \"kubernetes.io/projected/0855aaf0-6a87-49f2-b478-8137188c944c-kube-api-access-wkpm2\") pod \"nfs-server-provisioner-0\" (UID: \"0855aaf0-6a87-49f2-b478-8137188c944c\") " pod="default/nfs-server-provisioner-0"
Dec 13 15:13:16.591734 env[1193]: time="2024-12-13T15:13:16.590721193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0855aaf0-6a87-49f2-b478-8137188c944c,Namespace:default,Attempt:0,}"
Dec 13 15:13:16.640405 systemd-networkd[1032]: lxc88180b598f67: Link UP
Dec 13 15:13:16.651831 kernel: eth0: renamed from tmp51c76
Dec 13 15:13:16.659031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 15:13:16.659120 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc88180b598f67: link becomes ready
Dec 13 15:13:16.659629 systemd-networkd[1032]: lxc88180b598f67: Gained carrier
Dec 13 15:13:16.862866 env[1193]: time="2024-12-13T15:13:16.861901602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:13:16.862866 env[1193]: time="2024-12-13T15:13:16.861958478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:13:16.862866 env[1193]: time="2024-12-13T15:13:16.861973245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:13:16.863782 env[1193]: time="2024-12-13T15:13:16.863361960Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa pid=2656 runtime=io.containerd.runc.v2
Dec 13 15:13:16.881512 systemd[1]: Started cri-containerd-51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa.scope.
Dec 13 15:13:16.937144 env[1193]: time="2024-12-13T15:13:16.937090992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0855aaf0-6a87-49f2-b478-8137188c944c,Namespace:default,Attempt:0,} returns sandbox id \"51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa\""
Dec 13 15:13:16.939031 env[1193]: time="2024-12-13T15:13:16.939002445Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 15:13:17.177253 kubelet[1470]: E1213 15:13:17.177153 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:17.432006 systemd[1]: run-containerd-runc-k8s.io-51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa-runc.9KqSBG.mount: Deactivated successfully.
Dec 13 15:13:18.033468 systemd-networkd[1032]: lxc88180b598f67: Gained IPv6LL
Dec 13 15:13:18.178197 kubelet[1470]: E1213 15:13:18.178126 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:19.179855 kubelet[1470]: E1213 15:13:19.179606 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:20.180927 kubelet[1470]: E1213 15:13:20.180798 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:20.262101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount935993516.mount: Deactivated successfully.
Dec 13 15:13:21.181534 kubelet[1470]: E1213 15:13:21.181422 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:22.182515 kubelet[1470]: E1213 15:13:22.182343 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:22.478180 env[1193]: time="2024-12-13T15:13:22.476980436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:22.482101 env[1193]: time="2024-12-13T15:13:22.482038071Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:22.483394 env[1193]: time="2024-12-13T15:13:22.483329945Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 15:13:22.485513 env[1193]: time="2024-12-13T15:13:22.484067715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:22.485513 env[1193]: time="2024-12-13T15:13:22.484774255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:22.487339 env[1193]: time="2024-12-13T15:13:22.487294576Z" level=info msg="CreateContainer within sandbox \"51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 15:13:22.498586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656075222.mount: Deactivated successfully.
Dec 13 15:13:22.505084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779190428.mount: Deactivated successfully.
Dec 13 15:13:22.507468 env[1193]: time="2024-12-13T15:13:22.507428463Z" level=info msg="CreateContainer within sandbox \"51c76acca2b46fdf9461e08934a2f15699f58cdd4a43143b97d82525d02f35aa\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ce7ca280c67e22785101790304467979736d50029308537b5a7564708ee256eb\""
Dec 13 15:13:22.508251 env[1193]: time="2024-12-13T15:13:22.508220583Z" level=info msg="StartContainer for \"ce7ca280c67e22785101790304467979736d50029308537b5a7564708ee256eb\""
Dec 13 15:13:22.536391 systemd[1]: Started cri-containerd-ce7ca280c67e22785101790304467979736d50029308537b5a7564708ee256eb.scope.
Dec 13 15:13:22.577848 env[1193]: time="2024-12-13T15:13:22.577809941Z" level=info msg="StartContainer for \"ce7ca280c67e22785101790304467979736d50029308537b5a7564708ee256eb\" returns successfully"
Dec 13 15:13:23.182887 kubelet[1470]: E1213 15:13:23.182718 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:24.183964 kubelet[1470]: E1213 15:13:24.183896 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:25.185411 kubelet[1470]: E1213 15:13:25.185333 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:26.187227 kubelet[1470]: E1213 15:13:26.186961 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:27.187507 kubelet[1470]: E1213 15:13:27.187416 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:28.188835 kubelet[1470]: E1213 15:13:28.188740 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:29.191035 kubelet[1470]: E1213 15:13:29.190945 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:30.131950 kubelet[1470]: E1213 15:13:30.131851 1470 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:30.192256 kubelet[1470]: E1213 15:13:30.192185 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:31.193966 kubelet[1470]: E1213 15:13:31.193872 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:31.851225 kubelet[1470]: I1213 15:13:31.851152 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.305363224 podStartE2EDuration="15.850930011s" podCreationTimestamp="2024-12-13 15:13:16 +0000 UTC" firstStartedPulling="2024-12-13 15:13:16.938471898 +0000 UTC m=+47.514931110" lastFinishedPulling="2024-12-13 15:13:22.484038629 +0000 UTC m=+53.060497897" observedRunningTime="2024-12-13 15:13:23.544834423 +0000 UTC m=+54.121293690" watchObservedRunningTime="2024-12-13 15:13:31.850930011 +0000 UTC m=+62.427389343"
Dec 13 15:13:31.852323 kubelet[1470]: I1213 15:13:31.852240 1470 topology_manager.go:215] "Topology Admit Handler" podUID="fd5e3669-b908-4fad-be6e-47b1e71031a5" podNamespace="default" podName="test-pod-1"
Dec 13 15:13:31.862282 systemd[1]: Created slice kubepods-besteffort-podfd5e3669_b908_4fad_be6e_47b1e71031a5.slice.
Dec 13 15:13:31.933491 kubelet[1470]: I1213 15:13:31.933415 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a07a0086-b890-4481-aada-7456884cd2ab\" (UniqueName: \"kubernetes.io/nfs/fd5e3669-b908-4fad-be6e-47b1e71031a5-pvc-a07a0086-b890-4481-aada-7456884cd2ab\") pod \"test-pod-1\" (UID: \"fd5e3669-b908-4fad-be6e-47b1e71031a5\") " pod="default/test-pod-1"
Dec 13 15:13:31.934158 kubelet[1470]: I1213 15:13:31.934098 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cttnr\" (UniqueName: \"kubernetes.io/projected/fd5e3669-b908-4fad-be6e-47b1e71031a5-kube-api-access-cttnr\") pod \"test-pod-1\" (UID: \"fd5e3669-b908-4fad-be6e-47b1e71031a5\") " pod="default/test-pod-1"
Dec 13 15:13:32.077790 kernel: FS-Cache: Loaded
Dec 13 15:13:32.125263 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 15:13:32.125539 kernel: RPC: Registered udp transport module.
Dec 13 15:13:32.125626 kernel: RPC: Registered tcp transport module.
Dec 13 15:13:32.126801 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 15:13:32.194954 kubelet[1470]: E1213 15:13:32.194405 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:32.196020 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 15:13:32.392058 kernel: NFS: Registering the id_resolver key type
Dec 13 15:13:32.392284 kernel: Key type id_resolver registered
Dec 13 15:13:32.392369 kernel: Key type id_legacy registered
Dec 13 15:13:32.444771 nfsidmap[2776]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 15:13:32.452542 nfsidmap[2779]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Dec 13 15:13:32.768484 env[1193]: time="2024-12-13T15:13:32.767700606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd5e3669-b908-4fad-be6e-47b1e71031a5,Namespace:default,Attempt:0,}"
Dec 13 15:13:32.807076 systemd-networkd[1032]: lxc5a3188b21b75: Link UP
Dec 13 15:13:32.814994 kernel: eth0: renamed from tmpfa470
Dec 13 15:13:32.825421 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 15:13:32.825515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5a3188b21b75: link becomes ready
Dec 13 15:13:32.826009 systemd-networkd[1032]: lxc5a3188b21b75: Gained carrier
Dec 13 15:13:33.057377 env[1193]: time="2024-12-13T15:13:33.056748237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 15:13:33.057621 env[1193]: time="2024-12-13T15:13:33.056834518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 15:13:33.057778 env[1193]: time="2024-12-13T15:13:33.057724625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 15:13:33.058085 env[1193]: time="2024-12-13T15:13:33.058037750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d pid=2816 runtime=io.containerd.runc.v2
Dec 13 15:13:33.080415 systemd[1]: run-containerd-runc-k8s.io-fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d-runc.ObrTZe.mount: Deactivated successfully.
Dec 13 15:13:33.085992 systemd[1]: Started cri-containerd-fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d.scope.
Dec 13 15:13:33.137067 env[1193]: time="2024-12-13T15:13:33.137021502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fd5e3669-b908-4fad-be6e-47b1e71031a5,Namespace:default,Attempt:0,} returns sandbox id \"fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d\""
Dec 13 15:13:33.138677 env[1193]: time="2024-12-13T15:13:33.138521472Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 15:13:33.195314 kubelet[1470]: E1213 15:13:33.195215 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:33.491735 env[1193]: time="2024-12-13T15:13:33.491658079Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:33.494718 env[1193]: time="2024-12-13T15:13:33.493672939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:33.496266 env[1193]: time="2024-12-13T15:13:33.496230935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:33.498404 env[1193]: time="2024-12-13T15:13:33.498373921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 15:13:33.499312 env[1193]: time="2024-12-13T15:13:33.499201066Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 15:13:33.501622 env[1193]: time="2024-12-13T15:13:33.501585249Z" level=info msg="CreateContainer within sandbox \"fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 15:13:33.511293 env[1193]: time="2024-12-13T15:13:33.511256575Z" level=info msg="CreateContainer within sandbox \"fa4706f761974fb82f881c97bde610afb9834d2aaca0e9e95d137ae10a1f065d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"832c5e179f2aef5e9574bb29c910b4cbe9594f130cd886eb6691163fde031597\""
Dec 13 15:13:33.512055 env[1193]: time="2024-12-13T15:13:33.512029724Z" level=info msg="StartContainer for \"832c5e179f2aef5e9574bb29c910b4cbe9594f130cd886eb6691163fde031597\""
Dec 13 15:13:33.528838 systemd[1]: Started cri-containerd-832c5e179f2aef5e9574bb29c910b4cbe9594f130cd886eb6691163fde031597.scope.
Dec 13 15:13:33.561717 env[1193]: time="2024-12-13T15:13:33.561668542Z" level=info msg="StartContainer for \"832c5e179f2aef5e9574bb29c910b4cbe9594f130cd886eb6691163fde031597\" returns successfully"
Dec 13 15:13:34.196312 kubelet[1470]: E1213 15:13:34.196247 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:34.571451 kubelet[1470]: I1213 15:13:34.570859 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.209258125 podStartE2EDuration="16.570776046s" podCreationTimestamp="2024-12-13 15:13:18 +0000 UTC" firstStartedPulling="2024-12-13 15:13:33.138222678 +0000 UTC m=+63.714681895" lastFinishedPulling="2024-12-13 15:13:33.499740593 +0000 UTC m=+64.076199816" observedRunningTime="2024-12-13 15:13:34.570409629 +0000 UTC m=+65.146868979" watchObservedRunningTime="2024-12-13 15:13:34.570776046 +0000 UTC m=+65.147235308"
Dec 13 15:13:34.796278 systemd-networkd[1032]: lxc5a3188b21b75: Gained IPv6LL
Dec 13 15:13:35.197388 kubelet[1470]: E1213 15:13:35.197308 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:36.198654 kubelet[1470]: E1213 15:13:36.198583 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:37.199539 kubelet[1470]: E1213 15:13:37.199442 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:38.199996 kubelet[1470]: E1213 15:13:38.199916 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:39.201515 kubelet[1470]: E1213 15:13:39.201397 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:40.202320 kubelet[1470]: E1213 15:13:40.202212 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:41.203200 kubelet[1470]: E1213 15:13:41.203132 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:42.204573 kubelet[1470]: E1213 15:13:42.204505 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:43.205736 kubelet[1470]: E1213 15:13:43.205604 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:44.207434 kubelet[1470]: E1213 15:13:44.207356 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 15:13:44.722061 systemd[1]: run-containerd-runc-k8s.io-e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4-runc.is1hu0.mount: Deactivated successfully.
Dec 13 15:13:44.740395 env[1193]: time="2024-12-13T15:13:44.740329435Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 15:13:44.745515 env[1193]: time="2024-12-13T15:13:44.745475892Z" level=info msg="StopContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" with timeout 2 (s)"
Dec 13 15:13:44.745849 env[1193]: time="2024-12-13T15:13:44.745826298Z" level=info msg="Stop container \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" with signal terminated"
Dec 13 15:13:44.753742 systemd-networkd[1032]: lxc_health: Link DOWN
Dec 13 15:13:44.753764 systemd-networkd[1032]: lxc_health: Lost carrier
Dec 13 15:13:44.787156 systemd[1]: cri-containerd-e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4.scope: Deactivated successfully.
Dec 13 15:13:44.787499 systemd[1]: cri-containerd-e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4.scope: Consumed 7.171s CPU time.
Dec 13 15:13:44.809415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4-rootfs.mount: Deactivated successfully.
Dec 13 15:13:44.816698 env[1193]: time="2024-12-13T15:13:44.816578664Z" level=info msg="shim disconnected" id=e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4
Dec 13 15:13:44.817372 env[1193]: time="2024-12-13T15:13:44.817311428Z" level=warning msg="cleaning up after shim disconnected" id=e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4 namespace=k8s.io
Dec 13 15:13:44.817600 env[1193]: time="2024-12-13T15:13:44.817560305Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:44.830580 env[1193]: time="2024-12-13T15:13:44.830539494Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2942 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:44.831806 env[1193]: time="2024-12-13T15:13:44.831771777Z" level=info msg="StopContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" returns successfully"
Dec 13 15:13:44.832417 env[1193]: time="2024-12-13T15:13:44.832389061Z" level=info msg="StopPodSandbox for \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\""
Dec 13 15:13:44.832564 env[1193]: time="2024-12-13T15:13:44.832545200Z" level=info msg="Container to stop \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.832644 env[1193]: time="2024-12-13T15:13:44.832628220Z" level=info msg="Container to stop \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.832733 env[1193]: time="2024-12-13T15:13:44.832718183Z" level=info msg="Container to stop \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.832827 env[1193]: time="2024-12-13T15:13:44.832812447Z" level=info msg="Container to stop \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.832893 env[1193]: time="2024-12-13T15:13:44.832878550Z" level=info msg="Container to stop \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 15:13:44.834768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2-shm.mount: Deactivated successfully.
Dec 13 15:13:44.841638 systemd[1]: cri-containerd-e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2.scope: Deactivated successfully.
Dec 13 15:13:44.866736 env[1193]: time="2024-12-13T15:13:44.866662671Z" level=info msg="shim disconnected" id=e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2
Dec 13 15:13:44.866736 env[1193]: time="2024-12-13T15:13:44.866727954Z" level=warning msg="cleaning up after shim disconnected" id=e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2 namespace=k8s.io
Dec 13 15:13:44.866736 env[1193]: time="2024-12-13T15:13:44.866738372Z" level=info msg="cleaning up dead shim"
Dec 13 15:13:44.875462 env[1193]: time="2024-12-13T15:13:44.875423789Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2973 runtime=io.containerd.runc.v2\n"
Dec 13 15:13:44.875753 env[1193]: time="2024-12-13T15:13:44.875728377Z" level=info msg="TearDown network for sandbox \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" successfully"
Dec 13 15:13:44.875832 env[1193]: time="2024-12-13T15:13:44.875753629Z" level=info msg="StopPodSandbox for \"e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2\" returns successfully"
Dec 13 15:13:45.031818 kubelet[1470]: I1213 15:13:45.029423 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cni-path\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.032195 kubelet[1470]: I1213 15:13:45.029560 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cni-path" (OuterVolumeSpecName: "cni-path") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.032386 kubelet[1470]: I1213 15:13:45.032366 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-net\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.032587 kubelet[1470]: I1213 15:13:45.032573 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-config-path\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.033963 kubelet[1470]: I1213 15:13:45.033914 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-hubble-tls\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034089 kubelet[1470]: I1213 15:13:45.034001 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-cgroup\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034089 kubelet[1470]: I1213 15:13:45.034052 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-bpf-maps\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034202 kubelet[1470]: I1213 15:13:45.034101 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-lib-modules\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034202 kubelet[1470]: I1213 15:13:45.034153 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-xtables-lock\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034308 kubelet[1470]: I1213 15:13:45.034210 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b571842d-a1fa-46ee-8350-c490d5a28eb8-clustermesh-secrets\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034308 kubelet[1470]: I1213 15:13:45.034266 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tpvh\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-kube-api-access-7tpvh\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034415 kubelet[1470]: I1213 15:13:45.034317 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-hostproc\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034415 kubelet[1470]: I1213 15:13:45.034364 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-etc-cni-netd\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034533 kubelet[1470]: I1213 15:13:45.034412 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-run\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034533 kubelet[1470]: I1213 15:13:45.034464 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-kernel\") pod \"b571842d-a1fa-46ee-8350-c490d5a28eb8\" (UID: \"b571842d-a1fa-46ee-8350-c490d5a28eb8\") "
Dec 13 15:13:45.034637 kubelet[1470]: I1213 15:13:45.034537 1470 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cni-path\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.034637 kubelet[1470]: I1213 15:13:45.032503 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.037050 kubelet[1470]: I1213 15:13:45.037017 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 15:13:45.042080 kubelet[1470]: I1213 15:13:45.042054 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b571842d-a1fa-46ee-8350-c490d5a28eb8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 15:13:45.042231 kubelet[1470]: I1213 15:13:45.042211 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042326 kubelet[1470]: I1213 15:13:45.042313 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042403 kubelet[1470]: I1213 15:13:45.042391 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042479 kubelet[1470]: I1213 15:13:45.042468 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042561 kubelet[1470]: I1213 15:13:45.042549 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042710 kubelet[1470]: I1213 15:13:45.042665 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:13:45.042787 kubelet[1470]: I1213 15:13:45.042741 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042827 kubelet[1470]: I1213 15:13:45.042795 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.042864 kubelet[1470]: I1213 15:13:45.042823 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-hostproc" (OuterVolumeSpecName: "hostproc") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 15:13:45.046335 kubelet[1470]: I1213 15:13:45.046309 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-kube-api-access-7tpvh" (OuterVolumeSpecName: "kube-api-access-7tpvh") pod "b571842d-a1fa-46ee-8350-c490d5a28eb8" (UID: "b571842d-a1fa-46ee-8350-c490d5a28eb8"). InnerVolumeSpecName "kube-api-access-7tpvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 15:13:45.135207 kubelet[1470]: I1213 15:13:45.135139 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-config-path\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.135533 kubelet[1470]: I1213 15:13:45.135504 1470 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-hubble-tls\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.135825 kubelet[1470]: I1213 15:13:45.135750 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-cgroup\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.136051 kubelet[1470]: I1213 15:13:45.136027 1470 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-bpf-maps\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.136232 kubelet[1470]: I1213 15:13:45.136208 1470 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b571842d-a1fa-46ee-8350-c490d5a28eb8-clustermesh-secrets\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.136426 kubelet[1470]: I1213 15:13:45.136403 1470 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7tpvh\" (UniqueName: \"kubernetes.io/projected/b571842d-a1fa-46ee-8350-c490d5a28eb8-kube-api-access-7tpvh\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.136600 kubelet[1470]: I1213 15:13:45.136577 1470 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-hostproc\") on node \"10.244.95.150\" DevicePath \"\""
Dec 13 15:13:45.136812
kubelet[1470]: I1213 15:13:45.136789 1470 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-etc-cni-netd\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.136994 kubelet[1470]: I1213 15:13:45.136971 1470 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-lib-modules\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.137187 kubelet[1470]: I1213 15:13:45.137138 1470 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-xtables-lock\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.137380 kubelet[1470]: I1213 15:13:45.137357 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-cilium-run\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.137560 kubelet[1470]: I1213 15:13:45.137538 1470 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-kernel\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.137790 kubelet[1470]: I1213 15:13:45.137736 1470 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b571842d-a1fa-46ee-8350-c490d5a28eb8-host-proc-sys-net\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:45.208559 kubelet[1470]: E1213 15:13:45.208473 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:45.238028 kubelet[1470]: E1213 15:13:45.237978 1470 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Dec 13 15:13:45.599143 kubelet[1470]: I1213 15:13:45.599100 1470 scope.go:117] "RemoveContainer" containerID="e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4" Dec 13 15:13:45.608450 systemd[1]: Removed slice kubepods-burstable-podb571842d_a1fa_46ee_8350_c490d5a28eb8.slice. Dec 13 15:13:45.608684 systemd[1]: kubepods-burstable-podb571842d_a1fa_46ee_8350_c490d5a28eb8.slice: Consumed 7.305s CPU time. Dec 13 15:13:45.611234 env[1193]: time="2024-12-13T15:13:45.611160982Z" level=info msg="RemoveContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\"" Dec 13 15:13:45.615214 env[1193]: time="2024-12-13T15:13:45.615141599Z" level=info msg="RemoveContainer for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" returns successfully" Dec 13 15:13:45.625997 kubelet[1470]: I1213 15:13:45.625926 1470 scope.go:117] "RemoveContainer" containerID="089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8" Dec 13 15:13:45.636833 env[1193]: time="2024-12-13T15:13:45.636402427Z" level=info msg="RemoveContainer for \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\"" Dec 13 15:13:45.639452 env[1193]: time="2024-12-13T15:13:45.639403663Z" level=info msg="RemoveContainer for \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\" returns successfully" Dec 13 15:13:45.639995 kubelet[1470]: I1213 15:13:45.639940 1470 scope.go:117] "RemoveContainer" containerID="ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7" Dec 13 15:13:45.644511 env[1193]: time="2024-12-13T15:13:45.644469017Z" level=info msg="RemoveContainer for \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\"" Dec 13 15:13:45.647000 env[1193]: time="2024-12-13T15:13:45.646961309Z" level=info msg="RemoveContainer for \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\" returns successfully" Dec 13 15:13:45.647393 
kubelet[1470]: I1213 15:13:45.647366 1470 scope.go:117] "RemoveContainer" containerID="e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1" Dec 13 15:13:45.649360 env[1193]: time="2024-12-13T15:13:45.649304246Z" level=info msg="RemoveContainer for \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\"" Dec 13 15:13:45.654073 env[1193]: time="2024-12-13T15:13:45.653979711Z" level=info msg="RemoveContainer for \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\" returns successfully" Dec 13 15:13:45.654351 kubelet[1470]: I1213 15:13:45.654332 1470 scope.go:117] "RemoveContainer" containerID="9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe" Dec 13 15:13:45.655931 env[1193]: time="2024-12-13T15:13:45.655903505Z" level=info msg="RemoveContainer for \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\"" Dec 13 15:13:45.657709 env[1193]: time="2024-12-13T15:13:45.657680913Z" level=info msg="RemoveContainer for \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\" returns successfully" Dec 13 15:13:45.658025 kubelet[1470]: I1213 15:13:45.658001 1470 scope.go:117] "RemoveContainer" containerID="e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4" Dec 13 15:13:45.658412 env[1193]: time="2024-12-13T15:13:45.658292387Z" level=error msg="ContainerStatus for \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\": not found" Dec 13 15:13:45.658642 kubelet[1470]: E1213 15:13:45.658615 1470 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\": not found" containerID="e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4" Dec 13 
15:13:45.658910 kubelet[1470]: I1213 15:13:45.658893 1470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4"} err="failed to get container status \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"e45a50bc036d56c394d0b8750b22aed19b5d4f56b7c1c5c0a1c432fa3aad63a4\": not found" Dec 13 15:13:45.659010 kubelet[1470]: I1213 15:13:45.659001 1470 scope.go:117] "RemoveContainer" containerID="089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8" Dec 13 15:13:45.659312 env[1193]: time="2024-12-13T15:13:45.659257169Z" level=error msg="ContainerStatus for \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\": not found" Dec 13 15:13:45.659571 kubelet[1470]: E1213 15:13:45.659548 1470 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\": not found" containerID="089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8" Dec 13 15:13:45.659656 kubelet[1470]: I1213 15:13:45.659616 1470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8"} err="failed to get container status \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"089342635a62f01cda4e08671be554155d8ebc6ea5c4f0e33dbd0d3b464b7bd8\": not found" Dec 13 15:13:45.659656 kubelet[1470]: I1213 15:13:45.659637 1470 scope.go:117] "RemoveContainer" 
containerID="ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7" Dec 13 15:13:45.660125 env[1193]: time="2024-12-13T15:13:45.660036462Z" level=error msg="ContainerStatus for \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\": not found" Dec 13 15:13:45.660281 kubelet[1470]: E1213 15:13:45.660263 1470 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\": not found" containerID="ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7" Dec 13 15:13:45.660388 kubelet[1470]: I1213 15:13:45.660377 1470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7"} err="failed to get container status \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab7d55494e56cae6f03c12bc5c52ff4b53048c732d002b907ea667a51c4f0db7\": not found" Dec 13 15:13:45.660482 kubelet[1470]: I1213 15:13:45.660472 1470 scope.go:117] "RemoveContainer" containerID="e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1" Dec 13 15:13:45.660900 env[1193]: time="2024-12-13T15:13:45.660809582Z" level=error msg="ContainerStatus for \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\": not found" Dec 13 15:13:45.661075 kubelet[1470]: E1213 15:13:45.661057 1470 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\": not found" containerID="e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1" Dec 13 15:13:45.661140 kubelet[1470]: I1213 15:13:45.661103 1470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1"} err="failed to get container status \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e164c9f302679264a8b59028b1b95e46d7b20047ef8bf36463e494889ffca6a1\": not found" Dec 13 15:13:45.661140 kubelet[1470]: I1213 15:13:45.661119 1470 scope.go:117] "RemoveContainer" containerID="9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe" Dec 13 15:13:45.661400 env[1193]: time="2024-12-13T15:13:45.661355057Z" level=error msg="ContainerStatus for \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\": not found" Dec 13 15:13:45.661646 kubelet[1470]: E1213 15:13:45.661629 1470 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\": not found" containerID="9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe" Dec 13 15:13:45.661722 kubelet[1470]: I1213 15:13:45.661665 1470 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe"} err="failed to get container status \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"9503ea694d8b9fabf864d0f9b43c2edb3bded9090712b3313320eb8d7b209dbe\": not found" Dec 13 15:13:45.720014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e357c7c56839400c1bd03c3a2cdb6d48dcb0c082406a85d0b17fedd1a3d724e2-rootfs.mount: Deactivated successfully. Dec 13 15:13:45.720263 systemd[1]: var-lib-kubelet-pods-b571842d\x2da1fa\x2d46ee\x2d8350\x2dc490d5a28eb8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7tpvh.mount: Deactivated successfully. Dec 13 15:13:45.720413 systemd[1]: var-lib-kubelet-pods-b571842d\x2da1fa\x2d46ee\x2d8350\x2dc490d5a28eb8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 15:13:45.720548 systemd[1]: var-lib-kubelet-pods-b571842d\x2da1fa\x2d46ee\x2d8350\x2dc490d5a28eb8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 15:13:46.209840 kubelet[1470]: E1213 15:13:46.209718 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:46.285254 kubelet[1470]: I1213 15:13:46.285174 1470 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" path="/var/lib/kubelet/pods/b571842d-a1fa-46ee-8350-c490d5a28eb8/volumes" Dec 13 15:13:47.210071 kubelet[1470]: E1213 15:13:47.209972 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:48.210809 kubelet[1470]: E1213 15:13:48.210733 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:49.119811 kubelet[1470]: I1213 15:13:49.119706 1470 topology_manager.go:215] "Topology Admit Handler" podUID="32d2d03c-65b4-484d-9123-4dbd1c8d1cb3" podNamespace="kube-system" podName="cilium-operator-5cc964979-kp559" Dec 13 15:13:49.121517 kubelet[1470]: E1213 15:13:49.120267 1470 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="apply-sysctl-overwrites" Dec 13 15:13:49.121517 kubelet[1470]: E1213 15:13:49.120317 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="clean-cilium-state" Dec 13 15:13:49.121517 kubelet[1470]: E1213 15:13:49.120339 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="mount-cgroup" Dec 13 15:13:49.121517 kubelet[1470]: E1213 15:13:49.120357 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="cilium-agent" Dec 13 15:13:49.121517 kubelet[1470]: E1213 15:13:49.120374 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="mount-bpf-fs" Dec 13 15:13:49.121517 kubelet[1470]: I1213 15:13:49.120439 1470 memory_manager.go:354] "RemoveStaleState removing state" podUID="b571842d-a1fa-46ee-8350-c490d5a28eb8" containerName="cilium-agent" Dec 13 15:13:49.125327 kubelet[1470]: I1213 15:13:49.125268 1470 topology_manager.go:215] "Topology Admit Handler" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" podNamespace="kube-system" podName="cilium-8nfpg" Dec 13 15:13:49.131953 systemd[1]: Created slice kubepods-besteffort-pod32d2d03c_65b4_484d_9123_4dbd1c8d1cb3.slice. Dec 13 15:13:49.145166 systemd[1]: Created slice kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice. 
Dec 13 15:13:49.166711 kubelet[1470]: I1213 15:13:49.166635 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-net\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.166938 kubelet[1470]: I1213 15:13:49.166925 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-hubble-tls\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.167183 kubelet[1470]: I1213 15:13:49.167169 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-cgroup\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.167316 kubelet[1470]: I1213 15:13:49.167305 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cni-path\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.167463 kubelet[1470]: I1213 15:13:49.167444 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-etc-cni-netd\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.167608 kubelet[1470]: I1213 15:13:49.167589 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-m9fks\" (UniqueName: \"kubernetes.io/projected/32d2d03c-65b4-484d-9123-4dbd1c8d1cb3-kube-api-access-m9fks\") pod \"cilium-operator-5cc964979-kp559\" (UID: \"32d2d03c-65b4-484d-9123-4dbd1c8d1cb3\") " pod="kube-system/cilium-operator-5cc964979-kp559" Dec 13 15:13:49.167751 kubelet[1470]: I1213 15:13:49.167730 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-lib-modules\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.167956 kubelet[1470]: I1213 15:13:49.167914 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-ipsec-secrets\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168095 kubelet[1470]: I1213 15:13:49.168084 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgm96\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-kube-api-access-mgm96\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168300 kubelet[1470]: I1213 15:13:49.168284 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-clustermesh-secrets\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168437 kubelet[1470]: I1213 15:13:49.168425 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-run\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168613 kubelet[1470]: I1213 15:13:49.168573 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-hostproc\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168749 kubelet[1470]: I1213 15:13:49.168738 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-xtables-lock\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.168943 kubelet[1470]: I1213 15:13:49.168905 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-kernel\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.169073 kubelet[1470]: I1213 15:13:49.169062 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32d2d03c-65b4-484d-9123-4dbd1c8d1cb3-cilium-config-path\") pod \"cilium-operator-5cc964979-kp559\" (UID: \"32d2d03c-65b4-484d-9123-4dbd1c8d1cb3\") " pod="kube-system/cilium-operator-5cc964979-kp559" Dec 13 15:13:49.169231 kubelet[1470]: I1213 15:13:49.169213 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-bpf-maps\") pod 
\"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.169376 kubelet[1470]: I1213 15:13:49.169357 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-config-path\") pod \"cilium-8nfpg\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " pod="kube-system/cilium-8nfpg" Dec 13 15:13:49.212173 kubelet[1470]: E1213 15:13:49.212056 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:49.443562 env[1193]: time="2024-12-13T15:13:49.443283821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kp559,Uid:32d2d03c-65b4-484d-9123-4dbd1c8d1cb3,Namespace:kube-system,Attempt:0,}" Dec 13 15:13:49.460139 env[1193]: time="2024-12-13T15:13:49.458692811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nfpg,Uid:e2aa5dcc-9401-4b87-9842-40a5078629d4,Namespace:kube-system,Attempt:0,}" Dec 13 15:13:49.477420 env[1193]: time="2024-12-13T15:13:49.477351389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:13:49.477617 env[1193]: time="2024-12-13T15:13:49.477588776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:13:49.477711 env[1193]: time="2024-12-13T15:13:49.477692150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:13:49.478026 env[1193]: time="2024-12-13T15:13:49.477986390Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0852b390f0bf64e4e26a9f4ef3e25d6b22a6dbeaabe03e48372a55d0c7570117 pid=3004 runtime=io.containerd.runc.v2 Dec 13 15:13:49.478724 env[1193]: time="2024-12-13T15:13:49.478672150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:13:49.478840 env[1193]: time="2024-12-13T15:13:49.478706437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:13:49.478840 env[1193]: time="2024-12-13T15:13:49.478738382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:13:49.478979 env[1193]: time="2024-12-13T15:13:49.478950354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e pid=3015 runtime=io.containerd.runc.v2 Dec 13 15:13:49.496078 systemd[1]: Started cri-containerd-0852b390f0bf64e4e26a9f4ef3e25d6b22a6dbeaabe03e48372a55d0c7570117.scope. Dec 13 15:13:49.515120 systemd[1]: Started cri-containerd-d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e.scope. 
Dec 13 15:13:49.545959 env[1193]: time="2024-12-13T15:13:49.545913909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nfpg,Uid:e2aa5dcc-9401-4b87-9842-40a5078629d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\"" Dec 13 15:13:49.548893 env[1193]: time="2024-12-13T15:13:49.548862688Z" level=info msg="CreateContainer within sandbox \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:13:49.560096 env[1193]: time="2024-12-13T15:13:49.560061116Z" level=info msg="CreateContainer within sandbox \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" Dec 13 15:13:49.560550 env[1193]: time="2024-12-13T15:13:49.560529664Z" level=info msg="StartContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" Dec 13 15:13:49.571262 env[1193]: time="2024-12-13T15:13:49.571229354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kp559,Uid:32d2d03c-65b4-484d-9123-4dbd1c8d1cb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0852b390f0bf64e4e26a9f4ef3e25d6b22a6dbeaabe03e48372a55d0c7570117\"" Dec 13 15:13:49.572651 env[1193]: time="2024-12-13T15:13:49.572622815Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 15:13:49.587535 systemd[1]: Started cri-containerd-be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8.scope. Dec 13 15:13:49.604813 systemd[1]: cri-containerd-be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8.scope: Deactivated successfully. 
Dec 13 15:13:49.620481 env[1193]: time="2024-12-13T15:13:49.620332266Z" level=info msg="shim disconnected" id=be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8 Dec 13 15:13:49.620481 env[1193]: time="2024-12-13T15:13:49.620459726Z" level=warning msg="cleaning up after shim disconnected" id=be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8 namespace=k8s.io Dec 13 15:13:49.620481 env[1193]: time="2024-12-13T15:13:49.620484853Z" level=info msg="cleaning up dead shim" Dec 13 15:13:49.635330 env[1193]: time="2024-12-13T15:13:49.635275546Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3105 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T15:13:49Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 15:13:49.635954 env[1193]: time="2024-12-13T15:13:49.635802031Z" level=error msg="copy shim log" error="read /proc/self/fd/67: file already closed" Dec 13 15:13:49.636853 env[1193]: time="2024-12-13T15:13:49.636123614Z" level=error msg="Failed to pipe stdout of container \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" error="reading from a closed fifo" Dec 13 15:13:49.637026 env[1193]: time="2024-12-13T15:13:49.636994504Z" level=error msg="Failed to pipe stderr of container \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" error="reading from a closed fifo" Dec 13 15:13:49.637903 env[1193]: time="2024-12-13T15:13:49.637846918Z" level=error msg="StartContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 15:13:49.638340 kubelet[1470]: E1213 15:13:49.638317 1470 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8" Dec 13 15:13:49.638929 kubelet[1470]: E1213 15:13:49.638887 1470 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 15:13:49.638929 kubelet[1470]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 15:13:49.638929 kubelet[1470]: rm /hostbin/cilium-mount Dec 13 15:13:49.639115 kubelet[1470]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mgm96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8nfpg_kube-system(e2aa5dcc-9401-4b87-9842-40a5078629d4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 15:13:49.639115 kubelet[1470]: E1213 15:13:49.639011 1470 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8nfpg" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" Dec 13 15:13:50.132741 kubelet[1470]: E1213 15:13:50.132597 1470 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:50.213903 kubelet[1470]: E1213 15:13:50.213750 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:50.239583 kubelet[1470]: E1213 15:13:50.239500 1470 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:13:50.628482 env[1193]: time="2024-12-13T15:13:50.628211880Z" 
level=info msg="CreateContainer within sandbox \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Dec 13 15:13:50.643360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311078392.mount: Deactivated successfully. Dec 13 15:13:50.646705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount754477939.mount: Deactivated successfully. Dec 13 15:13:50.648061 env[1193]: time="2024-12-13T15:13:50.647461123Z" level=info msg="CreateContainer within sandbox \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\"" Dec 13 15:13:50.649034 env[1193]: time="2024-12-13T15:13:50.648989839Z" level=info msg="StartContainer for \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\"" Dec 13 15:13:50.665391 systemd[1]: Started cri-containerd-f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624.scope. Dec 13 15:13:50.677735 systemd[1]: cri-containerd-f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624.scope: Deactivated successfully. 
Dec 13 15:13:50.687579 env[1193]: time="2024-12-13T15:13:50.687512936Z" level=info msg="shim disconnected" id=f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624 Dec 13 15:13:50.687579 env[1193]: time="2024-12-13T15:13:50.687565881Z" level=warning msg="cleaning up after shim disconnected" id=f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624 namespace=k8s.io Dec 13 15:13:50.687579 env[1193]: time="2024-12-13T15:13:50.687576952Z" level=info msg="cleaning up dead shim" Dec 13 15:13:50.698877 env[1193]: time="2024-12-13T15:13:50.698818927Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3142 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T15:13:50Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 15:13:50.699160 env[1193]: time="2024-12-13T15:13:50.699065037Z" level=error msg="copy shim log" error="read /proc/self/fd/72: file already closed" Dec 13 15:13:50.699897 env[1193]: time="2024-12-13T15:13:50.699853055Z" level=error msg="Failed to pipe stdout of container \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\"" error="reading from a closed fifo" Dec 13 15:13:50.699997 env[1193]: time="2024-12-13T15:13:50.699915454Z" level=error msg="Failed to pipe stderr of container \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\"" error="reading from a closed fifo" Dec 13 15:13:50.700810 env[1193]: time="2024-12-13T15:13:50.700773264Z" level=error msg="StartContainer for \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 15:13:50.701674 kubelet[1470]: E1213 15:13:50.701104 1470 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624" Dec 13 15:13:50.701674 kubelet[1470]: E1213 15:13:50.701591 1470 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 15:13:50.701674 kubelet[1470]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 15:13:50.701674 kubelet[1470]: rm /hostbin/cilium-mount Dec 13 15:13:50.701674 kubelet[1470]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mgm96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-8nfpg_kube-system(e2aa5dcc-9401-4b87-9842-40a5078629d4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 15:13:50.701674 kubelet[1470]: E1213 15:13:50.701646 1470 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-8nfpg" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" Dec 13 15:13:51.129274 kubelet[1470]: I1213 15:13:51.129217 1470 setters.go:568] "Node became not ready" node="10.244.95.150" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T15:13:51Z","lastTransitionTime":"2024-12-13T15:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 15:13:51.214828 kubelet[1470]: E1213 15:13:51.214734 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:51.297315 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624-rootfs.mount: Deactivated successfully. Dec 13 15:13:51.630683 kubelet[1470]: I1213 15:13:51.630627 1470 scope.go:117] "RemoveContainer" containerID="be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8" Dec 13 15:13:51.632591 kubelet[1470]: I1213 15:13:51.632540 1470 scope.go:117] "RemoveContainer" containerID="be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8" Dec 13 15:13:51.635144 env[1193]: time="2024-12-13T15:13:51.635074046Z" level=info msg="RemoveContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" Dec 13 15:13:51.636834 env[1193]: time="2024-12-13T15:13:51.636748463Z" level=info msg="RemoveContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\"" Dec 13 15:13:51.637055 env[1193]: time="2024-12-13T15:13:51.636976427Z" level=error msg="RemoveContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\" failed" error="failed to set removing state for container \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\": container is already in removing state" Dec 13 15:13:51.637867 kubelet[1470]: E1213 15:13:51.637803 1470 remote_runtime.go:385] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\": container is already in removing state" containerID="be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8" Dec 13 15:13:51.638036 kubelet[1470]: E1213 15:13:51.637973 1470 kuberuntime_container.go:858] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8": container is already in removing state; Skipping pod 
"cilium-8nfpg_kube-system(e2aa5dcc-9401-4b87-9842-40a5078629d4)" Dec 13 15:13:51.639216 kubelet[1470]: E1213 15:13:51.639153 1470 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-8nfpg_kube-system(e2aa5dcc-9401-4b87-9842-40a5078629d4)\"" pod="kube-system/cilium-8nfpg" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" Dec 13 15:13:51.643223 env[1193]: time="2024-12-13T15:13:51.643132291Z" level=info msg="RemoveContainer for \"be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8\" returns successfully" Dec 13 15:13:52.215284 kubelet[1470]: E1213 15:13:52.215220 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:52.638797 env[1193]: time="2024-12-13T15:13:52.638693376Z" level=info msg="StopPodSandbox for \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\"" Dec 13 15:13:52.639717 env[1193]: time="2024-12-13T15:13:52.639651993Z" level=info msg="Container to stop \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 15:13:52.644406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e-shm.mount: Deactivated successfully. Dec 13 15:13:52.654344 systemd[1]: cri-containerd-d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e.scope: Deactivated successfully. 
Dec 13 15:13:52.694881 env[1193]: time="2024-12-13T15:13:52.693701380Z" level=info msg="shim disconnected" id=d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e Dec 13 15:13:52.694881 env[1193]: time="2024-12-13T15:13:52.693840967Z" level=warning msg="cleaning up after shim disconnected" id=d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e namespace=k8s.io Dec 13 15:13:52.694881 env[1193]: time="2024-12-13T15:13:52.693855535Z" level=info msg="cleaning up dead shim" Dec 13 15:13:52.693907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e-rootfs.mount: Deactivated successfully. Dec 13 15:13:52.703352 env[1193]: time="2024-12-13T15:13:52.703311631Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3174 runtime=io.containerd.runc.v2\n" Dec 13 15:13:52.703689 env[1193]: time="2024-12-13T15:13:52.703627094Z" level=info msg="TearDown network for sandbox \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" successfully" Dec 13 15:13:52.703746 env[1193]: time="2024-12-13T15:13:52.703686806Z" level=info msg="StopPodSandbox for \"d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e\" returns successfully" Dec 13 15:13:52.734251 kubelet[1470]: W1213 15:13:52.734172 1470 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice/cri-containerd-be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8.scope WatchSource:0}: container "be6cb3f152e801a7b1186ab9a4233c8d1e500275fcf6209dd696b8cadcdcd0e8" in namespace "k8s.io": not found Dec 13 15:13:52.795794 kubelet[1470]: I1213 15:13:52.795618 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-xtables-lock\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.795794 kubelet[1470]: I1213 15:13:52.795635 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.795794 kubelet[1470]: I1213 15:13:52.795714 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgm96\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-kube-api-access-mgm96\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796149 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-etc-cni-netd\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796199 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-bpf-maps\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796256 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-lib-modules\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: 
\"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796287 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-run\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796326 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-config-path\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796362 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-hubble-tls\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796392 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-cgroup\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796424 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-hostproc\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796458 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-net\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796490 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cni-path\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796528 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-ipsec-secrets\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796565 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-clustermesh-secrets\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796597 1470 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-kernel\") pod \"e2aa5dcc-9401-4b87-9842-40a5078629d4\" (UID: \"e2aa5dcc-9401-4b87-9842-40a5078629d4\") " Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796654 1470 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-xtables-lock\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.796710 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.798793 kubelet[1470]: I1213 15:13:52.798385 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798421 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798439 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798454 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798468 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798490 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798509 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.799809 kubelet[1470]: I1213 15:13:52.798532 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.804313 systemd[1]: var-lib-kubelet-pods-e2aa5dcc\x2d9401\x2d4b87\x2d9842\x2d40a5078629d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmgm96.mount: Deactivated successfully. Dec 13 15:13:52.806013 kubelet[1470]: I1213 15:13:52.805986 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-kube-api-access-mgm96" (OuterVolumeSpecName: "kube-api-access-mgm96") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "kube-api-access-mgm96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:13:52.806095 kubelet[1470]: I1213 15:13:52.806033 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 15:13:52.808267 systemd[1]: var-lib-kubelet-pods-e2aa5dcc\x2d9401\x2d4b87\x2d9842\x2d40a5078629d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 15:13:52.809573 kubelet[1470]: I1213 15:13:52.809550 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 15:13:52.811698 systemd[1]: var-lib-kubelet-pods-e2aa5dcc\x2d9401\x2d4b87\x2d9842\x2d40a5078629d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 15:13:52.816865 kubelet[1470]: I1213 15:13:52.816839 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 15:13:52.816955 kubelet[1470]: I1213 15:13:52.816934 1470 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e2aa5dcc-9401-4b87-9842-40a5078629d4" (UID: "e2aa5dcc-9401-4b87-9842-40a5078629d4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896856 1470 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-hubble-tls\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896904 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-cgroup\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896916 1470 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-hostproc\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896929 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-ipsec-secrets\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 
15:13:52.897119 kubelet[1470]: I1213 15:13:52.896941 1470 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-net\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896951 1470 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cni-path\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896963 1470 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-host-proc-sys-kernel\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896973 1470 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e2aa5dcc-9401-4b87-9842-40a5078629d4-clustermesh-secrets\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896984 1470 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mgm96\" (UniqueName: \"kubernetes.io/projected/e2aa5dcc-9401-4b87-9842-40a5078629d4-kube-api-access-mgm96\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.896994 1470 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-bpf-maps\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.897004 1470 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-etc-cni-netd\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.897014 1470 
reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-run\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.897024 1470 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e2aa5dcc-9401-4b87-9842-40a5078629d4-cilium-config-path\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:52.897119 kubelet[1470]: I1213 15:13:52.897034 1470 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2aa5dcc-9401-4b87-9842-40a5078629d4-lib-modules\") on node \"10.244.95.150\" DevicePath \"\"" Dec 13 15:13:53.216538 kubelet[1470]: E1213 15:13:53.216217 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:53.644524 systemd[1]: var-lib-kubelet-pods-e2aa5dcc\x2d9401\x2d4b87\x2d9842\x2d40a5078629d4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 15:13:53.648848 kubelet[1470]: I1213 15:13:53.648659 1470 scope.go:117] "RemoveContainer" containerID="f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624" Dec 13 15:13:53.651787 env[1193]: time="2024-12-13T15:13:53.651711716Z" level=info msg="RemoveContainer for \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\"" Dec 13 15:13:53.653307 systemd[1]: Removed slice kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice. 
Dec 13 15:13:53.654424 env[1193]: time="2024-12-13T15:13:53.653786552Z" level=info msg="RemoveContainer for \"f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624\" returns successfully" Dec 13 15:13:53.722171 kubelet[1470]: I1213 15:13:53.722126 1470 topology_manager.go:215] "Topology Admit Handler" podUID="ddadfe90-0c96-44fc-9151-7bceb8480504" podNamespace="kube-system" podName="cilium-d8f8q" Dec 13 15:13:53.722435 kubelet[1470]: E1213 15:13:53.722213 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" containerName="mount-cgroup" Dec 13 15:13:53.722435 kubelet[1470]: E1213 15:13:53.722229 1470 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" containerName="mount-cgroup" Dec 13 15:13:53.722435 kubelet[1470]: I1213 15:13:53.722273 1470 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" containerName="mount-cgroup" Dec 13 15:13:53.722435 kubelet[1470]: I1213 15:13:53.722280 1470 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" containerName="mount-cgroup" Dec 13 15:13:53.732061 systemd[1]: Created slice kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice. 
Dec 13 15:13:53.802593 kubelet[1470]: I1213 15:13:53.802515 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-cni-path\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.802916 kubelet[1470]: I1213 15:13:53.802814 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ddadfe90-0c96-44fc-9151-7bceb8480504-cilium-config-path\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803093 kubelet[1470]: I1213 15:13:53.802918 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4g2w\" (UniqueName: \"kubernetes.io/projected/ddadfe90-0c96-44fc-9151-7bceb8480504-kube-api-access-j4g2w\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803093 kubelet[1470]: I1213 15:13:53.803013 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-bpf-maps\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803246 kubelet[1470]: I1213 15:13:53.803102 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-xtables-lock\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803246 kubelet[1470]: I1213 15:13:53.803178 1470 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ddadfe90-0c96-44fc-9151-7bceb8480504-hubble-tls\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803408 kubelet[1470]: I1213 15:13:53.803250 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-etc-cni-netd\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803408 kubelet[1470]: I1213 15:13:53.803343 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ddadfe90-0c96-44fc-9151-7bceb8480504-clustermesh-secrets\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803574 kubelet[1470]: I1213 15:13:53.803416 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-cilium-cgroup\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803574 kubelet[1470]: I1213 15:13:53.803493 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-host-proc-sys-net\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803574 kubelet[1470]: I1213 15:13:53.803567 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-host-proc-sys-kernel\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803817 kubelet[1470]: I1213 15:13:53.803640 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ddadfe90-0c96-44fc-9151-7bceb8480504-cilium-ipsec-secrets\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.803817 kubelet[1470]: I1213 15:13:53.803713 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-cilium-run\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.804456 kubelet[1470]: I1213 15:13:53.804401 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-hostproc\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:53.804793 kubelet[1470]: I1213 15:13:53.804730 1470 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddadfe90-0c96-44fc-9151-7bceb8480504-lib-modules\") pod \"cilium-d8f8q\" (UID: \"ddadfe90-0c96-44fc-9151-7bceb8480504\") " pod="kube-system/cilium-d8f8q" Dec 13 15:13:54.041909 env[1193]: time="2024-12-13T15:13:54.038951433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8f8q,Uid:ddadfe90-0c96-44fc-9151-7bceb8480504,Namespace:kube-system,Attempt:0,}" Dec 13 15:13:54.056717 env[1193]: time="2024-12-13T15:13:54.056627531Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 15:13:54.056717 env[1193]: time="2024-12-13T15:13:54.056674600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 15:13:54.057045 env[1193]: time="2024-12-13T15:13:54.056992704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 15:13:54.057324 env[1193]: time="2024-12-13T15:13:54.057268285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66 pid=3202 runtime=io.containerd.runc.v2 Dec 13 15:13:54.070995 systemd[1]: Started cri-containerd-c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66.scope. Dec 13 15:13:54.114946 env[1193]: time="2024-12-13T15:13:54.114900124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d8f8q,Uid:ddadfe90-0c96-44fc-9151-7bceb8480504,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\"" Dec 13 15:13:54.117876 env[1193]: time="2024-12-13T15:13:54.117813209Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:13:54.118130 env[1193]: time="2024-12-13T15:13:54.118098630Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 15:13:54.118411 env[1193]: time="2024-12-13T15:13:54.118382079Z" level=info msg="CreateContainer within sandbox 
\"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 15:13:54.119126 env[1193]: time="2024-12-13T15:13:54.118800483Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:13:54.119333 env[1193]: time="2024-12-13T15:13:54.119301494Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 15:13:54.120524 env[1193]: time="2024-12-13T15:13:54.120487469Z" level=info msg="CreateContainer within sandbox \"0852b390f0bf64e4e26a9f4ef3e25d6b22a6dbeaabe03e48372a55d0c7570117\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 15:13:54.145584 env[1193]: time="2024-12-13T15:13:54.145534979Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b\"" Dec 13 15:13:54.146349 env[1193]: time="2024-12-13T15:13:54.146303702Z" level=info msg="StartContainer for \"b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b\"" Dec 13 15:13:54.152605 env[1193]: time="2024-12-13T15:13:54.152550367Z" level=info msg="CreateContainer within sandbox \"0852b390f0bf64e4e26a9f4ef3e25d6b22a6dbeaabe03e48372a55d0c7570117\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a82399684e5a2635485d7f66ddacf5499296c433ad56c313b7bf83273a6ee47a\"" Dec 13 15:13:54.153244 env[1193]: time="2024-12-13T15:13:54.153219631Z" level=info msg="StartContainer for \"a82399684e5a2635485d7f66ddacf5499296c433ad56c313b7bf83273a6ee47a\"" 
Dec 13 15:13:54.180975 systemd[1]: Started cri-containerd-b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b.scope. Dec 13 15:13:54.213508 systemd[1]: Started cri-containerd-a82399684e5a2635485d7f66ddacf5499296c433ad56c313b7bf83273a6ee47a.scope. Dec 13 15:13:54.217040 kubelet[1470]: E1213 15:13:54.216984 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:54.227084 env[1193]: time="2024-12-13T15:13:54.227042243Z" level=info msg="StartContainer for \"b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b\" returns successfully" Dec 13 15:13:54.248498 systemd[1]: cri-containerd-b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b.scope: Deactivated successfully. Dec 13 15:13:54.254067 env[1193]: time="2024-12-13T15:13:54.254011430Z" level=info msg="StartContainer for \"a82399684e5a2635485d7f66ddacf5499296c433ad56c313b7bf83273a6ee47a\" returns successfully" Dec 13 15:13:54.306986 kubelet[1470]: I1213 15:13:54.302970 1470 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e2aa5dcc-9401-4b87-9842-40a5078629d4" path="/var/lib/kubelet/pods/e2aa5dcc-9401-4b87-9842-40a5078629d4/volumes" Dec 13 15:13:54.308837 env[1193]: time="2024-12-13T15:13:54.308787495Z" level=info msg="shim disconnected" id=b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b Dec 13 15:13:54.308837 env[1193]: time="2024-12-13T15:13:54.308835367Z" level=warning msg="cleaning up after shim disconnected" id=b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b namespace=k8s.io Dec 13 15:13:54.309008 env[1193]: time="2024-12-13T15:13:54.308846232Z" level=info msg="cleaning up dead shim" Dec 13 15:13:54.317181 env[1193]: time="2024-12-13T15:13:54.317142001Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" Dec 13 15:13:54.659435 
env[1193]: time="2024-12-13T15:13:54.659285145Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 15:13:54.679742 env[1193]: time="2024-12-13T15:13:54.673678263Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b\"" Dec 13 15:13:54.679742 env[1193]: time="2024-12-13T15:13:54.678824638Z" level=info msg="StartContainer for \"6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b\"" Dec 13 15:13:54.705192 systemd[1]: Started cri-containerd-6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b.scope. Dec 13 15:13:54.711187 kubelet[1470]: I1213 15:13:54.711042 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-kp559" podStartSLOduration=1.164596948 podStartE2EDuration="5.710971825s" podCreationTimestamp="2024-12-13 15:13:49 +0000 UTC" firstStartedPulling="2024-12-13 15:13:49.572335452 +0000 UTC m=+80.148794670" lastFinishedPulling="2024-12-13 15:13:54.118710328 +0000 UTC m=+84.695169547" observedRunningTime="2024-12-13 15:13:54.680712186 +0000 UTC m=+85.257171423" watchObservedRunningTime="2024-12-13 15:13:54.710971825 +0000 UTC m=+85.287431062" Dec 13 15:13:54.739941 env[1193]: time="2024-12-13T15:13:54.739901926Z" level=info msg="StartContainer for \"6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b\" returns successfully" Dec 13 15:13:54.750335 systemd[1]: cri-containerd-6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b.scope: Deactivated successfully. 
Dec 13 15:13:54.769722 env[1193]: time="2024-12-13T15:13:54.769678735Z" level=info msg="shim disconnected" id=6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b Dec 13 15:13:54.770068 env[1193]: time="2024-12-13T15:13:54.770041697Z" level=warning msg="cleaning up after shim disconnected" id=6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b namespace=k8s.io Dec 13 15:13:54.770165 env[1193]: time="2024-12-13T15:13:54.770147110Z" level=info msg="cleaning up dead shim" Dec 13 15:13:54.778409 env[1193]: time="2024-12-13T15:13:54.778317869Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3387 runtime=io.containerd.runc.v2\n" Dec 13 15:13:55.217435 kubelet[1470]: E1213 15:13:55.217314 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:55.241122 kubelet[1470]: E1213 15:13:55.241020 1470 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 15:13:55.645050 systemd[1]: run-containerd-runc-k8s.io-6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b-runc.rpZOeS.mount: Deactivated successfully. Dec 13 15:13:55.645236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b-rootfs.mount: Deactivated successfully. Dec 13 15:13:55.669372 env[1193]: time="2024-12-13T15:13:55.669305659Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 15:13:55.682514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422251040.mount: Deactivated successfully. 
Dec 13 15:13:55.688005 env[1193]: time="2024-12-13T15:13:55.687598381Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c\"" Dec 13 15:13:55.688785 env[1193]: time="2024-12-13T15:13:55.688670762Z" level=info msg="StartContainer for \"0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c\"" Dec 13 15:13:55.708498 systemd[1]: Started cri-containerd-0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c.scope. Dec 13 15:13:55.748524 env[1193]: time="2024-12-13T15:13:55.747839291Z" level=info msg="StartContainer for \"0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c\" returns successfully" Dec 13 15:13:55.752507 systemd[1]: cri-containerd-0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c.scope: Deactivated successfully. Dec 13 15:13:55.781404 env[1193]: time="2024-12-13T15:13:55.781346792Z" level=info msg="shim disconnected" id=0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c Dec 13 15:13:55.781750 env[1193]: time="2024-12-13T15:13:55.781727301Z" level=warning msg="cleaning up after shim disconnected" id=0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c namespace=k8s.io Dec 13 15:13:55.781866 env[1193]: time="2024-12-13T15:13:55.781849007Z" level=info msg="cleaning up dead shim" Dec 13 15:13:55.791564 env[1193]: time="2024-12-13T15:13:55.791523568Z" level=warning msg="cleanup warnings time=\"2024-12-13T15:13:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3444 runtime=io.containerd.runc.v2\n" Dec 13 15:13:55.842883 kubelet[1470]: W1213 15:13:55.842784 1470 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice/cri-containerd-f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624.scope WatchSource:0}: container "f2a64e714a868251e85c3f06c0670707d21a061bb197a932b30f4105e2f31624" in namespace "k8s.io": not found Dec 13 15:13:55.849693 kubelet[1470]: E1213 15:13:55.849636 1470 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice/cri-containerd-d0e4a5956ae5296d1757a84da182c63365cb9360aa4a9630a56bf0014d33389e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2aa5dcc_9401_4b87_9842_40a5078629d4.slice\": RecentStats: unable to find data in memory cache]" Dec 13 15:13:56.218619 kubelet[1470]: E1213 15:13:56.218556 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:56.645256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c-rootfs.mount: Deactivated successfully. Dec 13 15:13:56.673166 env[1193]: time="2024-12-13T15:13:56.673084554Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 15:13:56.688499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373643531.mount: Deactivated successfully. Dec 13 15:13:56.694583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466075437.mount: Deactivated successfully. 
Dec 13 15:13:56.696795 env[1193]: time="2024-12-13T15:13:56.696736630Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67\"" Dec 13 15:13:56.697342 env[1193]: time="2024-12-13T15:13:56.697309374Z" level=info msg="StartContainer for \"63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67\"" Dec 13 15:13:56.713374 systemd[1]: Started cri-containerd-63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67.scope. Dec 13 15:13:56.750225 systemd[1]: cri-containerd-63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67.scope: Deactivated successfully. Dec 13 15:13:56.751667 env[1193]: time="2024-12-13T15:13:56.751500478Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice/cri-containerd-63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67.scope/memory.events\": no such file or directory" Dec 13 15:13:56.752971 env[1193]: time="2024-12-13T15:13:56.752936107Z" level=info msg="StartContainer for \"63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67\" returns successfully" Dec 13 15:13:56.774119 env[1193]: time="2024-12-13T15:13:56.774074159Z" level=info msg="shim disconnected" id=63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67 Dec 13 15:13:56.774119 env[1193]: time="2024-12-13T15:13:56.774121678Z" level=warning msg="cleaning up after shim disconnected" id=63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67 namespace=k8s.io Dec 13 15:13:56.774332 env[1193]: time="2024-12-13T15:13:56.774131460Z" level=info msg="cleaning up dead shim" Dec 13 15:13:56.782595 env[1193]: time="2024-12-13T15:13:56.782557865Z" level=warning 
msg="cleanup warnings time=\"2024-12-13T15:13:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3502 runtime=io.containerd.runc.v2\n" Dec 13 15:13:57.220538 kubelet[1470]: E1213 15:13:57.220423 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:57.681267 env[1193]: time="2024-12-13T15:13:57.681214066Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 15:13:57.694020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648177848.mount: Deactivated successfully. Dec 13 15:13:57.700267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1058056097.mount: Deactivated successfully. Dec 13 15:13:57.704890 env[1193]: time="2024-12-13T15:13:57.704820087Z" level=info msg="CreateContainer within sandbox \"c4a981236a051323fbadbcd30056320ed91a90222b73ffd7e831e4abf406bf66\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04\"" Dec 13 15:13:57.705609 env[1193]: time="2024-12-13T15:13:57.705578319Z" level=info msg="StartContainer for \"51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04\"" Dec 13 15:13:57.723196 systemd[1]: Started cri-containerd-51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04.scope. 
Dec 13 15:13:57.768062 env[1193]: time="2024-12-13T15:13:57.768019660Z" level=info msg="StartContainer for \"51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04\" returns successfully" Dec 13 15:13:58.194792 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 15:13:58.221216 kubelet[1470]: E1213 15:13:58.221138 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:13:58.715917 kubelet[1470]: I1213 15:13:58.715848 1470 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d8f8q" podStartSLOduration=5.715718152 podStartE2EDuration="5.715718152s" podCreationTimestamp="2024-12-13 15:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 15:13:58.715139635 +0000 UTC m=+89.291598963" watchObservedRunningTime="2024-12-13 15:13:58.715718152 +0000 UTC m=+89.292177473" Dec 13 15:13:58.962916 kubelet[1470]: W1213 15:13:58.962279 1470 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice/cri-containerd-b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b.scope WatchSource:0}: task b26c6cba1a80dbb3446453dd9037398bf7d42749a0f77475633cd6ecd00f673b not found: not found Dec 13 15:13:59.221666 kubelet[1470]: E1213 15:13:59.221579 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:00.222457 kubelet[1470]: E1213 15:14:00.222418 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:01.174688 systemd[1]: run-containerd-runc-k8s.io-51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04-runc.0RRybV.mount: Deactivated successfully. 
Dec 13 15:14:01.223281 kubelet[1470]: E1213 15:14:01.223208 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:01.309160 systemd-networkd[1032]: lxc_health: Link UP Dec 13 15:14:01.338839 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 15:14:01.338569 systemd-networkd[1032]: lxc_health: Gained carrier Dec 13 15:14:02.074876 kubelet[1470]: W1213 15:14:02.074784 1470 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice/cri-containerd-6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b.scope WatchSource:0}: task 6fa97ca429d16ddd5c0eb08c0447078acf8411240d81789d2497c7c44545490b not found: not found Dec 13 15:14:02.224441 kubelet[1470]: E1213 15:14:02.224384 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:03.225191 kubelet[1470]: E1213 15:14:03.225125 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:03.405336 systemd-networkd[1032]: lxc_health: Gained IPv6LL Dec 13 15:14:03.518957 systemd[1]: run-containerd-runc-k8s.io-51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04-runc.O3D2ff.mount: Deactivated successfully. 
Dec 13 15:14:04.226667 kubelet[1470]: E1213 15:14:04.226554 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:05.185196 kubelet[1470]: W1213 15:14:05.185130 1470 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice/cri-containerd-0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c.scope WatchSource:0}: task 0bfdf89a3a3ab498fdbc7adff4700e94c52320e3997151bd1c0fa6714ab6785c not found: not found Dec 13 15:14:05.226929 kubelet[1470]: E1213 15:14:05.226900 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:05.865959 systemd[1]: run-containerd-runc-k8s.io-51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04-runc.WTaa5o.mount: Deactivated successfully. Dec 13 15:14:06.228307 kubelet[1470]: E1213 15:14:06.228223 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:07.229131 kubelet[1470]: E1213 15:14:07.228990 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:08.133601 systemd[1]: run-containerd-runc-k8s.io-51ccfcfc166748f863c3ba1eaea93042b04afc0ac11a355f5ed4ed98b5ff2c04-runc.TSFBN8.mount: Deactivated successfully. 
Dec 13 15:14:08.229316 kubelet[1470]: E1213 15:14:08.229233 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:08.296451 kubelet[1470]: W1213 15:14:08.296298 1470 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddadfe90_0c96_44fc_9151_7bceb8480504.slice/cri-containerd-63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67.scope WatchSource:0}: task 63312686d198899330044edcad1c33940e2676a972232d43f8dfac88b1949b67 not found: not found Dec 13 15:14:09.229555 kubelet[1470]: E1213 15:14:09.229492 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:10.132866 kubelet[1470]: E1213 15:14:10.132660 1470 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:10.230920 kubelet[1470]: E1213 15:14:10.230818 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 15:14:11.231333 kubelet[1470]: E1213 15:14:11.231266 1470 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"