Dec 13 14:23:34.277198 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:23:34.277237 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:23:34.277248 kernel: BIOS-provided physical RAM map: Dec 13 14:23:34.277255 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:23:34.277262 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:23:34.277269 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:23:34.277278 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 14:23:34.277286 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 14:23:34.277294 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:23:34.277302 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 14:23:34.277309 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:23:34.277316 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:23:34.277323 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 14:23:34.277331 kernel: NX (Execute Disable) protection: active Dec 13 14:23:34.277342 kernel: SMBIOS 2.8 present. 
Dec 13 14:23:34.277350 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 14:23:34.277358 kernel: Hypervisor detected: KVM Dec 13 14:23:34.277365 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:23:34.277373 kernel: kvm-clock: cpu 0, msr 3419a001, primary cpu clock Dec 13 14:23:34.277381 kernel: kvm-clock: using sched offset of 2904090897 cycles Dec 13 14:23:34.277389 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:23:34.277400 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:23:34.277409 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:23:34.277419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:23:34.277427 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 14:23:34.277435 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:23:34.277444 kernel: Using GB pages for direct mapping Dec 13 14:23:34.277452 kernel: ACPI: Early table checksum verification disabled Dec 13 14:23:34.277460 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 14:23:34.277468 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277476 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277484 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277494 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 14:23:34.277502 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277510 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277518 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:23:34.277526 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 
13 14:23:34.277534 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 14:23:34.277542 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 14:23:34.277550 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 14:23:34.277563 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 14:23:34.277571 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 14:23:34.277580 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 14:23:34.277589 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 14:23:34.277597 kernel: No NUMA configuration found Dec 13 14:23:34.277606 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 14:23:34.277616 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 14:23:34.277624 kernel: Zone ranges: Dec 13 14:23:34.277633 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:23:34.277642 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 14:23:34.277659 kernel: Normal empty Dec 13 14:23:34.277667 kernel: Movable zone start for each node Dec 13 14:23:34.277676 kernel: Early memory node ranges Dec 13 14:23:34.277684 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:23:34.277693 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 14:23:34.277703 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 14:23:34.277715 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:23:34.277723 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:23:34.277732 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 14:23:34.277741 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:23:34.277749 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:23:34.277758 kernel: IOAPIC[0]: apic_id 0, version 17, address 
0xfec00000, GSI 0-23 Dec 13 14:23:34.277766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:23:34.277775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:23:34.277784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:23:34.277794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:23:34.277802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:23:34.277811 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:23:34.277822 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:23:34.277830 kernel: TSC deadline timer available Dec 13 14:23:34.277839 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:23:34.277847 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:23:34.277856 kernel: kvm-guest: setup PV sched yield Dec 13 14:23:34.277864 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 14:23:34.277874 kernel: Booting paravirtualized kernel on KVM Dec 13 14:23:34.277883 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:23:34.277892 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:23:34.277901 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Dec 13 14:23:34.277909 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:23:34.277917 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:23:34.277926 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:23:34.277934 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 14:23:34.277943 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:23:34.277953 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:23:34.277961 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Dec 13 14:23:34.277970 kernel: Policy zone: DMA32 Dec 13 14:23:34.277980 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:23:34.278015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:23:34.278036 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:23:34.278049 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:23:34.278058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:23:34.278070 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved) Dec 13 14:23:34.278079 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:23:34.278088 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:23:34.278096 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:23:34.278105 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:23:34.278114 kernel: rcu: RCU event tracing is enabled. Dec 13 14:23:34.278123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:23:34.278132 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:23:34.278152 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:23:34.278164 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:23:34.278173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:23:34.278182 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:23:34.278191 kernel: random: crng init done Dec 13 14:23:34.278200 kernel: Console: colour VGA+ 80x25 Dec 13 14:23:34.278208 kernel: printk: console [ttyS0] enabled Dec 13 14:23:34.278217 kernel: ACPI: Core revision 20210730 Dec 13 14:23:34.278226 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:23:34.278235 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:23:34.278247 kernel: x2apic enabled Dec 13 14:23:34.278255 kernel: Switched APIC routing to physical x2apic. Dec 13 14:23:34.278264 kernel: kvm-guest: setup PV IPIs Dec 13 14:23:34.278273 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:23:34.278283 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:23:34.278298 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 14:23:34.278307 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:23:34.278316 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:23:34.278326 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:23:34.278344 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:23:34.278354 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:23:34.278365 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:23:34.278374 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:23:34.278384 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:23:34.278394 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:23:34.278403 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:23:34.278413 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:23:34.278423 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:23:34.278435 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:23:34.278445 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:23:34.278454 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:23:34.278463 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:23:34.278473 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:23:34.278483 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:23:34.278493 kernel: LSM: Security Framework initializing Dec 13 14:23:34.278504 kernel: SELinux: Initializing. 
Dec 13 14:23:34.278514 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:23:34.278524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:23:34.278534 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:23:34.278543 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:23:34.278553 kernel: ... version: 0 Dec 13 14:23:34.278563 kernel: ... bit width: 48 Dec 13 14:23:34.278572 kernel: ... generic registers: 6 Dec 13 14:23:34.278582 kernel: ... value mask: 0000ffffffffffff Dec 13 14:23:34.278593 kernel: ... max period: 00007fffffffffff Dec 13 14:23:34.278602 kernel: ... fixed-purpose events: 0 Dec 13 14:23:34.278612 kernel: ... event mask: 000000000000003f Dec 13 14:23:34.278621 kernel: signal: max sigframe size: 1776 Dec 13 14:23:34.278630 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:23:34.278639 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:23:34.278658 kernel: x86: Booting SMP configuration: Dec 13 14:23:34.278667 kernel: .... 
node #0, CPUs: #1 Dec 13 14:23:34.278676 kernel: kvm-clock: cpu 1, msr 3419a041, secondary cpu clock Dec 13 14:23:34.278688 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:23:34.278697 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 14:23:34.278707 kernel: #2 Dec 13 14:23:34.278716 kernel: kvm-clock: cpu 2, msr 3419a081, secondary cpu clock Dec 13 14:23:34.278725 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:23:34.278734 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 14:23:34.278743 kernel: #3 Dec 13 14:23:34.278753 kernel: kvm-clock: cpu 3, msr 3419a0c1, secondary cpu clock Dec 13 14:23:34.278762 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:23:34.278775 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 14:23:34.278786 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:23:34.278796 kernel: smpboot: Max logical packages: 1 Dec 13 14:23:34.278805 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:23:34.278814 kernel: devtmpfs: initialized Dec 13 14:23:34.278824 kernel: x86/mm: Memory block size: 128MB Dec 13 14:23:34.278833 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:23:34.278843 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:23:34.278852 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:23:34.278862 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:23:34.278873 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:23:34.278883 kernel: audit: type=2000 audit(1734099813.387:1): state=initialized audit_enabled=0 res=1 Dec 13 14:23:34.278892 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:23:34.278901 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:23:34.278911 kernel: cpuidle: using governor menu Dec 13 14:23:34.278920 kernel: ACPI: bus type PCI registered Dec 13 14:23:34.278929 kernel: acpiphp: ACPI Hot 
Plug PCI Controller Driver version: 0.5 Dec 13 14:23:34.278938 kernel: dca service started, version 1.12.1 Dec 13 14:23:34.278952 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:23:34.278965 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:23:34.278974 kernel: PCI: Using configuration type 1 for base access Dec 13 14:23:34.278984 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 14:23:34.278993 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:23:34.279003 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:23:34.279012 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:23:34.279021 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:23:34.279031 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:23:34.279040 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:23:34.279053 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:23:34.279062 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:23:34.279072 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:23:34.279081 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:23:34.279090 kernel: ACPI: Interpreter enabled Dec 13 14:23:34.279100 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:23:34.279109 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:23:34.279119 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:23:34.279128 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:23:34.279163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:23:34.280245 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:23:34.280378 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:23:34.280517 kernel: acpi 
PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:23:34.280532 kernel: PCI host bridge to bus 0000:00 Dec 13 14:23:34.280673 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:23:34.280784 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:23:34.280904 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:23:34.281017 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 14:23:34.281177 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:23:34.281317 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 14:23:34.281465 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:23:34.281621 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:23:34.281805 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:23:34.281965 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 14:23:34.282070 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 14:23:34.282188 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 14:23:34.282292 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:23:34.282427 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:23:34.282541 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 14:23:34.282664 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 14:23:34.282796 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 14:23:34.282908 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:23:34.283010 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:23:34.283110 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 14:23:34.283230 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] Dec 13 14:23:34.283342 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:23:34.283447 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 14:23:34.283547 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 14:23:34.283657 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 14:23:34.283759 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 14:23:34.283867 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:23:34.283968 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:23:34.284084 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:23:34.284214 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 14:23:34.284317 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 14:23:34.284426 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:23:34.284526 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 14:23:34.284540 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:23:34.284550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:23:34.284564 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:23:34.284577 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:23:34.284587 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:23:34.284596 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:23:34.284606 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:23:34.284615 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:23:34.284624 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:23:34.284634 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:23:34.284643 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 
Dec 13 14:23:34.284661 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:23:34.284673 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:23:34.284682 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:23:34.284691 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:23:34.284700 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 14:23:34.284709 kernel: iommu: Default domain type: Translated Dec 13 14:23:34.284719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:23:34.284934 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:23:34.285034 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:23:34.285137 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:23:34.285164 kernel: vgaarb: loaded Dec 13 14:23:34.285174 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:23:34.285184 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:23:34.285193 kernel: PTP clock support registered Dec 13 14:23:34.285202 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:23:34.285212 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:23:34.285221 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:23:34.285231 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 14:23:34.285243 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:23:34.285253 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:23:34.285262 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:23:34.285272 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:23:34.285281 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:23:34.285291 kernel: pnp: PnP ACPI init Dec 13 14:23:34.285402 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:23:34.285417 kernel: pnp: PnP ACPI: found 6 devices Dec 13 14:23:34.285430 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:23:34.285440 kernel: NET: Registered PF_INET protocol family Dec 13 14:23:34.285449 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:23:34.285459 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:23:34.285468 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:23:34.285478 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:23:34.285487 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:23:34.285497 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:23:34.285506 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:23:34.285517 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 
14:23:34.285527 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:23:34.285536 kernel: NET: Registered PF_XDP protocol family Dec 13 14:23:34.285626 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:23:34.285754 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:23:34.285842 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:23:34.285930 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:23:34.286019 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:23:34.286110 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 14:23:34.286123 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:23:34.286132 kernel: Initialise system trusted keyrings Dec 13 14:23:34.286156 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:23:34.286166 kernel: Key type asymmetric registered Dec 13 14:23:34.286175 kernel: Asymmetric key parser 'x509' registered Dec 13 14:23:34.286184 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:23:34.286193 kernel: io scheduler mq-deadline registered Dec 13 14:23:34.286202 kernel: io scheduler kyber registered Dec 13 14:23:34.286215 kernel: io scheduler bfq registered Dec 13 14:23:34.286225 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:23:34.286234 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:23:34.286244 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:23:34.286254 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:23:34.286263 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:23:34.286273 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:23:34.286283 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:23:34.286292 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:23:34.286301 
kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:23:34.286415 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:23:34.286430 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:23:34.286519 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:23:34.286612 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:23:33 UTC (1734099813) Dec 13 14:23:34.286714 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:23:34.286728 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:23:34.286737 kernel: Segment Routing with IPv6 Dec 13 14:23:34.286750 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:23:34.286760 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:23:34.286769 kernel: Key type dns_resolver registered Dec 13 14:23:34.286778 kernel: IPI shorthand broadcast: enabled Dec 13 14:23:34.286787 kernel: sched_clock: Marking stable (475126001, 123868520)->(670352400, -71357879) Dec 13 14:23:34.286796 kernel: registered taskstats version 1 Dec 13 14:23:34.286806 kernel: Loading compiled-in X.509 certificates Dec 13 14:23:34.286815 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:23:34.286824 kernel: Key type .fscrypt registered Dec 13 14:23:34.286836 kernel: Key type fscrypt-provisioning registered Dec 13 14:23:34.286845 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:23:34.286855 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:23:34.286864 kernel: ima: No architecture policies found Dec 13 14:23:34.286873 kernel: clk: Disabling unused clocks Dec 13 14:23:34.286882 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:23:34.286892 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:23:34.286901 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:23:34.286911 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:23:34.286922 kernel: Run /init as init process Dec 13 14:23:34.286931 kernel: with arguments: Dec 13 14:23:34.286940 kernel: /init Dec 13 14:23:34.286949 kernel: with environment: Dec 13 14:23:34.286958 kernel: HOME=/ Dec 13 14:23:34.286967 kernel: TERM=linux Dec 13 14:23:34.286976 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:23:34.286993 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:23:34.287007 systemd[1]: Detected virtualization kvm. Dec 13 14:23:34.287017 systemd[1]: Detected architecture x86-64. Dec 13 14:23:34.287027 systemd[1]: Running in initrd. Dec 13 14:23:34.287037 systemd[1]: No hostname configured, using default hostname. Dec 13 14:23:34.287047 systemd[1]: Hostname set to . Dec 13 14:23:34.287057 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:23:34.287067 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:23:34.287077 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:23:34.287089 systemd[1]: Reached target cryptsetup.target. Dec 13 14:23:34.287099 systemd[1]: Reached target paths.target. Dec 13 14:23:34.287118 systemd[1]: Reached target slices.target. 
Dec 13 14:23:34.287130 systemd[1]: Reached target swap.target.
Dec 13 14:23:34.287154 systemd[1]: Reached target timers.target.
Dec 13 14:23:34.287166 systemd[1]: Listening on iscsid.socket.
Dec 13 14:23:34.287179 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:23:34.287189 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:23:34.287200 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:23:34.287210 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:23:34.287221 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:23:34.287231 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:23:34.287241 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:23:34.287252 systemd[1]: Reached target sockets.target.
Dec 13 14:23:34.287264 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:23:34.287274 systemd[1]: Finished network-cleanup.service.
Dec 13 14:23:34.287285 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:23:34.287295 systemd[1]: Starting systemd-journald.service...
Dec 13 14:23:34.287306 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:23:34.287316 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:23:34.287327 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:23:34.287337 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:23:34.287351 systemd-journald[196]: Journal started
Dec 13 14:23:34.287407 systemd-journald[196]: Runtime Journal (/run/log/journal/cd84c7ebbcdd4365ae80f41938078375) is 6.0M, max 48.5M, 42.5M free.
Dec 13 14:23:34.275274 systemd-modules-load[197]: Inserted module 'overlay'
Dec 13 14:23:34.313665 systemd[1]: Started systemd-journald.service.
Dec 13 14:23:34.313697 kernel: audit: type=1130 audit(1734099814.308:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.290422 systemd-resolved[198]: Positive Trust Anchors:
Dec 13 14:23:34.318174 kernel: audit: type=1130 audit(1734099814.314:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.290436 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:23:34.323478 kernel: audit: type=1130 audit(1734099814.318:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.290464 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:23:34.327178 kernel: audit: type=1130 audit(1734099814.323:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.293003 systemd-resolved[198]: Defaulting to hostname 'linux'.
Dec 13 14:23:34.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.314611 systemd[1]: Started systemd-resolved.service.
Dec 13 14:23:34.340127 kernel: audit: type=1130 audit(1734099814.327:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.318702 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:23:34.324083 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:23:34.327719 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:23:34.340011 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:23:34.341304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:23:34.351733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:23:34.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.356197 kernel: audit: type=1130 audit(1734099814.352:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.359176 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:23:34.359345 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:23:34.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.360525 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:23:34.365465 kernel: audit: type=1130 audit(1734099814.359:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.370284 dracut-cmdline[214]: dracut-dracut-053
Dec 13 14:23:34.372009 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:23:34.378758 systemd-modules-load[197]: Inserted module 'br_netfilter'
Dec 13 14:23:34.379921 kernel: Bridge firewalling registered
Dec 13 14:23:34.401184 kernel: SCSI subsystem initialized
Dec 13 14:23:34.415167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:23:34.415201 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:23:34.417386 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:23:34.420873 systemd-modules-load[197]: Inserted module 'dm_multipath'
Dec 13 14:23:34.422695 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:23:34.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.424586 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:23:34.429360 kernel: audit: type=1130 audit(1734099814.423:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.431868 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:23:34.435864 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:23:34.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.441172 kernel: audit: type=1130 audit(1734099814.436:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.451168 kernel: iscsi: registered transport (tcp)
Dec 13 14:23:34.479202 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:23:34.479282 kernel: QLogic iSCSI HBA Driver
Dec 13 14:23:34.516078 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:23:34.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.517627 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:23:34.570201 kernel: raid6: avx2x4 gen() 29332 MB/s
Dec 13 14:23:34.587183 kernel: raid6: avx2x4 xor() 6556 MB/s
Dec 13 14:23:34.604204 kernel: raid6: avx2x2 gen() 29140 MB/s
Dec 13 14:23:34.621200 kernel: raid6: avx2x2 xor() 18860 MB/s
Dec 13 14:23:34.638191 kernel: raid6: avx2x1 gen() 22091 MB/s
Dec 13 14:23:34.655198 kernel: raid6: avx2x1 xor() 13777 MB/s
Dec 13 14:23:34.672188 kernel: raid6: sse2x4 gen() 13283 MB/s
Dec 13 14:23:34.689188 kernel: raid6: sse2x4 xor() 6274 MB/s
Dec 13 14:23:34.706187 kernel: raid6: sse2x2 gen() 14191 MB/s
Dec 13 14:23:34.723196 kernel: raid6: sse2x2 xor() 9040 MB/s
Dec 13 14:23:34.740195 kernel: raid6: sse2x1 gen() 11630 MB/s
Dec 13 14:23:34.757652 kernel: raid6: sse2x1 xor() 7645 MB/s
Dec 13 14:23:34.757744 kernel: raid6: using algorithm avx2x4 gen() 29332 MB/s
Dec 13 14:23:34.757759 kernel: raid6: .... xor() 6556 MB/s, rmw enabled
Dec 13 14:23:34.758345 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:23:34.771192 kernel: xor: automatically using best checksumming function avx
Dec 13 14:23:34.867215 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:23:34.876426 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:23:34.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.878000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:23:34.878000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:23:34.878835 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:23:34.896260 systemd-udevd[399]: Using default interface naming scheme 'v252'.
Dec 13 14:23:34.902503 systemd[1]: Started systemd-udevd.service.
Dec 13 14:23:34.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.903904 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:23:34.917180 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation
Dec 13 14:23:34.948477 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:23:34.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.950049 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:23:34.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:34.991802 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:23:35.020176 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:23:35.052404 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:23:35.052429 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:23:35.052441 kernel: libata version 3.00 loaded.
Dec 13 14:23:35.052453 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:23:35.052471 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:23:35.052483 kernel: GPT:9289727 != 19775487
Dec 13 14:23:35.052494 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:23:35.052505 kernel: GPT:9289727 != 19775487
Dec 13 14:23:35.052515 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:23:35.052526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:35.056179 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 14:23:35.110112 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 14:23:35.110140 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 14:23:35.110290 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 14:23:35.110387 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446)
Dec 13 14:23:35.110399 kernel: scsi host0: ahci
Dec 13 14:23:35.110519 kernel: scsi host1: ahci
Dec 13 14:23:35.110649 kernel: scsi host2: ahci
Dec 13 14:23:35.110793 kernel: scsi host3: ahci
Dec 13 14:23:35.110914 kernel: scsi host4: ahci
Dec 13 14:23:35.111024 kernel: scsi host5: ahci
Dec 13 14:23:35.111135 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 14:23:35.111163 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 14:23:35.111175 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 14:23:35.111186 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 14:23:35.111198 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 14:23:35.111213 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 14:23:35.107582 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:23:35.132916 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:23:35.138095 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:23:35.141881 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:23:35.148510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:23:35.150741 systemd[1]: Starting disk-uuid.service...
Dec 13 14:23:35.253317 disk-uuid[533]: Primary Header is updated.
Dec 13 14:23:35.253317 disk-uuid[533]: Secondary Entries is updated.
Dec 13 14:23:35.253317 disk-uuid[533]: Secondary Header is updated.
Dec 13 14:23:35.257055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:35.423328 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:35.423423 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:35.423450 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:35.423462 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:35.425185 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 14:23:35.426182 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 14:23:35.427192 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 14:23:35.428695 kernel: ata3.00: applying bridge limits
Dec 13 14:23:35.428716 kernel: ata3.00: configured for UDMA/100
Dec 13 14:23:35.429193 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 14:23:35.465747 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 14:23:35.484247 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:23:35.484272 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 14:23:36.314197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:23:36.314578 disk-uuid[534]: The operation has completed successfully.
Dec 13 14:23:36.337536 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:23:36.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.337680 systemd[1]: Finished disk-uuid.service.
Dec 13 14:23:36.363173 systemd[1]: Starting verity-setup.service...
Dec 13 14:23:36.381169 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 14:23:36.404841 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:23:36.411800 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:23:36.414676 systemd[1]: Finished verity-setup.service.
Dec 13 14:23:36.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.543168 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:23:36.543278 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:23:36.544239 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:23:36.544938 systemd[1]: Starting ignition-setup.service...
Dec 13 14:23:36.559312 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:23:36.559341 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:23:36.559354 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:23:36.552695 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:23:36.571578 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:23:36.635710 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:23:36.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.638000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:23:36.639589 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:23:36.663891 systemd-networkd[717]: lo: Link UP
Dec 13 14:23:36.663902 systemd-networkd[717]: lo: Gained carrier
Dec 13 14:23:36.664468 systemd-networkd[717]: Enumeration completed
Dec 13 14:23:36.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.664817 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:23:36.665976 systemd-networkd[717]: eth0: Link UP
Dec 13 14:23:36.665986 systemd-networkd[717]: eth0: Gained carrier
Dec 13 14:23:36.666387 systemd[1]: Started systemd-networkd.service.
Dec 13 14:23:36.669082 systemd[1]: Reached target network.target.
Dec 13 14:23:36.674998 systemd[1]: Starting iscsiuio.service...
Dec 13 14:23:36.721186 systemd[1]: Started iscsiuio.service.
Dec 13 14:23:36.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.722337 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:23:36.723051 systemd[1]: Starting iscsid.service...
Dec 13 14:23:36.727601 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:23:36.727601 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:23:36.727601 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:23:36.727601 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:23:36.727601 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:23:36.727601 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:23:36.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.731257 systemd[1]: Started iscsid.service.
Dec 13 14:23:36.737283 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:23:36.752772 systemd[1]: Finished ignition-setup.service.
Dec 13 14:23:36.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.757670 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:23:36.760295 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:23:36.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.763826 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:23:36.766444 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:23:36.769185 systemd[1]: Reached target remote-fs.target.
Dec 13 14:23:36.772523 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:23:36.781102 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:23:36.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.858680 ignition[732]: Ignition 2.14.0
Dec 13 14:23:36.858697 ignition[732]: Stage: fetch-offline
Dec 13 14:23:36.858805 ignition[732]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:36.858818 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:36.858988 ignition[732]: parsed url from cmdline: ""
Dec 13 14:23:36.858993 ignition[732]: no config URL provided
Dec 13 14:23:36.859002 ignition[732]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:23:36.859011 ignition[732]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:23:36.859033 ignition[732]: op(1): [started] loading QEMU firmware config module
Dec 13 14:23:36.859038 ignition[732]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 14:23:36.863833 ignition[732]: op(1): [finished] loading QEMU firmware config module
Dec 13 14:23:36.865219 ignition[732]: parsing config with SHA512: 0c0c7b1fa13d8c75200c0dd85307a94da5996cb34df8514befb740a081a3810678186e856f24e623288e8c0a0f9a1d5eb8a82f18a909b82707cc46913464575c
Dec 13 14:23:36.870017 unknown[732]: fetched base config from "system"
Dec 13 14:23:36.870024 unknown[732]: fetched user config from "qemu"
Dec 13 14:23:36.870449 ignition[732]: fetch-offline: fetch-offline passed
Dec 13 14:23:36.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.872270 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:23:36.870546 ignition[732]: Ignition finished successfully
Dec 13 14:23:36.874226 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:23:36.875249 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:23:36.911128 ignition[745]: Ignition 2.14.0
Dec 13 14:23:36.911139 ignition[745]: Stage: kargs
Dec 13 14:23:36.911314 ignition[745]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:36.911324 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:36.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:36.913992 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:23:36.912078 ignition[745]: kargs: kargs passed
Dec 13 14:23:36.912116 ignition[745]: Ignition finished successfully
Dec 13 14:23:36.932253 systemd[1]: Starting ignition-disks.service...
Dec 13 14:23:37.018951 ignition[751]: Ignition 2.14.0
Dec 13 14:23:37.018967 ignition[751]: Stage: disks
Dec 13 14:23:37.019137 ignition[751]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:37.021358 systemd[1]: Finished ignition-disks.service.
Dec 13 14:23:37.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:37.019161 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:37.023367 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:23:37.020292 ignition[751]: disks: disks passed
Dec 13 14:23:37.025260 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:23:37.020345 ignition[751]: Ignition finished successfully
Dec 13 14:23:37.026313 systemd[1]: Reached target local-fs.target.
Dec 13 14:23:37.028623 systemd[1]: Reached target sysinit.target.
Dec 13 14:23:37.029129 systemd[1]: Reached target basic.target.
Dec 13 14:23:37.030812 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:23:37.044706 systemd-fsck[759]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:23:37.056569 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:23:37.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:37.059211 systemd[1]: Mounting sysroot.mount...
Dec 13 14:23:37.069228 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:23:37.069338 systemd[1]: Mounted sysroot.mount.
Dec 13 14:23:37.070600 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:23:37.073652 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:23:37.075288 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:23:37.075355 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:23:37.075387 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:23:37.079128 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:23:37.081574 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:23:37.087760 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:23:37.094140 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:23:37.098299 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:23:37.106079 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:23:37.143782 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:23:37.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:37.145759 systemd[1]: Starting ignition-mount.service...
Dec 13 14:23:37.147246 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:23:37.154314 bash[810]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:23:37.198208 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:23:37.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:37.202215 ignition[812]: INFO : Ignition 2.14.0
Dec 13 14:23:37.202215 ignition[812]: INFO : Stage: mount
Dec 13 14:23:37.204131 ignition[812]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:37.204131 ignition[812]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:37.204131 ignition[812]: INFO : mount: mount passed
Dec 13 14:23:37.204131 ignition[812]: INFO : Ignition finished successfully
Dec 13 14:23:37.221722 systemd[1]: Finished ignition-mount.service.
Dec 13 14:23:37.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:37.423179 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:23:37.434154 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (821)
Dec 13 14:23:37.434215 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:23:37.434227 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:23:37.435130 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:23:37.440428 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:23:37.443262 systemd[1]: Starting ignition-files.service...
Dec 13 14:23:37.463849 ignition[841]: INFO : Ignition 2.14.0
Dec 13 14:23:37.463849 ignition[841]: INFO : Stage: files
Dec 13 14:23:37.466022 ignition[841]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:23:37.466022 ignition[841]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:23:37.466022 ignition[841]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:23:37.470457 ignition[841]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:23:37.470457 ignition[841]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:23:37.474054 ignition[841]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:23:37.474054 ignition[841]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:23:37.474054 ignition[841]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:23:37.474054 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:23:37.472737 unknown[841]: wrote ssh authorized keys file for user: core
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:37.483653 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:23:37.830816 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 14:23:37.919106 systemd-networkd[717]: eth0: Gained IPv6LL
Dec 13 14:23:38.752937 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:23:38.752937 ignition[841]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 14:23:38.757134 ignition[841]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:23:38.759577 ignition[841]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:23:38.759577 ignition[841]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 14:23:38.759577 ignition[841]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:23:38.759577 ignition[841]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:23:38.797395 ignition[841]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:23:38.799322 ignition[841]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:23:38.799322 ignition[841]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:23:38.799322 ignition[841]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:23:38.799322 ignition[841]: INFO : files: files passed
Dec 13 14:23:38.799322 ignition[841]: INFO : Ignition finished successfully
Dec 13 14:23:38.807683 systemd[1]: Finished ignition-files.service.
Dec 13 14:23:38.813317 kernel: kauditd_printk_skb: 25 callbacks suppressed
Dec 13 14:23:38.813380 kernel: audit: type=1130 audit(1734099818.808:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:38.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:38.809388 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:23:38.813702 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:23:38.815235 systemd[1]: Starting ignition-quench.service...
Dec 13 14:23:38.818901 initrd-setup-root-after-ignition[866]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:23:38.821784 initrd-setup-root-after-ignition[868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:23:38.822807 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:23:38.831172 kernel: audit: type=1130 audit(1734099818.824:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.825626 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:23:38.839977 kernel: audit: type=1130 audit(1734099818.831:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.840003 kernel: audit: type=1131 audit(1734099818.831:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.825733 systemd[1]: Finished ignition-quench.service. Dec 13 14:23:38.831709 systemd[1]: Reached target ignition-complete.target. 
Dec 13 14:23:38.841203 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:23:38.861083 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:23:38.861229 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:23:38.871844 kernel: audit: type=1130 audit(1734099818.863:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.871893 kernel: audit: type=1131 audit(1734099818.863:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.863467 systemd[1]: Reached target initrd-fs.target. Dec 13 14:23:38.871958 systemd[1]: Reached target initrd.target. Dec 13 14:23:38.872571 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:23:38.874012 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:23:38.888320 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:23:38.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.891415 systemd[1]: Starting initrd-cleanup.service... 
Dec 13 14:23:38.896004 kernel: audit: type=1130 audit(1734099818.890:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.902412 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:23:38.904353 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:23:38.906429 systemd[1]: Stopped target timers.target. Dec 13 14:23:38.908349 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:23:38.909495 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:23:38.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.911436 systemd[1]: Stopped target initrd.target. Dec 13 14:23:38.915909 kernel: audit: type=1131 audit(1734099818.911:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.916009 systemd[1]: Stopped target basic.target. Dec 13 14:23:38.917739 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:23:38.919786 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:23:38.921958 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:23:38.924071 systemd[1]: Stopped target remote-fs.target. Dec 13 14:23:38.925874 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:23:38.927800 systemd[1]: Stopped target sysinit.target. Dec 13 14:23:38.929596 systemd[1]: Stopped target local-fs.target. Dec 13 14:23:38.931423 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:23:38.933319 systemd[1]: Stopped target swap.target. Dec 13 14:23:38.935031 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:23:38.936208 systemd[1]: Stopped dracut-pre-mount.service. 
Dec 13 14:23:38.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.938181 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:23:38.943065 kernel: audit: type=1131 audit(1734099818.937:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.943111 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:23:38.944253 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:23:38.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.946221 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:23:38.950524 kernel: audit: type=1131 audit(1734099818.945:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.946337 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:23:38.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.952662 systemd[1]: Stopped target paths.target. Dec 13 14:23:38.954424 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:23:38.959193 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:23:38.961435 systemd[1]: Stopped target slices.target. Dec 13 14:23:38.963253 systemd[1]: Stopped target sockets.target. 
Dec 13 14:23:38.965072 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:23:38.966443 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:23:38.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.968905 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:23:38.969998 systemd[1]: Stopped ignition-files.service. Dec 13 14:23:38.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.973063 systemd[1]: Stopping ignition-mount.service... Dec 13 14:23:38.974908 systemd[1]: Stopping iscsid.service... Dec 13 14:23:38.975550 iscsid[722]: iscsid shutting down. Dec 13 14:23:38.977061 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:23:38.978247 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:23:38.980121 ignition[882]: INFO : Ignition 2.14.0 Dec 13 14:23:38.980121 ignition[882]: INFO : Stage: umount Dec 13 14:23:38.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.981934 ignition[882]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:23:38.981934 ignition[882]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:23:38.981934 ignition[882]: INFO : umount: umount passed Dec 13 14:23:38.981934 ignition[882]: INFO : Ignition finished successfully Dec 13 14:23:38.986478 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:23:38.988132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 14:23:38.988298 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:23:38.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.991526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:23:38.992797 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:23:38.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.996293 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:23:38.997219 systemd[1]: Stopped iscsid.service. Dec 13 14:23:38.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:38.999035 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:23:39.000056 systemd[1]: Stopped ignition-mount.service. Dec 13 14:23:39.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.001974 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:23:39.002859 systemd[1]: Closed iscsid.socket. Dec 13 14:23:39.005031 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:23:39.005065 systemd[1]: Stopped ignition-disks.service. Dec 13 14:23:39.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.007589 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 13 14:23:39.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.007625 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:23:39.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.009478 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:23:39.009513 systemd[1]: Stopped ignition-setup.service. Dec 13 14:23:39.011982 systemd[1]: Stopping iscsiuio.service... Dec 13 14:23:39.016081 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:23:39.016695 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:23:39.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.016785 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:23:39.022868 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:23:39.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.023008 systemd[1]: Stopped iscsiuio.service. Dec 13 14:23:39.023676 systemd[1]: Stopped target network.target. Dec 13 14:23:39.024037 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:23:39.024087 systemd[1]: Closed iscsiuio.socket. Dec 13 14:23:39.024561 systemd[1]: Stopping systemd-networkd.service... 
Dec 13 14:23:39.024712 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:23:39.033230 systemd-networkd[717]: eth0: DHCPv6 lease lost Dec 13 14:23:39.034784 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:23:39.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.034923 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:23:39.037080 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:23:39.037204 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:23:39.042000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:23:39.042000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:23:39.040481 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:23:39.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.040515 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:23:39.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.043280 systemd[1]: Stopping network-cleanup.service... Dec 13 14:23:39.044206 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Dec 13 14:23:39.044263 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:23:39.045588 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:23:39.045634 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:23:39.047381 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:23:39.047422 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:23:39.048778 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:23:39.059610 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:23:39.061447 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:23:39.062612 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:23:39.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.065328 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:23:39.065370 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:23:39.068047 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:23:39.068077 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:23:39.070114 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:23:39.070177 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:23:39.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.073598 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:23:39.073636 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:23:39.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:39.075537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:23:39.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.076369 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:23:39.080194 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:23:39.081328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:23:39.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.081382 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:23:39.086229 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:23:39.087295 systemd[1]: Stopped network-cleanup.service. Dec 13 14:23:39.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.089081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:23:39.089174 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:23:39.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.100422 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Dec 13 14:23:39.100592 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:23:39.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.103981 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:23:39.106117 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:23:39.106202 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:23:39.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:39.110296 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:23:39.126115 systemd[1]: Switching root. Dec 13 14:23:39.146356 systemd-journald[196]: Journal stopped Dec 13 14:23:44.138327 systemd-journald[196]: Received SIGTERM from PID 1 (n/a). Dec 13 14:23:44.138382 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:23:44.138420 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:23:44.138441 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:23:44.138454 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:23:44.138463 kernel: SELinux: policy capability open_perms=1 Dec 13 14:23:44.138473 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:23:44.138490 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:23:44.138511 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:23:44.138521 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:23:44.138530 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:23:44.138546 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:23:44.138557 systemd[1]: Successfully loaded SELinux policy in 68.712ms. 
Dec 13 14:23:44.138577 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.329ms. Dec 13 14:23:44.138589 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:23:44.138600 systemd[1]: Detected virtualization kvm. Dec 13 14:23:44.138610 systemd[1]: Detected architecture x86-64. Dec 13 14:23:44.138621 systemd[1]: Detected first boot. Dec 13 14:23:44.138634 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:23:44.138645 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:23:44.138661 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:23:44.138674 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:23:44.138685 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:23:44.138697 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
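The systemd 252 startup banner above lists compile-time features as `+NAME` (built in) and `-NAME` (omitted) tokens; the same string is printed by `systemctl --version`. A small sketch, under the assumption one wants to check a feature programmatically (`split_features` is a hypothetical helper):

```python
def split_features(banner):
    """Separate +FEATURE / -FEATURE tokens from a systemd version banner."""
    enabled, disabled = set(), set()
    for tok in banner.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
    return enabled, disabled

# Abbreviated from the banner logged above.
banner = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -TPM2 +ZSTD"
enabled, disabled = split_features(banner)
print("SELINUX" in enabled, "APPARMOR" in disabled)  # True True
```

Here that confirms what the rest of the log shows: SELinux support is compiled in (the policy loads a few lines later), while AppArmor and TPM2 support are not.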
Dec 13 14:23:44.138708 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 14:23:44.138718 kernel: audit: type=1334 audit(1734099823.970:86): prog-id=12 op=LOAD Dec 13 14:23:44.138727 kernel: audit: type=1334 audit(1734099823.970:87): prog-id=3 op=UNLOAD Dec 13 14:23:44.138744 kernel: audit: type=1334 audit(1734099823.972:88): prog-id=13 op=LOAD Dec 13 14:23:44.138759 kernel: audit: type=1334 audit(1734099823.973:89): prog-id=14 op=LOAD Dec 13 14:23:44.138768 kernel: audit: type=1334 audit(1734099823.973:90): prog-id=4 op=UNLOAD Dec 13 14:23:44.138777 kernel: audit: type=1334 audit(1734099823.973:91): prog-id=5 op=UNLOAD Dec 13 14:23:44.138787 kernel: audit: type=1334 audit(1734099823.975:92): prog-id=15 op=LOAD Dec 13 14:23:44.138796 kernel: audit: type=1334 audit(1734099823.975:93): prog-id=12 op=UNLOAD Dec 13 14:23:44.138806 kernel: audit: type=1334 audit(1734099823.977:94): prog-id=16 op=LOAD Dec 13 14:23:44.138815 kernel: audit: type=1334 audit(1734099823.979:95): prog-id=17 op=LOAD Dec 13 14:23:44.138832 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:23:44.138842 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:23:44.138853 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:23:44.138863 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:23:44.138873 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:23:44.138883 systemd[1]: Created slice system-getty.slice. Dec 13 14:23:44.138894 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:23:44.138904 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:23:44.138921 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:23:44.138932 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:23:44.138942 systemd[1]: Created slice user.slice. Dec 13 14:23:44.138952 systemd[1]: Started systemd-ask-password-console.path. 
Dec 13 14:23:44.138962 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:23:44.138973 systemd[1]: Set up automount boot.automount. Dec 13 14:23:44.138989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:23:44.138999 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:23:44.139009 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:23:44.139029 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:23:44.139039 systemd[1]: Reached target integritysetup.target. Dec 13 14:23:44.139053 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:23:44.139064 systemd[1]: Reached target remote-fs.target. Dec 13 14:23:44.139077 systemd[1]: Reached target slices.target. Dec 13 14:23:44.139088 systemd[1]: Reached target swap.target. Dec 13 14:23:44.139105 systemd[1]: Reached target torcx.target. Dec 13 14:23:44.139116 systemd[1]: Reached target veritysetup.target. Dec 13 14:23:44.139127 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:23:44.139137 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:23:44.139159 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:23:44.139170 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:23:44.139180 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:23:44.139191 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:23:44.139201 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:23:44.139219 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:23:44.139235 systemd[1]: Mounting media.mount... Dec 13 14:23:44.139246 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:44.139264 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:23:44.139274 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:23:44.139284 systemd[1]: Mounting tmp.mount... Dec 13 14:23:44.139295 systemd[1]: Starting flatcar-tmpfiles.service... 
Dec 13 14:23:44.139305 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:23:44.139316 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:23:44.139332 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:23:44.139342 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:23:44.139352 systemd[1]: Starting modprobe@drm.service... Dec 13 14:23:44.139363 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:23:44.139373 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:23:44.139383 systemd[1]: Starting modprobe@loop.service... Dec 13 14:23:44.139396 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:23:44.139411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:23:44.139425 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:23:44.139480 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:23:44.139495 kernel: loop: module loaded Dec 13 14:23:44.139505 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:23:44.139515 kernel: fuse: init (API version 7.34) Dec 13 14:23:44.139525 systemd[1]: Stopped systemd-journald.service. Dec 13 14:23:44.139535 systemd[1]: Starting systemd-journald.service... Dec 13 14:23:44.139553 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:23:44.139563 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:23:44.139573 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:23:44.139590 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:23:44.139601 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:23:44.139614 systemd-journald[1003]: Journal started Dec 13 14:23:44.139654 systemd-journald[1003]: Runtime Journal (/run/log/journal/cd84c7ebbcdd4365ae80f41938078375) is 6.0M, max 48.5M, 42.5M free. 
Dec 13 14:23:39.242000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:23:39.619000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:23:39.619000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:23:39.619000 audit: BPF prog-id=10 op=LOAD Dec 13 14:23:39.619000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:23:39.619000 audit: BPF prog-id=11 op=LOAD Dec 13 14:23:39.619000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:23:39.659000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:23:39.659000 audit[914]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001c58b2 a1=c000146de0 a2=c00014f0c0 a3=32 items=0 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:23:39.659000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:23:39.660000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:23:39.660000 audit[914]: SYSCALL 
arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001c5989 a2=1ed a3=0 items=2 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:23:39.660000 audit: CWD cwd="/" Dec 13 14:23:39.660000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:39.660000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:39.660000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:23:43.970000 audit: BPF prog-id=12 op=LOAD Dec 13 14:23:43.970000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:23:43.972000 audit: BPF prog-id=13 op=LOAD Dec 13 14:23:43.973000 audit: BPF prog-id=14 op=LOAD Dec 13 14:23:43.973000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:23:43.973000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:23:43.975000 audit: BPF prog-id=15 op=LOAD Dec 13 14:23:43.975000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:23:43.977000 audit: BPF prog-id=16 op=LOAD Dec 13 14:23:43.979000 audit: BPF prog-id=17 op=LOAD Dec 13 14:23:43.979000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:23:43.979000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:23:43.982000 audit: BPF prog-id=18 op=LOAD Dec 13 14:23:43.982000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:23:43.982000 audit: BPF prog-id=19 op=LOAD Dec 13 14:23:43.982000 audit: BPF prog-id=20 op=LOAD Dec 13 14:23:43.982000 
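The `PROCTITLE` records above encode the audited process's command line as a hex string with NUL bytes separating the argv elements; decoded, they show torcx-generator being invoked with its generator output directories (the trailing argument is truncated in the record itself). A sketch of the decoding (`decode_proctitle` is a hypothetical helper; `ausearch -i` performs the same translation):

```python
def decode_proctitle(hex_str):
    """Decode an audit PROCTITLE hex payload into argv (NUL-separated)."""
    return [part.decode("utf-8", "replace")
            for part in bytes.fromhex(hex_str).split(b"\x00")]

# Leading portion of the PROCTITLE payload logged above.
argv = decode_proctitle(
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72"
)
print(argv[0])  # /usr/lib/systemd/system-generators/torcx-generator
```

The elided remainder of the payload is left as-is in the log; only the first argv element is decoded here.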
audit: BPF prog-id=16 op=UNLOAD Dec 13 14:23:43.982000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:23:43.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:43.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:43.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:43.994000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:23:44.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:44.121000 audit: BPF prog-id=21 op=LOAD Dec 13 14:23:44.122000 audit: BPF prog-id=22 op=LOAD Dec 13 14:23:44.122000 audit: BPF prog-id=23 op=LOAD Dec 13 14:23:44.122000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:23:44.122000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:23:44.137000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:23:44.137000 audit[1003]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffed51753c0 a2=4000 a3=7ffed517545c items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:23:44.137000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:23:39.656723 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:23:43.966871 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:23:39.657355 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:23:44.141245 systemd[1]: Stopped verity-setup.service. Dec 13 14:23:43.966885 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:23:39.657373 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:23:43.983015 systemd[1]: systemd-journald.service: Deactivated successfully. 
Dec 13 14:23:39.657407 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:23:39.657419 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:23:39.657455 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:23:39.657467 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:23:44.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:39.657701 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:23:39.657739 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:23:39.657751 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:23:39.658365 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:23:39.658412 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:23:39.658437 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:23:39.658454 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:23:39.658478 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:23:39.658493 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:39Z" level=info msg="store skipped" err="open 
/var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:23:43.623275 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:23:43.623856 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:23:43.624051 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:23:43.624477 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:23:43.624571 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:23:43.624666 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T14:23:43Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 
14:23:44.144211 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:44.147621 systemd[1]: Started systemd-journald.service. Dec 13 14:23:44.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.148254 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:23:44.149137 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:23:44.150049 systemd[1]: Mounted media.mount. Dec 13 14:23:44.150871 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:23:44.151873 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:23:44.152935 systemd[1]: Mounted tmp.mount. Dec 13 14:23:44.154035 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:23:44.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.155445 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:23:44.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.156622 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:23:44.156793 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:23:44.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:44.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.158043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:23:44.158225 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:23:44.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.159447 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:23:44.159604 systemd[1]: Finished modprobe@drm.service. Dec 13 14:23:44.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.160803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:23:44.160930 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:23:44.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:44.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.162164 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:23:44.162296 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:23:44.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.163476 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:23:44.163596 systemd[1]: Finished modprobe@loop.service. Dec 13 14:23:44.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.164828 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:23:44.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.165951 systemd[1]: Finished systemd-network-generator.service. 
Dec 13 14:23:44.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.167288 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:23:44.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.168680 systemd[1]: Reached target network-pre.target. Dec 13 14:23:44.170713 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:23:44.172707 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:23:44.173546 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:23:44.175336 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:23:44.177304 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:23:44.178422 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:23:44.179568 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:23:44.181983 systemd-journald[1003]: Time spent on flushing to /var/log/journal/cd84c7ebbcdd4365ae80f41938078375 is 19.957ms for 1096 entries. Dec 13 14:23:44.181983 systemd-journald[1003]: System Journal (/var/log/journal/cd84c7ebbcdd4365ae80f41938078375) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:23:44.226806 systemd-journald[1003]: Received client request to flush runtime journal. Dec 13 14:23:44.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:44.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.180861 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:23:44.183696 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:23:44.186014 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:23:44.189962 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:23:44.228324 udevadm[1018]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:23:44.191789 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:23:44.193740 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:23:44.195119 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:23:44.204538 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:23:44.209101 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:23:44.212471 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:23:44.223421 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:23:44.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:44.227661 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:23:44.897440 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:23:44.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.899000 audit: BPF prog-id=24 op=LOAD Dec 13 14:23:44.899000 audit: BPF prog-id=25 op=LOAD Dec 13 14:23:44.899000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:23:44.899000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:23:44.900510 systemd[1]: Starting systemd-udevd.service... Dec 13 14:23:44.922520 systemd-udevd[1020]: Using default interface naming scheme 'v252'. Dec 13 14:23:44.938627 systemd[1]: Started systemd-udevd.service. Dec 13 14:23:44.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:44.941000 audit: BPF prog-id=26 op=LOAD Dec 13 14:23:44.942713 systemd[1]: Starting systemd-networkd.service... Dec 13 14:23:44.949000 audit: BPF prog-id=27 op=LOAD Dec 13 14:23:44.949000 audit: BPF prog-id=28 op=LOAD Dec 13 14:23:44.949000 audit: BPF prog-id=29 op=LOAD Dec 13 14:23:44.950335 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:23:44.971339 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:23:44.986874 systemd[1]: Started systemd-userdbd.service. Dec 13 14:23:44.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:45.032189 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:23:45.033811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:23:45.038800 systemd-networkd[1028]: lo: Link UP Dec 13 14:23:45.038811 systemd-networkd[1028]: lo: Gained carrier Dec 13 14:23:45.039790 systemd-networkd[1028]: Enumeration completed Dec 13 14:23:45.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.039891 systemd[1]: Started systemd-networkd.service. Dec 13 14:23:45.039918 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:23:45.042607 systemd-networkd[1028]: eth0: Link UP Dec 13 14:23:45.042713 systemd-networkd[1028]: eth0: Gained carrier Dec 13 14:23:45.047192 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:23:45.054310 systemd-networkd[1028]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:23:45.043000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:23:45.043000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560a6648da70 a1=337fc a2=7f14dc0e6bc5 a3=5 items=110 ppid=1020 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:23:45.043000 audit: CWD cwd="/" Dec 13 14:23:45.043000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:23:45.043000 audit: PATH item=1 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=2 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=3 name=(null) inode=13089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=4 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=5 name=(null) inode=13090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=6 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=7 name=(null) inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=8 name=(null) inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=9 name=(null) inode=13092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=10 name=(null) 
inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=11 name=(null) inode=13093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=12 name=(null) inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=13 name=(null) inode=13094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=14 name=(null) inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=15 name=(null) inode=13095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=16 name=(null) inode=13091 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=17 name=(null) inode=13096 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=18 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=19 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=20 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=21 name=(null) inode=13098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=22 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=23 name=(null) inode=13099 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=24 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=25 name=(null) inode=13100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=26 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=27 name=(null) inode=13101 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=28 name=(null) inode=13097 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=29 name=(null) inode=13102 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=30 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=31 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=32 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=33 name=(null) inode=13104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=34 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=35 name=(null) inode=13105 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=36 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=37 name=(null) inode=13106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=38 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=39 name=(null) inode=13107 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=40 name=(null) inode=13103 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=41 name=(null) inode=13108 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=42 name=(null) inode=13088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=43 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=44 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=45 name=(null) inode=13110 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=46 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=47 name=(null) inode=13111 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=48 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=49 name=(null) inode=13112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=50 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=51 name=(null) inode=13113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=52 name=(null) inode=13109 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=53 name=(null) inode=13114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=55 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:23:45.043000 audit: PATH item=56 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=57 name=(null) inode=13116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=58 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=59 name=(null) inode=13117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=60 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=61 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=62 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=63 name=(null) inode=13119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=64 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=65 
name=(null) inode=13120 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=66 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=67 name=(null) inode=13121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=68 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=69 name=(null) inode=13122 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=70 name=(null) inode=13118 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=71 name=(null) inode=13123 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=72 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=73 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=74 name=(null) inode=13124 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=75 name=(null) inode=13125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=76 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=77 name=(null) inode=13126 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=78 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=79 name=(null) inode=13127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=80 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=81 name=(null) inode=13128 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=82 name=(null) inode=13124 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=83 name=(null) inode=13129 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=84 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=85 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=86 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=87 name=(null) inode=13131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=88 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=89 name=(null) inode=13132 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=90 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=91 name=(null) inode=13133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=92 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=93 name=(null) inode=13134 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=94 name=(null) inode=13130 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=95 name=(null) inode=13135 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=96 name=(null) inode=13115 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=97 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=98 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=99 name=(null) inode=13137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=100 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=101 name=(null) inode=13138 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=102 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=103 name=(null) inode=13139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=104 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=105 name=(null) inode=13140 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=106 name=(null) inode=13136 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=107 name=(null) inode=13141 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PATH item=109 name=(null) inode=13142 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:23:45.043000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:23:45.087184 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:23:45.092171 kernel: 
i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:23:45.105802 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:23:45.105833 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:23:45.106031 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:23:45.152202 kernel: kvm: Nested Virtualization enabled Dec 13 14:23:45.152346 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:23:45.152370 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:23:45.152390 kernel: SVM: Virtual GIF supported Dec 13 14:23:45.169214 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:23:45.201779 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:23:45.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.204446 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:23:45.214813 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:23:45.242565 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:23:45.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.243828 systemd[1]: Reached target cryptsetup.target. Dec 13 14:23:45.246202 systemd[1]: Starting lvm2-activation.service... Dec 13 14:23:45.250909 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:23:45.276373 systemd[1]: Finished lvm2-activation.service. Dec 13 14:23:45.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:45.277534 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:23:45.278458 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:23:45.278482 systemd[1]: Reached target local-fs.target. Dec 13 14:23:45.279445 systemd[1]: Reached target machines.target. Dec 13 14:23:45.281819 systemd[1]: Starting ldconfig.service... Dec 13 14:23:45.283079 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:23:45.283131 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:23:45.284442 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:23:45.288503 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:23:45.290595 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:23:45.292815 systemd[1]: Starting systemd-sysext.service... Dec 13 14:23:45.294251 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl) Dec 13 14:23:45.295368 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:23:45.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.303926 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:23:45.306656 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:23:45.311944 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:23:45.312254 systemd[1]: Unmounted usr-share-oem.mount. 
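The audit PATH records in the run above are flat `key=value` fields (item, inode, mode, nametype, the cap_* capability fields, and so on). A minimal sketch of pulling them into a dict — the field names come from the records themselves, the helper name is hypothetical:

```python
import re

def parse_audit_fields(record: str) -> dict:
    """Split an audit record body into a dict of key=value fields.

    Quoted values (e.g. proctitle="(udev-worker)") have their quotes stripped;
    everything else is kept as the raw token.
    """
    fields = {}
    for key, value in re.findall(r'(\w+)=("[^"]*"|\S+)', record):
        fields[key] = value.strip('"')
    return fields

# One PATH record from the log above, minus the timestamp prefix.
rec = ('audit: PATH item=38 name=(null) inode=13103 dev=00:0b mode=040750 '
       'ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 '
       'nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0')
fields = parse_audit_fields(rec)
print(fields['inode'], fields['nametype'])  # 13103 PARENT
```

Note the values stay strings here; numeric fields like inode would need an explicit int() where a consumer cares.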
Dec 13 14:23:45.323173 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 14:23:45.342787 systemd-fsck[1066]: fsck.fat 4.2 (2021-01-31) Dec 13 14:23:45.342787 systemd-fsck[1066]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:23:45.344647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:23:45.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.392842 systemd[1]: Mounting boot.mount... Dec 13 14:23:45.780445 systemd[1]: Mounted boot.mount. Dec 13 14:23:45.800348 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:23:45.804492 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:23:45.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.823211 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 14:23:45.828953 (sd-sysext)[1071]: Using extensions 'kubernetes'. Dec 13 14:23:45.829478 (sd-sysext)[1071]: Merged extensions into '/usr'. Dec 13 14:23:45.848432 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:45.849902 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:23:45.850949 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:23:45.852051 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:23:45.853804 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:23:45.855997 systemd[1]: Starting modprobe@loop.service... Dec 13 14:23:45.856927 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
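The mode= values in the PATH records above are octal st_mode words (file type bits plus permissions), so 040750 is a directory with rwxr-x--- permissions and 0100640 a regular file with rw-r-----. A quick sketch of decoding one with the standard stat module:

```python
import stat

# mode=040750 as logged in the tracefs PATH records: 04xxxx marks a directory.
mode = 0o040750
print(stat.S_ISDIR(mode))   # True  (file-type bits say "directory")
print(stat.filemode(mode))  # drwxr-x---
```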
Dec 13 14:23:45.857070 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:23:45.857206 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:45.859780 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:23:45.861043 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:23:45.861196 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:23:45.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.862545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:23:45.862689 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:23:45.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.864018 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:23:45.864117 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:23:45.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.865461 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:23:45.865598 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:23:45.866546 systemd[1]: Finished systemd-sysext.service. Dec 13 14:23:45.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:45.868655 systemd[1]: Starting ensure-sysext.service... Dec 13 14:23:45.870614 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:23:45.877456 systemd[1]: Reloading. Dec 13 14:23:45.944930 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T14:23:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:23:45.944957 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T14:23:45Z" level=info msg="torcx already run" Dec 13 14:23:45.963268 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Dec 13 14:23:45.965972 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:23:45.970329 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:23:46.132745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:23:46.132771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:23:46.135682 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:23:46.157802 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:23:46.237000 audit: BPF prog-id=30 op=LOAD Dec 13 14:23:46.237000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:23:46.237000 audit: BPF prog-id=31 op=LOAD Dec 13 14:23:46.237000 audit: BPF prog-id=32 op=LOAD Dec 13 14:23:46.237000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:23:46.237000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:23:46.238000 audit: BPF prog-id=33 op=LOAD Dec 13 14:23:46.239000 audit: BPF prog-id=34 op=LOAD Dec 13 14:23:46.239000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:23:46.239000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:23:46.241000 audit: BPF prog-id=35 op=LOAD Dec 13 14:23:46.241000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:23:46.243000 audit: BPF prog-id=36 op=LOAD Dec 13 14:23:46.243000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:23:46.243000 audit: BPF prog-id=37 op=LOAD Dec 13 14:23:46.243000 audit: BPF prog-id=38 op=LOAD Dec 13 14:23:46.243000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:23:46.243000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:23:46.256648 systemd[1]: 
proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.256906 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.258799 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:23:46.307036 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:23:46.309444 systemd[1]: Starting modprobe@loop.service... Dec 13 14:23:46.310364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.310604 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:23:46.310753 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.311960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:23:46.312128 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:23:46.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.313469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:23:46.313576 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:23:46.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:46.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.314911 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:23:46.315050 systemd[1]: Finished modprobe@loop.service. Dec 13 14:23:46.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.318621 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.318887 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.320704 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:23:46.399887 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:23:46.402006 systemd[1]: Starting modprobe@loop.service... Dec 13 14:23:46.402932 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.403040 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:23:46.403203 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.404124 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Dec 13 14:23:46.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.405465 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:23:46.405574 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:23:46.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.487424 systemd-networkd[1028]: eth0: Gained IPv6LL Dec 13 14:23:46.487903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:23:46.488043 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:23:46.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.489723 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:23:46.489863 systemd[1]: Finished modprobe@loop.service. Dec 13 14:23:46.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:23:46.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:23:46.493794 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.495520 systemd[1]: Starting audit-rules.service... Dec 13 14:23:46.498593 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:23:46.499882 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.501864 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:23:46.504620 systemd[1]: Starting modprobe@drm.service... Dec 13 14:23:46.506757 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:23:46.509001 systemd[1]: Starting modprobe@loop.service... Dec 13 14:23:46.509973 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:23:46.510112 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:23:46.511656 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:23:46.514293 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:23:46.516000 audit: BPF prog-id=39 op=LOAD Dec 13 14:23:46.517525 systemd[1]: Starting systemd-resolved.service... Dec 13 14:23:46.642000 audit: BPF prog-id=40 op=LOAD Dec 13 14:23:46.644238 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:23:46.646580 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:23:46.647468 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:23:46.650188 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 14:23:46.651436 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:23:46.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.652832 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:23:46.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.654101 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:23:46.654229 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:23:46.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.655400 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:23:46.655520 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:23:46.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.657119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:23:46.657251 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:23:46.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.782955 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:23:46.783101 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:23:46.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.784467 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:23:46.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.786310 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:23:46.786394 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:23:46.786416 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:23:46.788060 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:23:46.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.797000 audit[1164]: SYSTEM_BOOT pid=1164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.800412 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:23:46.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.807927 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:23:46.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:23:46.816000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:23:46.817200 augenrules[1172]: No rules
Dec 13 14:23:46.817268 systemd[1]: Finished ldconfig.service.
Dec 13 14:23:46.816000 audit[1172]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc7cc3420 a2=420 a3=0 items=0 ppid=1146 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:23:46.816000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:23:46.940839 systemd[1]: Finished audit-rules.service.
Dec 13 14:23:46.944096 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:23:46.951496 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:23:46.958292 systemd-resolved[1160]: Positive Trust Anchors:
Dec 13 14:23:46.958307 systemd-resolved[1160]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:23:46.958334 systemd-resolved[1160]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:23:46.959491 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:23:47.388628 systemd-timesyncd[1163]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:23:47.388683 systemd-timesyncd[1163]: Initial clock synchronization to Fri 2024-12-13 14:23:47.388523 UTC.
Dec 13 14:23:47.389649 systemd[1]: Reached target time-set.target.
Dec 13 14:23:47.397746 systemd-resolved[1160]: Defaulting to hostname 'linux'.
Dec 13 14:23:47.399884 systemd[1]: Started systemd-resolved.service.
Dec 13 14:23:47.401038 systemd[1]: Reached target network.target.
Dec 13 14:23:47.402005 systemd[1]: Reached target network-online.target.
Dec 13 14:23:47.403062 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:23:47.404109 systemd[1]: Reached target sysinit.target.
Dec 13 14:23:47.405111 systemd[1]: Started motdgen.path.
Dec 13 14:23:47.405950 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:23:47.407295 systemd[1]: Started logrotate.timer.
Dec 13 14:23:47.408166 systemd[1]: Started mdadm.timer.
Dec 13 14:23:47.408987 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:23:47.410153 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:23:47.410191 systemd[1]: Reached target paths.target.
Dec 13 14:23:47.411201 systemd[1]: Reached target timers.target.
Dec 13 14:23:47.470881 systemd[1]: Listening on dbus.socket.
Dec 13 14:23:47.473125 systemd[1]: Starting docker.socket...
Dec 13 14:23:47.476438 systemd[1]: Listening on sshd.socket.
Dec 13 14:23:47.477511 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:47.477951 systemd[1]: Listening on docker.socket.
Dec 13 14:23:47.478893 systemd[1]: Reached target sockets.target.
Dec 13 14:23:47.479763 systemd[1]: Reached target basic.target.
Dec 13 14:23:47.566256 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:23:47.566291 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:23:47.567684 systemd[1]: Starting containerd.service...
Dec 13 14:23:47.569577 systemd[1]: Starting dbus.service...
Dec 13 14:23:47.571407 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:23:47.573374 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:23:47.574490 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:23:47.576068 systemd[1]: Starting kubelet.service...
Dec 13 14:23:47.576907 jq[1183]: false
Dec 13 14:23:47.578375 systemd[1]: Starting motdgen.service...
Dec 13 14:23:47.580505 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:23:47.582697 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:23:47.594547 dbus-daemon[1182]: [system] SELinux support is enabled
Dec 13 14:23:47.587751 systemd[1]: Starting systemd-logind.service...
Dec 13 14:23:47.588934 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:23:47.589049 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:23:47.589656 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:23:47.590869 systemd[1]: Starting update-engine.service...
Dec 13 14:23:47.593513 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:23:47.596778 systemd[1]: Started dbus.service.
Dec 13 14:23:47.601037 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:23:47.601249 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:23:47.604331 jq[1200]: true
Dec 13 14:23:47.603462 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:23:47.603647 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:23:47.605193 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:23:47.605343 systemd[1]: Finished motdgen.service.
Dec 13 14:23:47.611413 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:23:47.611443 systemd[1]: Reached target system-config.target.
Dec 13 14:23:47.612729 extend-filesystems[1184]: Found loop1
Dec 13 14:23:47.704196 jq[1204]: true
Dec 13 14:23:47.631354 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found sr0
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda1
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda2
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda3
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found usr
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda4
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda6
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda7
Dec 13 14:23:47.739964 extend-filesystems[1184]: Found vda9
Dec 13 14:23:47.739964 extend-filesystems[1184]: Checking size of /dev/vda9
Dec 13 14:23:47.631381 systemd[1]: Reached target user-config.target.
Dec 13 14:23:47.781461 env[1205]: time="2024-12-13T14:23:47.780788283Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:23:47.781719 update_engine[1197]: I1213 14:23:47.777844 1197 main.cc:92] Flatcar Update Engine starting
Dec 13 14:23:47.781719 update_engine[1197]: I1213 14:23:47.780728 1197 update_check_scheduler.cc:74] Next update check in 6m40s
Dec 13 14:23:47.783986 systemd[1]: Started update-engine.service.
Dec 13 14:23:47.793998 extend-filesystems[1184]: Resized partition /dev/vda9
Dec 13 14:23:47.798770 systemd[1]: Started locksmithd.service.
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.828502324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.828935286Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.831818403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.831907220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.832720385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.832782150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.832850148Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.832874263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.833202459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.844653 env[1205]: time="2024-12-13T14:23:47.834160475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:23:47.845365 extend-filesystems[1233]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:23:47.848474 env[1205]: time="2024-12-13T14:23:47.834469615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:23:47.848474 env[1205]: time="2024-12-13T14:23:47.834495183Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:23:47.848474 env[1205]: time="2024-12-13T14:23:47.834634434Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:23:47.848474 env[1205]: time="2024-12-13T14:23:47.834662256Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:23:47.883543 systemd-logind[1195]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:23:47.883576 systemd-logind[1195]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:23:47.884150 systemd-logind[1195]: New seat seat0.
Dec 13 14:23:47.888056 systemd[1]: Started systemd-logind.service.
Dec 13 14:23:47.973654 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:23:48.241642 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:23:48.392468 locksmithd[1234]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:23:48.993467 extend-filesystems[1233]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:23:48.993467 extend-filesystems[1233]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:23:48.993467 extend-filesystems[1233]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:23:49.019464 extend-filesystems[1184]: Resized filesystem in /dev/vda9
Dec 13 14:23:48.994335 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:23:48.994503 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:23:49.121806 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:23:49.143387 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:23:49.145913 systemd[1]: Starting issuegen.service...
Dec 13 14:23:49.150734 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:23:49.150876 systemd[1]: Finished issuegen.service.
Dec 13 14:23:49.153015 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:23:49.236737 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:23:49.256279 systemd[1]: Started getty@tty1.service.
Dec 13 14:23:49.258933 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:23:49.260214 systemd[1]: Reached target getty.target.
Dec 13 14:23:49.388342 env[1205]: time="2024-12-13T14:23:49.388237474Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:23:49.388342 env[1205]: time="2024-12-13T14:23:49.388351248Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388402243Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388530173Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388579366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388675536Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388726241Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388755816Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388777748Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388815178Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388885850Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.389032 env[1205]: time="2024-12-13T14:23:49.388950672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:23:49.396756 env[1205]: time="2024-12-13T14:23:49.396591181Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:23:49.396820 env[1205]: time="2024-12-13T14:23:49.396758294Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:23:49.397373 env[1205]: time="2024-12-13T14:23:49.397315118Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:23:49.397429 env[1205]: time="2024-12-13T14:23:49.397395188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397429 env[1205]: time="2024-12-13T14:23:49.397421808Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:23:49.397577 env[1205]: time="2024-12-13T14:23:49.397536574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397577 env[1205]: time="2024-12-13T14:23:49.397570497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397699 env[1205]: time="2024-12-13T14:23:49.397608208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397699 env[1205]: time="2024-12-13T14:23:49.397655066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397699 env[1205]: time="2024-12-13T14:23:49.397676947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397812 env[1205]: time="2024-12-13T14:23:49.397707193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397812 env[1205]: time="2024-12-13T14:23:49.397726991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397812 env[1205]: time="2024-12-13T14:23:49.397742730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.397812 env[1205]: time="2024-12-13T14:23:49.397764180Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:23:49.398088 env[1205]: time="2024-12-13T14:23:49.398065415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.398130 env[1205]: time="2024-12-13T14:23:49.398092185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.398130 env[1205]: time="2024-12-13T14:23:49.398117062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.398185 env[1205]: time="2024-12-13T14:23:49.398136288Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:23:49.398185 env[1205]: time="2024-12-13T14:23:49.398158199Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:23:49.398185 env[1205]: time="2024-12-13T14:23:49.398172135Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:23:49.398278 env[1205]: time="2024-12-13T14:23:49.398197343Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:23:49.398278 env[1205]: time="2024-12-13T14:23:49.398257255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:23:49.398841 env[1205]: time="2024-12-13T14:23:49.398727918Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.398847462Z" level=info msg="Connect containerd service"
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.398919717Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.399822020Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.400220557Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.400283255Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.400367342Z" level=info msg="containerd successfully booted in 1.620375s"
Dec 13 14:23:49.433987 env[1205]: time="2024-12-13T14:23:49.433818746Z" level=info msg="Start subscribing containerd event"
Dec 13 14:23:49.400515 systemd[1]: Started containerd.service.
Dec 13 14:23:49.434380 env[1205]: time="2024-12-13T14:23:49.434026465Z" level=info msg="Start recovering state"
Dec 13 14:23:49.434380 env[1205]: time="2024-12-13T14:23:49.434356123Z" level=info msg="Start event monitor"
Dec 13 14:23:49.434443 env[1205]: time="2024-12-13T14:23:49.434418130Z" level=info msg="Start snapshots syncer"
Dec 13 14:23:49.434473 env[1205]: time="2024-12-13T14:23:49.434442575Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:23:49.434473 env[1205]: time="2024-12-13T14:23:49.434453145Z" level=info msg="Start streaming server"
Dec 13 14:23:49.484702 bash[1229]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:23:49.485934 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:23:50.700222 systemd[1]: Started kubelet.service.
Dec 13 14:23:50.706171 systemd[1]: Reached target multi-user.target.
Dec 13 14:23:50.709514 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:23:50.719599 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:23:50.719910 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:23:50.726724 systemd[1]: Startup finished in 1.059s (kernel) + 5.077s (initrd) + 11.127s (userspace) = 17.265s.
Dec 13 14:23:51.576415 kubelet[1259]: E1213 14:23:51.576343 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:23:51.578282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:23:51.578406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:23:51.578676 systemd[1]: kubelet.service: Consumed 2.269s CPU time.
Dec 13 14:23:56.288784 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:23:56.290043 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:40060.service.
Dec 13 14:23:56.325426 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 40060 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:23:56.327414 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.336124 systemd[1]: Created slice user-500.slice.
Dec 13 14:23:56.337291 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:23:56.338796 systemd-logind[1195]: New session 1 of user core.
Dec 13 14:23:56.345940 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:23:56.347592 systemd[1]: Starting user@500.service...
Dec 13 14:23:56.350496 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.423497 systemd[1271]: Queued start job for default target default.target.
Dec 13 14:23:56.423994 systemd[1271]: Reached target paths.target.
Dec 13 14:23:56.424013 systemd[1271]: Reached target sockets.target.
Dec 13 14:23:56.424024 systemd[1271]: Reached target timers.target.
Dec 13 14:23:56.424034 systemd[1271]: Reached target basic.target.
Dec 13 14:23:56.424071 systemd[1271]: Reached target default.target.
Dec 13 14:23:56.424092 systemd[1271]: Startup finished in 68ms.
Dec 13 14:23:56.424227 systemd[1]: Started user@500.service.
Dec 13 14:23:56.425332 systemd[1]: Started session-1.scope.
Dec 13 14:23:56.477330 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:40066.service.
Dec 13 14:23:56.513813 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 40066 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:23:56.515834 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.521094 systemd-logind[1195]: New session 2 of user core.
Dec 13 14:23:56.522079 systemd[1]: Started session-2.scope.
Dec 13 14:23:56.580461 sshd[1280]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:56.583537 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:40066.service: Deactivated successfully.
Dec 13 14:23:56.584170 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:23:56.584815 systemd-logind[1195]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:23:56.585896 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:40076.service.
Dec 13 14:23:56.586943 systemd-logind[1195]: Removed session 2.
Dec 13 14:23:56.622885 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 40076 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:23:56.624493 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.628837 systemd-logind[1195]: New session 3 of user core.
Dec 13 14:23:56.629856 systemd[1]: Started session-3.scope.
Dec 13 14:23:56.682908 sshd[1286]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:56.686405 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:40076.service: Deactivated successfully.
Dec 13 14:23:56.687091 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:23:56.689199 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:40078.service.
Dec 13 14:23:56.689541 systemd-logind[1195]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:23:56.692225 systemd-logind[1195]: Removed session 3.
Dec 13 14:23:56.725304 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 40078 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:23:56.727446 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.732647 systemd-logind[1195]: New session 4 of user core.
Dec 13 14:23:56.733751 systemd[1]: Started session-4.scope.
Dec 13 14:23:56.793733 sshd[1292]: pam_unix(sshd:session): session closed for user core
Dec 13 14:23:56.797297 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:40078.service: Deactivated successfully.
Dec 13 14:23:56.798018 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:23:56.798713 systemd-logind[1195]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:23:56.799637 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:40090.service.
Dec 13 14:23:56.800548 systemd-logind[1195]: Removed session 4.
Dec 13 14:23:56.834378 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:23:56.836002 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:23:56.840203 systemd-logind[1195]: New session 5 of user core.
Dec 13 14:23:56.841269 systemd[1]: Started session-5.scope.
Dec 13 14:23:56.899549 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:23:56.899828 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:23:56.915953 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:23:56.923301 systemd[1]: coreos-metadata.service: Deactivated successfully.
Dec 13 14:23:56.923451 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:23:58.156255 systemd[1]: Stopped kubelet.service.
Dec 13 14:23:58.156458 systemd[1]: kubelet.service: Consumed 2.269s CPU time.
Dec 13 14:23:58.158559 systemd[1]: Starting kubelet.service... Dec 13 14:23:58.186173 systemd[1]: Reloading. Dec 13 14:23:58.256735 /usr/lib/systemd/system-generators/torcx-generator[1359]: time="2024-12-13T14:23:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:23:58.259215 /usr/lib/systemd/system-generators/torcx-generator[1359]: time="2024-12-13T14:23:58Z" level=info msg="torcx already run" Dec 13 14:23:58.400540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:23:58.400561 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:23:58.420436 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:23:58.496209 systemd[1]: Started kubelet.service. Dec 13 14:23:58.499056 systemd[1]: Stopping kubelet.service... Dec 13 14:23:58.501926 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:23:58.502086 systemd[1]: Stopped kubelet.service. Dec 13 14:23:58.503547 systemd[1]: Starting kubelet.service... Dec 13 14:23:58.584516 systemd[1]: Started kubelet.service. Dec 13 14:23:58.627725 kubelet[1408]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:23:58.627725 kubelet[1408]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:23:58.627725 kubelet[1408]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:23:58.628157 kubelet[1408]: I1213 14:23:58.627796 1408 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:23:59.252870 kubelet[1408]: I1213 14:23:59.252812 1408 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:23:59.252870 kubelet[1408]: I1213 14:23:59.252850 1408 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:23:59.253174 kubelet[1408]: I1213 14:23:59.253146 1408 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:23:59.290346 kubelet[1408]: I1213 14:23:59.290235 1408 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:23:59.299883 kubelet[1408]: E1213 14:23:59.299828 1408 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:23:59.299883 kubelet[1408]: I1213 14:23:59.299885 1408 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:23:59.306187 kubelet[1408]: I1213 14:23:59.306131 1408 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:23:59.307420 kubelet[1408]: I1213 14:23:59.307356 1408 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:23:59.307693 kubelet[1408]: I1213 14:23:59.307630 1408 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:23:59.308061 kubelet[1408]: I1213 14:23:59.307692 1408 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Dec 13 14:23:59.308190 kubelet[1408]: I1213 14:23:59.308078 1408 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:23:59.308190 kubelet[1408]: I1213 14:23:59.308092 1408 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:23:59.308297 kubelet[1408]: I1213 14:23:59.308242 1408 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:23:59.315332 kubelet[1408]: I1213 14:23:59.315271 1408 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:23:59.315332 kubelet[1408]: I1213 14:23:59.315318 1408 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:23:59.315521 kubelet[1408]: I1213 14:23:59.315381 1408 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:23:59.315521 kubelet[1408]: I1213 14:23:59.315412 1408 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:23:59.315588 kubelet[1408]: E1213 14:23:59.315523 1408 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:23:59.315629 kubelet[1408]: E1213 14:23:59.315581 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:23:59.319858 kubelet[1408]: W1213 14:23:59.319778 1408 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 14:23:59.319930 kubelet[1408]: E1213 14:23:59.319866 1408 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.89\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 14:23:59.324794 kubelet[1408]: I1213 14:23:59.324717 1408 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:23:59.329022 kubelet[1408]: I1213 14:23:59.328959 1408 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:23:59.329251 kubelet[1408]: W1213 14:23:59.329067 1408 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:23:59.329888 kubelet[1408]: I1213 14:23:59.329857 1408 server.go:1269] "Started kubelet" Dec 13 14:23:59.333336 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:23:59.333520 kubelet[1408]: I1213 14:23:59.333487 1408 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:23:59.333676 kubelet[1408]: I1213 14:23:59.332927 1408 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:23:59.334099 kubelet[1408]: I1213 14:23:59.334049 1408 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:23:59.334212 kubelet[1408]: I1213 14:23:59.334144 1408 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:23:59.334605 kubelet[1408]: I1213 14:23:59.334564 1408 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:23:59.335516 kubelet[1408]: I1213 14:23:59.335488 1408 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:23:59.336563 kubelet[1408]: I1213 14:23:59.336532 1408 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:23:59.336793 kubelet[1408]: I1213 14:23:59.336766 1408 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:23:59.336904 kubelet[1408]: I1213 14:23:59.336880 1408 reconciler.go:26] "Reconciler: 
start to sync state" Dec 13 14:23:59.337300 kubelet[1408]: I1213 14:23:59.337271 1408 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:23:59.337374 kubelet[1408]: E1213 14:23:59.337351 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:23:59.337419 kubelet[1408]: I1213 14:23:59.337395 1408 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:23:59.338237 kubelet[1408]: E1213 14:23:59.338217 1408 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:23:59.339675 kubelet[1408]: I1213 14:23:59.339635 1408 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:23:59.350806 kubelet[1408]: E1213 14:23:59.350755 1408 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.89\" not found" node="10.0.0.89" Dec 13 14:23:59.351342 kubelet[1408]: I1213 14:23:59.351314 1408 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:23:59.351342 kubelet[1408]: I1213 14:23:59.351335 1408 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:23:59.351463 kubelet[1408]: I1213 14:23:59.351359 1408 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:23:59.355883 kubelet[1408]: I1213 14:23:59.355849 1408 policy_none.go:49] "None policy: Start" Dec 13 14:23:59.357078 kubelet[1408]: I1213 14:23:59.357056 1408 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:23:59.357142 kubelet[1408]: I1213 14:23:59.357093 1408 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:23:59.365929 systemd[1]: Created slice kubepods.slice. 
Dec 13 14:23:59.373534 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:23:59.377429 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:23:59.385680 kubelet[1408]: I1213 14:23:59.385638 1408 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:23:59.386081 kubelet[1408]: I1213 14:23:59.386054 1408 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:23:59.386254 kubelet[1408]: I1213 14:23:59.386075 1408 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:23:59.386407 kubelet[1408]: I1213 14:23:59.386380 1408 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:23:59.388270 kubelet[1408]: E1213 14:23:59.388238 1408 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.89\" not found" Dec 13 14:23:59.483465 kubelet[1408]: I1213 14:23:59.483388 1408 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:23:59.484587 kubelet[1408]: I1213 14:23:59.484526 1408 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:23:59.484660 kubelet[1408]: I1213 14:23:59.484632 1408 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:23:59.484698 kubelet[1408]: I1213 14:23:59.484677 1408 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:23:59.484772 kubelet[1408]: E1213 14:23:59.484743 1408 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:23:59.486995 kubelet[1408]: I1213 14:23:59.486957 1408 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.89" Dec 13 14:23:59.494743 kubelet[1408]: I1213 14:23:59.494687 1408 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.89" Dec 13 14:23:59.494743 kubelet[1408]: E1213 14:23:59.494735 1408 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.89\": node \"10.0.0.89\" not found" Dec 13 14:23:59.499526 kubelet[1408]: I1213 14:23:59.499486 1408 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:23:59.500425 env[1205]: time="2024-12-13T14:23:59.500347871Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:23:59.500884 kubelet[1408]: I1213 14:23:59.500822 1408 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:23:59.510105 kubelet[1408]: E1213 14:23:59.509952 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:23:59.610689 kubelet[1408]: E1213 14:23:59.610598 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:23:59.622485 sudo[1301]: pam_unix(sudo:session): session closed for user root Dec 13 14:23:59.624111 sshd[1298]: pam_unix(sshd:session): session closed for user core Dec 13 14:23:59.626708 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:40090.service: Deactivated successfully. Dec 13 14:23:59.627502 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:23:59.628245 systemd-logind[1195]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:23:59.629281 systemd-logind[1195]: Removed session 5. 
Dec 13 14:23:59.710977 kubelet[1408]: E1213 14:23:59.710918 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:23:59.811999 kubelet[1408]: E1213 14:23:59.811932 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:23:59.913043 kubelet[1408]: E1213 14:23:59.912981 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.013755 kubelet[1408]: E1213 14:24:00.013690 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.114514 kubelet[1408]: E1213 14:24:00.114375 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.215158 kubelet[1408]: E1213 14:24:00.215095 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.256827 kubelet[1408]: I1213 14:24:00.256758 1408 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:24:00.257036 kubelet[1408]: W1213 14:24:00.257016 1408 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:00.257095 kubelet[1408]: W1213 14:24:00.257043 1408 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:00.257152 kubelet[1408]: W1213 14:24:00.257083 1408 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: 
k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:24:00.316452 kubelet[1408]: E1213 14:24:00.316381 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:00.316452 kubelet[1408]: E1213 14:24:00.316402 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.417002 kubelet[1408]: E1213 14:24:00.416864 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.517749 kubelet[1408]: E1213 14:24:00.517678 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.618485 kubelet[1408]: E1213 14:24:00.618391 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.719338 kubelet[1408]: E1213 14:24:00.719154 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.820129 kubelet[1408]: E1213 14:24:00.820045 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:00.920875 kubelet[1408]: E1213 14:24:00.920803 1408 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.89\" not found" Dec 13 14:24:01.316948 kubelet[1408]: I1213 14:24:01.316886 1408 apiserver.go:52] "Watching apiserver" Dec 13 14:24:01.316948 kubelet[1408]: E1213 14:24:01.316940 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:01.325315 systemd[1]: Created slice kubepods-besteffort-poda3ec2e2d_b85f_482b_a516_3861bd152580.slice. 
Dec 13 14:24:01.337190 kubelet[1408]: I1213 14:24:01.337153 1408 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:24:01.337496 systemd[1]: Created slice kubepods-burstable-pod9e96f16a_d9c6_4056_b4c4_b4a6d2d38754.slice. Dec 13 14:24:01.350099 kubelet[1408]: I1213 14:24:01.350014 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-config-path\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350099 kubelet[1408]: I1213 14:24:01.350069 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-net\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350099 kubelet[1408]: I1213 14:24:01.350102 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3ec2e2d-b85f-482b-a516-3861bd152580-kube-proxy\") pod \"kube-proxy-xqbjp\" (UID: \"a3ec2e2d-b85f-482b-a516-3861bd152580\") " pod="kube-system/kube-proxy-xqbjp" Dec 13 14:24:01.350322 kubelet[1408]: I1213 14:24:01.350122 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3ec2e2d-b85f-482b-a516-3861bd152580-xtables-lock\") pod \"kube-proxy-xqbjp\" (UID: \"a3ec2e2d-b85f-482b-a516-3861bd152580\") " pod="kube-system/kube-proxy-xqbjp" Dec 13 14:24:01.350322 kubelet[1408]: I1213 14:24:01.350177 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-clustermesh-secrets\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350322 kubelet[1408]: I1213 14:24:01.350206 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-kernel\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350322 kubelet[1408]: I1213 14:24:01.350226 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hubble-tls\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350322 kubelet[1408]: I1213 14:24:01.350251 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jpq\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-kube-api-access-n4jpq\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350484 kubelet[1408]: I1213 14:24:01.350354 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-cgroup\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350484 kubelet[1408]: I1213 14:24:01.350413 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-xtables-lock\") pod \"cilium-4vvxg\" (UID: 
\"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350484 kubelet[1408]: I1213 14:24:01.350462 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-run\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350589 kubelet[1408]: I1213 14:24:01.350491 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-lib-modules\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350589 kubelet[1408]: I1213 14:24:01.350506 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ec2e2d-b85f-482b-a516-3861bd152580-lib-modules\") pod \"kube-proxy-xqbjp\" (UID: \"a3ec2e2d-b85f-482b-a516-3861bd152580\") " pod="kube-system/kube-proxy-xqbjp" Dec 13 14:24:01.350589 kubelet[1408]: I1213 14:24:01.350532 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrsd7\" (UniqueName: \"kubernetes.io/projected/a3ec2e2d-b85f-482b-a516-3861bd152580-kube-api-access-hrsd7\") pod \"kube-proxy-xqbjp\" (UID: \"a3ec2e2d-b85f-482b-a516-3861bd152580\") " pod="kube-system/kube-proxy-xqbjp" Dec 13 14:24:01.350589 kubelet[1408]: I1213 14:24:01.350547 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cni-path\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350589 kubelet[1408]: I1213 
14:24:01.350560 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-etc-cni-netd\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350809 kubelet[1408]: I1213 14:24:01.350591 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-bpf-maps\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.350809 kubelet[1408]: I1213 14:24:01.350654 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hostproc\") pod \"cilium-4vvxg\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " pod="kube-system/cilium-4vvxg" Dec 13 14:24:01.451897 kubelet[1408]: I1213 14:24:01.451832 1408 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:24:01.637540 kubelet[1408]: E1213 14:24:01.636015 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:01.637813 env[1205]: time="2024-12-13T14:24:01.637197296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqbjp,Uid:a3ec2e2d-b85f-482b-a516-3861bd152580,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:01.649679 kubelet[1408]: E1213 14:24:01.649638 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:01.683502 env[1205]: time="2024-12-13T14:24:01.683435499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vvxg,Uid:9e96f16a-d9c6-4056-b4c4-b4a6d2d38754,Namespace:kube-system,Attempt:0,}" Dec 13 14:24:02.317555 kubelet[1408]: E1213 14:24:02.317463 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:02.783036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964160597.mount: Deactivated successfully. 
Dec 13 14:24:02.791005 env[1205]: time="2024-12-13T14:24:02.790940487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.794086 env[1205]: time="2024-12-13T14:24:02.794014962Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.798838 env[1205]: time="2024-12-13T14:24:02.798774929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.800119 env[1205]: time="2024-12-13T14:24:02.800085417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.802119 env[1205]: time="2024-12-13T14:24:02.802080288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.803462 env[1205]: time="2024-12-13T14:24:02.803411745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.805945 env[1205]: time="2024-12-13T14:24:02.805887748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.808675 env[1205]: time="2024-12-13T14:24:02.808636854Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:02.822545 env[1205]: time="2024-12-13T14:24:02.822397710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:02.822545 env[1205]: time="2024-12-13T14:24:02.822486316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:02.822545 env[1205]: time="2024-12-13T14:24:02.822510341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:02.822792 env[1205]: time="2024-12-13T14:24:02.822747697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c pid=1463 runtime=io.containerd.runc.v2 Dec 13 14:24:02.840946 systemd[1]: Started cri-containerd-8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c.scope. Dec 13 14:24:02.846522 env[1205]: time="2024-12-13T14:24:02.846326806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:02.846522 env[1205]: time="2024-12-13T14:24:02.846377170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:02.846522 env[1205]: time="2024-12-13T14:24:02.846389674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:02.849014 env[1205]: time="2024-12-13T14:24:02.847712855Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39895f2c552327198173f209252b9790122b8d5c076dbde1f79440aa6461f912 pid=1488 runtime=io.containerd.runc.v2 Dec 13 14:24:02.918405 env[1205]: time="2024-12-13T14:24:02.918352910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4vvxg,Uid:9e96f16a-d9c6-4056-b4c4-b4a6d2d38754,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\"" Dec 13 14:24:02.919865 kubelet[1408]: E1213 14:24:02.919835 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:02.921679 env[1205]: time="2024-12-13T14:24:02.921631279Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:24:02.926011 systemd[1]: Started cri-containerd-39895f2c552327198173f209252b9790122b8d5c076dbde1f79440aa6461f912.scope. 
Dec 13 14:24:02.953514 env[1205]: time="2024-12-13T14:24:02.953459708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xqbjp,Uid:a3ec2e2d-b85f-482b-a516-3861bd152580,Namespace:kube-system,Attempt:0,} returns sandbox id \"39895f2c552327198173f209252b9790122b8d5c076dbde1f79440aa6461f912\"" Dec 13 14:24:02.954055 kubelet[1408]: E1213 14:24:02.954029 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:03.317921 kubelet[1408]: E1213 14:24:03.317844 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:04.318592 kubelet[1408]: E1213 14:24:04.318519 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:05.319779 kubelet[1408]: E1213 14:24:05.319678 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:06.320674 kubelet[1408]: E1213 14:24:06.320508 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:07.321402 kubelet[1408]: E1213 14:24:07.321333 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:08.321790 kubelet[1408]: E1213 14:24:08.321709 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:09.322690 kubelet[1408]: E1213 14:24:09.322640 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:10.323131 kubelet[1408]: E1213 14:24:10.323047 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Dec 13 14:24:11.323847 kubelet[1408]: E1213 14:24:11.323796 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:12.324835 kubelet[1408]: E1213 14:24:12.324763 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:12.339954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257692470.mount: Deactivated successfully. Dec 13 14:24:13.325654 kubelet[1408]: E1213 14:24:13.325579 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:14.326121 kubelet[1408]: E1213 14:24:14.326041 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:15.326741 kubelet[1408]: E1213 14:24:15.326686 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:16.326847 kubelet[1408]: E1213 14:24:16.326792 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:17.327642 kubelet[1408]: E1213 14:24:17.327572 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:18.232817 env[1205]: time="2024-12-13T14:24:18.232724784Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:18.237095 env[1205]: time="2024-12-13T14:24:18.237037592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:18.241302 
env[1205]: time="2024-12-13T14:24:18.241197674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:18.241855 env[1205]: time="2024-12-13T14:24:18.241805874Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:24:18.243977 env[1205]: time="2024-12-13T14:24:18.243828898Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:24:18.244950 env[1205]: time="2024-12-13T14:24:18.244879990Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:24:18.269094 env[1205]: time="2024-12-13T14:24:18.269027314Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\"" Dec 13 14:24:18.270225 env[1205]: time="2024-12-13T14:24:18.270186218Z" level=info msg="StartContainer for \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\"" Dec 13 14:24:18.289630 systemd[1]: Started cri-containerd-56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06.scope. 
Dec 13 14:24:18.316573 env[1205]: time="2024-12-13T14:24:18.316514269Z" level=info msg="StartContainer for \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\" returns successfully" Dec 13 14:24:18.328645 kubelet[1408]: E1213 14:24:18.328572 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:18.329096 systemd[1]: cri-containerd-56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06.scope: Deactivated successfully. Dec 13 14:24:18.529570 kubelet[1408]: E1213 14:24:18.529509 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:19.260491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06-rootfs.mount: Deactivated successfully. Dec 13 14:24:19.315548 kubelet[1408]: E1213 14:24:19.315480 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:19.328880 kubelet[1408]: E1213 14:24:19.328787 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:19.530760 kubelet[1408]: E1213 14:24:19.530716 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:19.582093 env[1205]: time="2024-12-13T14:24:19.582035223Z" level=info msg="shim disconnected" id=56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06 Dec 13 14:24:19.582603 env[1205]: time="2024-12-13T14:24:19.582545330Z" level=warning msg="cleaning up after shim disconnected" id=56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06 namespace=k8s.io Dec 13 14:24:19.582603 env[1205]: 
time="2024-12-13T14:24:19.582569615Z" level=info msg="cleaning up dead shim" Dec 13 14:24:19.591031 env[1205]: time="2024-12-13T14:24:19.590992341Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1586 runtime=io.containerd.runc.v2\n" Dec 13 14:24:20.329433 kubelet[1408]: E1213 14:24:20.329384 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:20.533669 kubelet[1408]: E1213 14:24:20.533605 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:20.535378 env[1205]: time="2024-12-13T14:24:20.535341660Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:24:21.089409 env[1205]: time="2024-12-13T14:24:21.089344835Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\"" Dec 13 14:24:21.090123 env[1205]: time="2024-12-13T14:24:21.090070707Z" level=info msg="StartContainer for \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\"" Dec 13 14:24:21.162799 systemd[1]: Started cri-containerd-b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4.scope. Dec 13 14:24:21.310503 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:24:21.310805 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:24:21.311014 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:24:21.312938 systemd[1]: Starting systemd-sysctl.service... 
Dec 13 14:24:21.314663 systemd[1]: cri-containerd-b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4.scope: Deactivated successfully. Dec 13 14:24:21.317524 env[1205]: time="2024-12-13T14:24:21.317478530Z" level=info msg="StartContainer for \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\" returns successfully" Dec 13 14:24:21.325069 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:24:21.330346 kubelet[1408]: E1213 14:24:21.330298 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:21.462391 env[1205]: time="2024-12-13T14:24:21.462229975Z" level=info msg="shim disconnected" id=b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4 Dec 13 14:24:21.462391 env[1205]: time="2024-12-13T14:24:21.462295281Z" level=warning msg="cleaning up after shim disconnected" id=b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4 namespace=k8s.io Dec 13 14:24:21.462391 env[1205]: time="2024-12-13T14:24:21.462308776Z" level=info msg="cleaning up dead shim" Dec 13 14:24:21.474019 env[1205]: time="2024-12-13T14:24:21.473925838Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1651 runtime=io.containerd.runc.v2\n" Dec 13 14:24:21.537811 kubelet[1408]: E1213 14:24:21.537777 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:21.543879 env[1205]: time="2024-12-13T14:24:21.543814648Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:24:21.588158 env[1205]: time="2024-12-13T14:24:21.588002995Z" level=info msg="CreateContainer within sandbox 
\"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\"" Dec 13 14:24:21.589064 env[1205]: time="2024-12-13T14:24:21.588944270Z" level=info msg="StartContainer for \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\"" Dec 13 14:24:21.610056 systemd[1]: Started cri-containerd-529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5.scope. Dec 13 14:24:21.654841 systemd[1]: cri-containerd-529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5.scope: Deactivated successfully. Dec 13 14:24:21.657790 env[1205]: time="2024-12-13T14:24:21.657730246Z" level=info msg="StartContainer for \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\" returns successfully" Dec 13 14:24:21.668927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2225714132.mount: Deactivated successfully. Dec 13 14:24:21.680359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5-rootfs.mount: Deactivated successfully. 
Dec 13 14:24:21.736783 env[1205]: time="2024-12-13T14:24:21.736596086Z" level=info msg="shim disconnected" id=529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5 Dec 13 14:24:21.736783 env[1205]: time="2024-12-13T14:24:21.736681590Z" level=warning msg="cleaning up after shim disconnected" id=529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5 namespace=k8s.io Dec 13 14:24:21.736783 env[1205]: time="2024-12-13T14:24:21.736696989Z" level=info msg="cleaning up dead shim" Dec 13 14:24:21.753427 env[1205]: time="2024-12-13T14:24:21.753353043Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1707 runtime=io.containerd.runc.v2\n" Dec 13 14:24:22.330573 kubelet[1408]: E1213 14:24:22.330477 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:22.541635 kubelet[1408]: E1213 14:24:22.541564 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:22.543242 env[1205]: time="2024-12-13T14:24:22.543194731Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:24:23.147748 env[1205]: time="2024-12-13T14:24:23.147659745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:23.331298 kubelet[1408]: E1213 14:24:23.331246 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:23.647427 env[1205]: time="2024-12-13T14:24:23.647340787Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:23.812245 env[1205]: time="2024-12-13T14:24:23.812137138Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\"" Dec 13 14:24:23.812565 env[1205]: time="2024-12-13T14:24:23.812511143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:23.813076 env[1205]: time="2024-12-13T14:24:23.813036597Z" level=info msg="StartContainer for \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\"" Dec 13 14:24:23.843683 systemd[1]: Started cri-containerd-f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce.scope. Dec 13 14:24:23.876674 systemd[1]: cri-containerd-f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce.scope: Deactivated successfully. 
Dec 13 14:24:24.023925 env[1205]: time="2024-12-13T14:24:24.023827798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:24.024586 env[1205]: time="2024-12-13T14:24:24.024545539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 14:24:24.026777 env[1205]: time="2024-12-13T14:24:24.026733938Z" level=info msg="CreateContainer within sandbox \"39895f2c552327198173f209252b9790122b8d5c076dbde1f79440aa6461f912\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:24:24.163568 env[1205]: time="2024-12-13T14:24:24.163504294Z" level=info msg="StartContainer for \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\" returns successfully" Dec 13 14:24:24.180003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce-rootfs.mount: Deactivated successfully. Dec 13 14:24:24.332515 kubelet[1408]: E1213 14:24:24.332364 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:24.454267 env[1205]: time="2024-12-13T14:24:24.454202057Z" level=info msg="shim disconnected" id=f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce Dec 13 14:24:24.454267 env[1205]: time="2024-12-13T14:24:24.454256390Z" level=warning msg="cleaning up after shim disconnected" id=f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce namespace=k8s.io Dec 13 14:24:24.454267 env[1205]: time="2024-12-13T14:24:24.454266159Z" level=info msg="cleaning up dead shim" Dec 13 14:24:24.469312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134434935.mount: Deactivated successfully. 
Dec 13 14:24:24.471729 env[1205]: time="2024-12-13T14:24:24.471670679Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1760 runtime=io.containerd.runc.v2\n" Dec 13 14:24:24.481492 env[1205]: time="2024-12-13T14:24:24.481419677Z" level=info msg="CreateContainer within sandbox \"39895f2c552327198173f209252b9790122b8d5c076dbde1f79440aa6461f912\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04158cbdf865f5f9843418c81cbca8a170f1bb04dff31fd1afa6e38b0ed01931\"" Dec 13 14:24:24.481955 env[1205]: time="2024-12-13T14:24:24.481924542Z" level=info msg="StartContainer for \"04158cbdf865f5f9843418c81cbca8a170f1bb04dff31fd1afa6e38b0ed01931\"" Dec 13 14:24:24.503499 systemd[1]: Started cri-containerd-04158cbdf865f5f9843418c81cbca8a170f1bb04dff31fd1afa6e38b0ed01931.scope. Dec 13 14:24:24.548652 env[1205]: time="2024-12-13T14:24:24.545701064Z" level=info msg="StartContainer for \"04158cbdf865f5f9843418c81cbca8a170f1bb04dff31fd1afa6e38b0ed01931\" returns successfully" Dec 13 14:24:24.553274 kubelet[1408]: E1213 14:24:24.553226 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:24.557447 env[1205]: time="2024-12-13T14:24:24.557370049Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:24:24.581904 env[1205]: time="2024-12-13T14:24:24.581422155Z" level=info msg="CreateContainer within sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\"" Dec 13 14:24:24.582823 env[1205]: time="2024-12-13T14:24:24.582587100Z" level=info msg="StartContainer for 
\"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\"" Dec 13 14:24:24.608879 systemd[1]: Started cri-containerd-f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697.scope. Dec 13 14:24:24.742498 env[1205]: time="2024-12-13T14:24:24.742419351Z" level=info msg="StartContainer for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" returns successfully" Dec 13 14:24:24.898751 kubelet[1408]: I1213 14:24:24.897828 1408 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:24:25.168658 kernel: Initializing XFRM netlink socket Dec 13 14:24:25.333103 kubelet[1408]: E1213 14:24:25.333045 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:25.557878 kubelet[1408]: E1213 14:24:25.557840 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:25.558209 kubelet[1408]: E1213 14:24:25.557934 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:25.570885 kubelet[1408]: I1213 14:24:25.570019 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xqbjp" podStartSLOduration=5.499324696 podStartE2EDuration="26.569987585s" podCreationTimestamp="2024-12-13 14:23:59 +0000 UTC" firstStartedPulling="2024-12-13 14:24:02.954823927 +0000 UTC m=+4.367157621" lastFinishedPulling="2024-12-13 14:24:24.025486816 +0000 UTC m=+25.437820510" observedRunningTime="2024-12-13 14:24:25.569059525 +0000 UTC m=+26.981393239" watchObservedRunningTime="2024-12-13 14:24:25.569987585 +0000 UTC m=+26.982321289" Dec 13 14:24:26.102060 kubelet[1408]: I1213 14:24:26.101962 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-4vvxg" podStartSLOduration=11.780132027 podStartE2EDuration="27.10192652s" podCreationTimestamp="2024-12-13 14:23:59 +0000 UTC" firstStartedPulling="2024-12-13 14:24:02.921132152 +0000 UTC m=+4.333465847" lastFinishedPulling="2024-12-13 14:24:18.242926646 +0000 UTC m=+19.655260340" observedRunningTime="2024-12-13 14:24:25.589463438 +0000 UTC m=+27.001797152" watchObservedRunningTime="2024-12-13 14:24:26.10192652 +0000 UTC m=+27.514260214" Dec 13 14:24:26.109808 systemd[1]: Created slice kubepods-besteffort-pod1f27d2f3_7ba3_45e7_98e2_5fbabceef4c7.slice. Dec 13 14:24:26.222314 kubelet[1408]: I1213 14:24:26.222226 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvfq\" (UniqueName: \"kubernetes.io/projected/1f27d2f3-7ba3-45e7-98e2-5fbabceef4c7-kube-api-access-kmvfq\") pod \"nginx-deployment-8587fbcb89-lckjx\" (UID: \"1f27d2f3-7ba3-45e7-98e2-5fbabceef4c7\") " pod="default/nginx-deployment-8587fbcb89-lckjx" Dec 13 14:24:26.333569 kubelet[1408]: E1213 14:24:26.333498 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:26.414271 env[1205]: time="2024-12-13T14:24:26.414112008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lckjx,Uid:1f27d2f3-7ba3-45e7-98e2-5fbabceef4c7,Namespace:default,Attempt:0,}" Dec 13 14:24:26.559952 kubelet[1408]: E1213 14:24:26.559905 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:26.560138 kubelet[1408]: E1213 14:24:26.560106 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:26.872294 systemd-networkd[1028]: cilium_host: Link UP Dec 13 14:24:26.872519 
systemd-networkd[1028]: cilium_net: Link UP Dec 13 14:24:26.872523 systemd-networkd[1028]: cilium_net: Gained carrier Dec 13 14:24:26.872730 systemd-networkd[1028]: cilium_host: Gained carrier Dec 13 14:24:26.875689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:24:26.875807 systemd-networkd[1028]: cilium_host: Gained IPv6LL Dec 13 14:24:26.959593 systemd-networkd[1028]: cilium_vxlan: Link UP Dec 13 14:24:26.959604 systemd-networkd[1028]: cilium_vxlan: Gained carrier Dec 13 14:24:27.114829 systemd-networkd[1028]: cilium_net: Gained IPv6LL Dec 13 14:24:27.212662 kernel: NET: Registered PF_ALG protocol family Dec 13 14:24:27.333923 kubelet[1408]: E1213 14:24:27.333851 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:27.773852 systemd-networkd[1028]: lxc_health: Link UP Dec 13 14:24:27.782318 systemd-networkd[1028]: lxc_health: Gained carrier Dec 13 14:24:27.782736 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:24:27.990651 kernel: eth0: renamed from tmpa568d Dec 13 14:24:27.998369 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:24:27.998468 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca3635e608015: link becomes ready Dec 13 14:24:27.998741 systemd-networkd[1028]: lxca3635e608015: Link UP Dec 13 14:24:27.999396 systemd-networkd[1028]: lxca3635e608015: Gained carrier Dec 13 14:24:28.334421 kubelet[1408]: E1213 14:24:28.334339 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:28.970847 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Dec 13 14:24:29.034822 systemd-networkd[1028]: lxca3635e608015: Gained IPv6LL Dec 13 14:24:29.291771 systemd-networkd[1028]: lxc_health: Gained IPv6LL Dec 13 14:24:29.335121 kubelet[1408]: E1213 14:24:29.335061 1408 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:29.651985 kubelet[1408]: E1213 14:24:29.651676 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:30.335438 kubelet[1408]: E1213 14:24:30.335374 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:30.566593 kubelet[1408]: E1213 14:24:30.566553 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:31.336576 kubelet[1408]: E1213 14:24:31.336486 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:31.568022 kubelet[1408]: E1213 14:24:31.567965 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:24:31.794270 env[1205]: time="2024-12-13T14:24:31.794174937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:31.794270 env[1205]: time="2024-12-13T14:24:31.794211407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:31.794270 env[1205]: time="2024-12-13T14:24:31.794220844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:31.794711 env[1205]: time="2024-12-13T14:24:31.794370458Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a568dcbcf8cb5c720af51412750039018e25816868f679649651abcabff601ea pid=2458 runtime=io.containerd.runc.v2 Dec 13 14:24:31.813317 systemd[1]: Started cri-containerd-a568dcbcf8cb5c720af51412750039018e25816868f679649651abcabff601ea.scope. Dec 13 14:24:31.829534 systemd-resolved[1160]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:24:31.908886 env[1205]: time="2024-12-13T14:24:31.908827497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lckjx,Uid:1f27d2f3-7ba3-45e7-98e2-5fbabceef4c7,Namespace:default,Attempt:0,} returns sandbox id \"a568dcbcf8cb5c720af51412750039018e25816868f679649651abcabff601ea\"" Dec 13 14:24:31.910779 env[1205]: time="2024-12-13T14:24:31.910749664Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:24:32.336884 kubelet[1408]: E1213 14:24:32.336821 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:33.337733 kubelet[1408]: E1213 14:24:33.337666 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:33.517321 update_engine[1197]: I1213 14:24:33.517213 1197 update_attempter.cc:509] Updating boot flags... Dec 13 14:24:34.338828 kubelet[1408]: E1213 14:24:34.338755 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:35.216316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3446715001.mount: Deactivated successfully. 
Dec 13 14:24:35.339514 kubelet[1408]: E1213 14:24:35.339413 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:36.339930 kubelet[1408]: E1213 14:24:36.339807 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:37.341006 kubelet[1408]: E1213 14:24:37.340924 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:37.639103 env[1205]: time="2024-12-13T14:24:37.638934943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:37.641123 env[1205]: time="2024-12-13T14:24:37.641075809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:37.643067 env[1205]: time="2024-12-13T14:24:37.643017470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:37.645546 env[1205]: time="2024-12-13T14:24:37.645497387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:37.646223 env[1205]: time="2024-12-13T14:24:37.646169899Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:24:37.648796 env[1205]: time="2024-12-13T14:24:37.648756208Z" level=info msg="CreateContainer within sandbox 
\"a568dcbcf8cb5c720af51412750039018e25816868f679649651abcabff601ea\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:24:37.695132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891584853.mount: Deactivated successfully. Dec 13 14:24:37.708268 env[1205]: time="2024-12-13T14:24:37.708190924Z" level=info msg="CreateContainer within sandbox \"a568dcbcf8cb5c720af51412750039018e25816868f679649651abcabff601ea\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"645fca69af4b5dde92f888a65cda2ed16efdeb16cbcb75608237bab5fed0455f\"" Dec 13 14:24:37.709095 env[1205]: time="2024-12-13T14:24:37.709052622Z" level=info msg="StartContainer for \"645fca69af4b5dde92f888a65cda2ed16efdeb16cbcb75608237bab5fed0455f\"" Dec 13 14:24:37.769311 systemd[1]: Started cri-containerd-645fca69af4b5dde92f888a65cda2ed16efdeb16cbcb75608237bab5fed0455f.scope. Dec 13 14:24:37.829904 env[1205]: time="2024-12-13T14:24:37.829838398Z" level=info msg="StartContainer for \"645fca69af4b5dde92f888a65cda2ed16efdeb16cbcb75608237bab5fed0455f\" returns successfully" Dec 13 14:24:38.341885 kubelet[1408]: E1213 14:24:38.341798 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:39.315822 kubelet[1408]: E1213 14:24:39.315711 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:39.342455 kubelet[1408]: E1213 14:24:39.342382 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:40.342590 kubelet[1408]: E1213 14:24:40.342518 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:41.343553 kubelet[1408]: E1213 14:24:41.343495 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
14:24:42.344381 kubelet[1408]: E1213 14:24:42.344272 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:43.344936 kubelet[1408]: E1213 14:24:43.344851 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:43.966830 kubelet[1408]: I1213 14:24:43.966738 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-lckjx" podStartSLOduration=12.229600356 podStartE2EDuration="17.966710038s" podCreationTimestamp="2024-12-13 14:24:26 +0000 UTC" firstStartedPulling="2024-12-13 14:24:31.910308807 +0000 UTC m=+33.322642491" lastFinishedPulling="2024-12-13 14:24:37.647418489 +0000 UTC m=+39.059752173" observedRunningTime="2024-12-13 14:24:38.599082537 +0000 UTC m=+40.011416231" watchObservedRunningTime="2024-12-13 14:24:43.966710038 +0000 UTC m=+45.379043732" Dec 13 14:24:43.973040 systemd[1]: Created slice kubepods-besteffort-podd81d2675_51fe_44f0_a199_a09c1f6bdb1f.slice. 
Dec 13 14:24:44.135241 kubelet[1408]: I1213 14:24:44.135161 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76bf\" (UniqueName: \"kubernetes.io/projected/d81d2675-51fe-44f0-a199-a09c1f6bdb1f-kube-api-access-p76bf\") pod \"nfs-server-provisioner-0\" (UID: \"d81d2675-51fe-44f0-a199-a09c1f6bdb1f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:24:44.135241 kubelet[1408]: I1213 14:24:44.135230 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d81d2675-51fe-44f0-a199-a09c1f6bdb1f-data\") pod \"nfs-server-provisioner-0\" (UID: \"d81d2675-51fe-44f0-a199-a09c1f6bdb1f\") " pod="default/nfs-server-provisioner-0" Dec 13 14:24:44.276350 env[1205]: time="2024-12-13T14:24:44.276297815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d81d2675-51fe-44f0-a199-a09c1f6bdb1f,Namespace:default,Attempt:0,}" Dec 13 14:24:44.345964 kubelet[1408]: E1213 14:24:44.345603 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:44.358920 systemd-networkd[1028]: lxc5ba7c29e9dc4: Link UP Dec 13 14:24:44.367652 kernel: eth0: renamed from tmp559c0 Dec 13 14:24:44.375800 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:24:44.375935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5ba7c29e9dc4: link becomes ready Dec 13 14:24:44.376414 systemd-networkd[1028]: lxc5ba7c29e9dc4: Gained carrier Dec 13 14:24:44.566896 env[1205]: time="2024-12-13T14:24:44.566667537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:24:44.566896 env[1205]: time="2024-12-13T14:24:44.566720516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:24:44.567151 env[1205]: time="2024-12-13T14:24:44.566747006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:24:44.567382 env[1205]: time="2024-12-13T14:24:44.567266576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0 pid=2599 runtime=io.containerd.runc.v2 Dec 13 14:24:44.592385 systemd[1]: Started cri-containerd-559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0.scope. Dec 13 14:24:44.619816 systemd-resolved[1160]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:24:44.642830 env[1205]: time="2024-12-13T14:24:44.642784104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d81d2675-51fe-44f0-a199-a09c1f6bdb1f,Namespace:default,Attempt:0,} returns sandbox id \"559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0\"" Dec 13 14:24:44.644934 env[1205]: time="2024-12-13T14:24:44.644900443Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:24:45.247009 systemd[1]: run-containerd-runc-k8s.io-559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0-runc.sMiQtF.mount: Deactivated successfully. 
Dec 13 14:24:45.346256 kubelet[1408]: E1213 14:24:45.346170 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:45.610810 systemd-networkd[1028]: lxc5ba7c29e9dc4: Gained IPv6LL Dec 13 14:24:46.346808 kubelet[1408]: E1213 14:24:46.346732 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:47.346980 kubelet[1408]: E1213 14:24:47.346894 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:48.320823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3467078924.mount: Deactivated successfully. Dec 13 14:24:48.347869 kubelet[1408]: E1213 14:24:48.347817 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:49.348144 kubelet[1408]: E1213 14:24:49.348051 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:50.348398 kubelet[1408]: E1213 14:24:50.348333 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:51.251824 env[1205]: time="2024-12-13T14:24:51.251748633Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:51.253912 env[1205]: time="2024-12-13T14:24:51.253869884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:51.258503 env[1205]: time="2024-12-13T14:24:51.258429182Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:51.259227 env[1205]: time="2024-12-13T14:24:51.259185966Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:24:51.260245 env[1205]: time="2024-12-13T14:24:51.260202869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:24:51.262183 env[1205]: time="2024-12-13T14:24:51.262122771Z" level=info msg="CreateContainer within sandbox \"559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:24:51.276719 env[1205]: time="2024-12-13T14:24:51.276650033Z" level=info msg="CreateContainer within sandbox \"559c060d1914fe4738dd50bc90bab8fdf4d15fe2a0a39bd7d4312c9a035e1ce0\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4b3a3f93ab7aefc8be3ccb00145d113897abd8853f3bdf451bef955ede35cb38\"" Dec 13 14:24:51.277266 env[1205]: time="2024-12-13T14:24:51.277242166Z" level=info msg="StartContainer for \"4b3a3f93ab7aefc8be3ccb00145d113897abd8853f3bdf451bef955ede35cb38\"" Dec 13 14:24:51.293912 systemd[1]: Started cri-containerd-4b3a3f93ab7aefc8be3ccb00145d113897abd8853f3bdf451bef955ede35cb38.scope. 
Dec 13 14:24:51.329767 env[1205]: time="2024-12-13T14:24:51.329693938Z" level=info msg="StartContainer for \"4b3a3f93ab7aefc8be3ccb00145d113897abd8853f3bdf451bef955ede35cb38\" returns successfully" Dec 13 14:24:51.348787 kubelet[1408]: E1213 14:24:51.348724 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:52.349194 kubelet[1408]: E1213 14:24:52.349114 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:53.350307 kubelet[1408]: E1213 14:24:53.350236 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:54.351335 kubelet[1408]: E1213 14:24:54.351259 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:55.352439 kubelet[1408]: E1213 14:24:55.352334 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:56.352966 kubelet[1408]: E1213 14:24:56.352886 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:57.353728 kubelet[1408]: E1213 14:24:57.353648 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:58.354087 kubelet[1408]: E1213 14:24:58.354020 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:59.316294 kubelet[1408]: E1213 14:24:59.316204 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:24:59.354879 kubelet[1408]: E1213 14:24:59.354788 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:25:00.356021 kubelet[1408]: E1213 14:25:00.355923 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:01.356493 kubelet[1408]: E1213 14:25:01.356414 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:01.485815 kubelet[1408]: I1213 14:25:01.485704 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.869506935 podStartE2EDuration="18.485673018s" podCreationTimestamp="2024-12-13 14:24:43 +0000 UTC" firstStartedPulling="2024-12-13 14:24:44.644317644 +0000 UTC m=+46.056651338" lastFinishedPulling="2024-12-13 14:24:51.260483727 +0000 UTC m=+52.672817421" observedRunningTime="2024-12-13 14:24:51.638250194 +0000 UTC m=+53.050583888" watchObservedRunningTime="2024-12-13 14:25:01.485673018 +0000 UTC m=+62.898006712" Dec 13 14:25:01.494379 systemd[1]: Created slice kubepods-besteffort-podbc6361b2_e53e_4f78_855c_d8de57837d09.slice. 
Dec 13 14:25:01.631987 kubelet[1408]: I1213 14:25:01.631817 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b62129d2-479a-4c91-bd1a-9c2cd84606ad\" (UniqueName: \"kubernetes.io/nfs/bc6361b2-e53e-4f78-855c-d8de57837d09-pvc-b62129d2-479a-4c91-bd1a-9c2cd84606ad\") pod \"test-pod-1\" (UID: \"bc6361b2-e53e-4f78-855c-d8de57837d09\") " pod="default/test-pod-1" Dec 13 14:25:01.631987 kubelet[1408]: I1213 14:25:01.631880 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfmcf\" (UniqueName: \"kubernetes.io/projected/bc6361b2-e53e-4f78-855c-d8de57837d09-kube-api-access-wfmcf\") pod \"test-pod-1\" (UID: \"bc6361b2-e53e-4f78-855c-d8de57837d09\") " pod="default/test-pod-1" Dec 13 14:25:01.759982 kernel: FS-Cache: Loaded Dec 13 14:25:01.810034 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:25:01.810232 kernel: RPC: Registered udp transport module. Dec 13 14:25:01.810262 kernel: RPC: Registered tcp transport module. Dec 13 14:25:01.810894 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:25:01.881655 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:25:02.077989 kernel: NFS: Registering the id_resolver key type Dec 13 14:25:02.078201 kernel: Key type id_resolver registered Dec 13 14:25:02.078240 kernel: Key type id_legacy registered Dec 13 14:25:02.107225 nfsidmap[2725]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:25:02.111175 nfsidmap[2728]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:25:02.357229 kubelet[1408]: E1213 14:25:02.357076 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:02.397399 env[1205]: time="2024-12-13T14:25:02.397343897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bc6361b2-e53e-4f78-855c-d8de57837d09,Namespace:default,Attempt:0,}" Dec 13 14:25:02.528045 systemd-networkd[1028]: lxcf152f2038038: Link UP Dec 13 14:25:02.537638 kernel: eth0: renamed from tmpca179 Dec 13 14:25:02.546052 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:02.546136 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf152f2038038: link becomes ready Dec 13 14:25:02.548470 systemd-networkd[1028]: lxcf152f2038038: Gained carrier Dec 13 14:25:02.921418 env[1205]: time="2024-12-13T14:25:02.921335450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:02.921418 env[1205]: time="2024-12-13T14:25:02.921375766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:02.921418 env[1205]: time="2024-12-13T14:25:02.921386536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:02.921789 env[1205]: time="2024-12-13T14:25:02.921565934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca17905d7c9df5b9cb13b8367af1720c37642a9528b5a5245275307aa2c2223a pid=2762 runtime=io.containerd.runc.v2 Dec 13 14:25:02.941257 systemd[1]: Started cri-containerd-ca17905d7c9df5b9cb13b8367af1720c37642a9528b5a5245275307aa2c2223a.scope. Dec 13 14:25:02.952592 systemd-resolved[1160]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:02.977439 env[1205]: time="2024-12-13T14:25:02.977348461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bc6361b2-e53e-4f78-855c-d8de57837d09,Namespace:default,Attempt:0,} returns sandbox id \"ca17905d7c9df5b9cb13b8367af1720c37642a9528b5a5245275307aa2c2223a\"" Dec 13 14:25:02.979033 env[1205]: time="2024-12-13T14:25:02.978938548Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:25:03.357878 kubelet[1408]: E1213 14:25:03.357786 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:03.682002 env[1205]: time="2024-12-13T14:25:03.681840412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:03.699195 env[1205]: time="2024-12-13T14:25:03.699147416Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:03.711424 env[1205]: time="2024-12-13T14:25:03.711346578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
14:25:03.731045 env[1205]: time="2024-12-13T14:25:03.730994239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:03.731752 env[1205]: time="2024-12-13T14:25:03.731706407Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:25:03.734646 env[1205]: time="2024-12-13T14:25:03.734604891Z" level=info msg="CreateContainer within sandbox \"ca17905d7c9df5b9cb13b8367af1720c37642a9528b5a5245275307aa2c2223a\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:25:03.892263 env[1205]: time="2024-12-13T14:25:03.892185038Z" level=info msg="CreateContainer within sandbox \"ca17905d7c9df5b9cb13b8367af1720c37642a9528b5a5245275307aa2c2223a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"bb6127cdf2600d2339120d4a7f9972a0f66470065f62e9cfbd7d5b783db7d435\"" Dec 13 14:25:03.892884 env[1205]: time="2024-12-13T14:25:03.892834357Z" level=info msg="StartContainer for \"bb6127cdf2600d2339120d4a7f9972a0f66470065f62e9cfbd7d5b783db7d435\"" Dec 13 14:25:03.909323 systemd[1]: Started cri-containerd-bb6127cdf2600d2339120d4a7f9972a0f66470065f62e9cfbd7d5b783db7d435.scope. 
Dec 13 14:25:03.955126 env[1205]: time="2024-12-13T14:25:03.954482351Z" level=info msg="StartContainer for \"bb6127cdf2600d2339120d4a7f9972a0f66470065f62e9cfbd7d5b783db7d435\" returns successfully" Dec 13 14:25:04.298867 systemd-networkd[1028]: lxcf152f2038038: Gained IPv6LL Dec 13 14:25:04.358573 kubelet[1408]: E1213 14:25:04.358508 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:04.737561 kubelet[1408]: I1213 14:25:04.737245 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.982736321 podStartE2EDuration="20.737211367s" podCreationTimestamp="2024-12-13 14:24:44 +0000 UTC" firstStartedPulling="2024-12-13 14:25:02.978673029 +0000 UTC m=+64.391006723" lastFinishedPulling="2024-12-13 14:25:03.733148074 +0000 UTC m=+65.145481769" observedRunningTime="2024-12-13 14:25:04.737138178 +0000 UTC m=+66.149471872" watchObservedRunningTime="2024-12-13 14:25:04.737211367 +0000 UTC m=+66.149545061" Dec 13 14:25:05.358930 kubelet[1408]: E1213 14:25:05.358839 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:06.359999 kubelet[1408]: E1213 14:25:06.359878 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:06.604351 systemd[1]: run-containerd-runc-k8s.io-f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697-runc.MmD8PR.mount: Deactivated successfully. 
Dec 13 14:25:06.624408 env[1205]: time="2024-12-13T14:25:06.624263235Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:25:06.629907 env[1205]: time="2024-12-13T14:25:06.629865634Z" level=info msg="StopContainer for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" with timeout 2 (s)" Dec 13 14:25:06.630119 env[1205]: time="2024-12-13T14:25:06.630092661Z" level=info msg="Stop container \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" with signal terminated" Dec 13 14:25:06.637024 systemd-networkd[1028]: lxc_health: Link DOWN Dec 13 14:25:06.637035 systemd-networkd[1028]: lxc_health: Lost carrier Dec 13 14:25:06.679162 systemd[1]: cri-containerd-f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697.scope: Deactivated successfully. Dec 13 14:25:06.679649 systemd[1]: cri-containerd-f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697.scope: Consumed 6.878s CPU time. Dec 13 14:25:06.699162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697-rootfs.mount: Deactivated successfully. 
Dec 13 14:25:06.752402 env[1205]: time="2024-12-13T14:25:06.752330831Z" level=info msg="shim disconnected" id=f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697 Dec 13 14:25:06.752710 env[1205]: time="2024-12-13T14:25:06.752402186Z" level=warning msg="cleaning up after shim disconnected" id=f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697 namespace=k8s.io Dec 13 14:25:06.752710 env[1205]: time="2024-12-13T14:25:06.752429857Z" level=info msg="cleaning up dead shim" Dec 13 14:25:06.760668 env[1205]: time="2024-12-13T14:25:06.760595460Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2891 runtime=io.containerd.runc.v2\n" Dec 13 14:25:06.767471 env[1205]: time="2024-12-13T14:25:06.767348840Z" level=info msg="StopContainer for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" returns successfully" Dec 13 14:25:06.768421 env[1205]: time="2024-12-13T14:25:06.768267665Z" level=info msg="StopPodSandbox for \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\"" Dec 13 14:25:06.768421 env[1205]: time="2024-12-13T14:25:06.768383071Z" level=info msg="Container to stop \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:06.768421 env[1205]: time="2024-12-13T14:25:06.768414691Z" level=info msg="Container to stop \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:06.768421 env[1205]: time="2024-12-13T14:25:06.768435130Z" level=info msg="Container to stop \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:06.770946 env[1205]: time="2024-12-13T14:25:06.768456310Z" level=info msg="Container to stop 
\"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:06.770946 env[1205]: time="2024-12-13T14:25:06.768479383Z" level=info msg="Container to stop \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:06.770747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c-shm.mount: Deactivated successfully. Dec 13 14:25:06.775232 systemd[1]: cri-containerd-8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c.scope: Deactivated successfully. Dec 13 14:25:06.844201 env[1205]: time="2024-12-13T14:25:06.844131360Z" level=info msg="shim disconnected" id=8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c Dec 13 14:25:06.844201 env[1205]: time="2024-12-13T14:25:06.844206220Z" level=warning msg="cleaning up after shim disconnected" id=8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c namespace=k8s.io Dec 13 14:25:06.844201 env[1205]: time="2024-12-13T14:25:06.844215890Z" level=info msg="cleaning up dead shim" Dec 13 14:25:06.850856 env[1205]: time="2024-12-13T14:25:06.850809089Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" Dec 13 14:25:06.851162 env[1205]: time="2024-12-13T14:25:06.851125382Z" level=info msg="TearDown network for sandbox \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" successfully" Dec 13 14:25:06.851162 env[1205]: time="2024-12-13T14:25:06.851152092Z" level=info msg="StopPodSandbox for \"8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c\" returns successfully" Dec 13 14:25:06.967216 kubelet[1408]: I1213 14:25:06.967030 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.967216 kubelet[1408]: I1213 14:25:06.967080 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-kernel\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967216 kubelet[1408]: I1213 14:25:06.967182 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-xtables-lock\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967216 kubelet[1408]: I1213 14:25:06.967215 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hostproc\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967684 kubelet[1408]: I1213 14:25:06.967246 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hubble-tls\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967684 kubelet[1408]: I1213 14:25:06.967297 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-clustermesh-secrets\") pod 
\"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967684 kubelet[1408]: I1213 14:25:06.967283 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.967869 kubelet[1408]: I1213 14:25:06.967321 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-bpf-maps\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967937 kubelet[1408]: I1213 14:25:06.967918 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-net\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.967997 kubelet[1408]: I1213 14:25:06.967759 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.968032 kubelet[1408]: I1213 14:25:06.967784 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.968063 kubelet[1408]: I1213 14:25:06.968049 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-config-path\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.968340 kubelet[1408]: I1213 14:25:06.968103 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.968688 kubelet[1408]: I1213 14:25:06.968595 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.969170 kubelet[1408]: I1213 14:25:06.968069 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-cgroup\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.969309 kubelet[1408]: I1213 14:25:06.969286 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-lib-modules\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.969482 kubelet[1408]: I1213 14:25:06.969464 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-etc-cni-netd\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.969722 kubelet[1408]: I1213 14:25:06.969702 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4jpq\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-kube-api-access-n4jpq\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.969866 kubelet[1408]: I1213 14:25:06.969842 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-run\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.969985 kubelet[1408]: I1213 14:25:06.969963 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cni-path\") pod \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\" (UID: \"9e96f16a-d9c6-4056-b4c4-b4a6d2d38754\") " Dec 13 14:25:06.970134 kubelet[1408]: I1213 14:25:06.970112 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-cgroup\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970247 kubelet[1408]: I1213 14:25:06.970227 1408 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-net\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970350 kubelet[1408]: I1213 14:25:06.970331 1408 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-xtables-lock\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970498 kubelet[1408]: I1213 14:25:06.970480 1408 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hostproc\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970673 kubelet[1408]: I1213 14:25:06.970656 1408 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-host-proc-sys-kernel\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970791 kubelet[1408]: I1213 14:25:06.970774 1408 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-bpf-maps\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:06.970898 kubelet[1408]: I1213 14:25:06.969420 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-lib-modules" 
(OuterVolumeSpecName: "lib-modules") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.971003 kubelet[1408]: I1213 14:25:06.969655 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.971113 kubelet[1408]: I1213 14:25:06.970245 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:25:06.971233 kubelet[1408]: I1213 14:25:06.970452 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.971347 kubelet[1408]: I1213 14:25:06.970591 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:06.971816 kubelet[1408]: I1213 14:25:06.971790 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:25:06.972071 kubelet[1408]: I1213 14:25:06.972042 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:25:06.973730 kubelet[1408]: I1213 14:25:06.973646 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-kube-api-access-n4jpq" (OuterVolumeSpecName: "kube-api-access-n4jpq") pod "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" (UID: "9e96f16a-d9c6-4056-b4c4-b4a6d2d38754"). InnerVolumeSpecName "kube-api-access-n4jpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071284 1408 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-lib-modules\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071328 1408 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-etc-cni-netd\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071340 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-config-path\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071352 1408 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cni-path\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071360 1408 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n4jpq\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-kube-api-access-n4jpq\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071344 kubelet[1408]: I1213 14:25:07.071368 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-cilium-run\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071843 kubelet[1408]: I1213 14:25:07.071376 1408 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-hubble-tls\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.071843 kubelet[1408]: I1213 14:25:07.071384 
1408 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754-clustermesh-secrets\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:07.360331 kubelet[1408]: E1213 14:25:07.360265 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:07.491349 systemd[1]: Removed slice kubepods-burstable-pod9e96f16a_d9c6_4056_b4c4_b4a6d2d38754.slice. Dec 13 14:25:07.491433 systemd[1]: kubepods-burstable-pod9e96f16a_d9c6_4056_b4c4_b4a6d2d38754.slice: Consumed 7.074s CPU time. Dec 13 14:25:07.600040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a596e9ce4e23e2f233437481d113c83d9084805db87383a0792c596120bff8c-rootfs.mount: Deactivated successfully. Dec 13 14:25:07.600146 systemd[1]: var-lib-kubelet-pods-9e96f16a\x2dd9c6\x2d4056\x2db4c4\x2db4a6d2d38754-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn4jpq.mount: Deactivated successfully. Dec 13 14:25:07.600221 systemd[1]: var-lib-kubelet-pods-9e96f16a\x2dd9c6\x2d4056\x2db4c4\x2db4a6d2d38754-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:25:07.600290 systemd[1]: var-lib-kubelet-pods-9e96f16a\x2dd9c6\x2d4056\x2db4c4\x2db4a6d2d38754-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:25:07.648456 kubelet[1408]: I1213 14:25:07.648341 1408 scope.go:117] "RemoveContainer" containerID="f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697" Dec 13 14:25:07.650066 env[1205]: time="2024-12-13T14:25:07.650031280Z" level=info msg="RemoveContainer for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\"" Dec 13 14:25:07.775545 env[1205]: time="2024-12-13T14:25:07.775465994Z" level=info msg="RemoveContainer for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" returns successfully" Dec 13 14:25:07.775942 kubelet[1408]: I1213 14:25:07.775874 1408 scope.go:117] "RemoveContainer" containerID="f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce" Dec 13 14:25:07.777407 env[1205]: time="2024-12-13T14:25:07.777371221Z" level=info msg="RemoveContainer for \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\"" Dec 13 14:25:07.823427 env[1205]: time="2024-12-13T14:25:07.823354847Z" level=info msg="RemoveContainer for \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\" returns successfully" Dec 13 14:25:07.823797 kubelet[1408]: I1213 14:25:07.823752 1408 scope.go:117] "RemoveContainer" containerID="529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5" Dec 13 14:25:07.825013 env[1205]: time="2024-12-13T14:25:07.824970640Z" level=info msg="RemoveContainer for \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\"" Dec 13 14:25:07.862949 env[1205]: time="2024-12-13T14:25:07.862868533Z" level=info msg="RemoveContainer for \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\" returns successfully" Dec 13 14:25:07.863280 kubelet[1408]: I1213 14:25:07.863246 1408 scope.go:117] "RemoveContainer" containerID="b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4" Dec 13 14:25:07.865029 env[1205]: time="2024-12-13T14:25:07.864969047Z" level=info msg="RemoveContainer for 
\"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\"" Dec 13 14:25:07.873387 env[1205]: time="2024-12-13T14:25:07.873321639Z" level=info msg="RemoveContainer for \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\" returns successfully" Dec 13 14:25:07.873741 kubelet[1408]: I1213 14:25:07.873693 1408 scope.go:117] "RemoveContainer" containerID="56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06" Dec 13 14:25:07.875158 env[1205]: time="2024-12-13T14:25:07.875098034Z" level=info msg="RemoveContainer for \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\"" Dec 13 14:25:07.888114 env[1205]: time="2024-12-13T14:25:07.888015546Z" level=info msg="RemoveContainer for \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\" returns successfully" Dec 13 14:25:07.888477 kubelet[1408]: I1213 14:25:07.888438 1408 scope.go:117] "RemoveContainer" containerID="f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697" Dec 13 14:25:07.888901 env[1205]: time="2024-12-13T14:25:07.888790381Z" level=error msg="ContainerStatus for \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\": not found" Dec 13 14:25:07.889060 kubelet[1408]: E1213 14:25:07.889031 1408 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\": not found" containerID="f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697" Dec 13 14:25:07.889207 kubelet[1408]: I1213 14:25:07.889077 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697"} err="failed to get container status 
\"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0e6271b895a2b00fe6d59b9c0ce6ff08e1c7602da0fd07602e7c4be25aef697\": not found" Dec 13 14:25:07.889357 kubelet[1408]: I1213 14:25:07.889209 1408 scope.go:117] "RemoveContainer" containerID="f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce" Dec 13 14:25:07.889458 env[1205]: time="2024-12-13T14:25:07.889397952Z" level=error msg="ContainerStatus for \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\": not found" Dec 13 14:25:07.889718 kubelet[1408]: E1213 14:25:07.889668 1408 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\": not found" containerID="f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce" Dec 13 14:25:07.889845 kubelet[1408]: I1213 14:25:07.889754 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce"} err="failed to get container status \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\": rpc error: code = NotFound desc = an error occurred when try to find container \"f88c9a14f968e515984a33f046e0ba59fb1be197b33fca5c1aebc0a147abffce\": not found" Dec 13 14:25:07.889845 kubelet[1408]: I1213 14:25:07.889796 1408 scope.go:117] "RemoveContainer" containerID="529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5" Dec 13 14:25:07.890132 env[1205]: time="2024-12-13T14:25:07.890080233Z" level=error msg="ContainerStatus for \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\": not found" Dec 13 14:25:07.890262 kubelet[1408]: E1213 14:25:07.890238 1408 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\": not found" containerID="529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5" Dec 13 14:25:07.890312 kubelet[1408]: I1213 14:25:07.890267 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5"} err="failed to get container status \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"529a8e1ee1e1c38eb1324ef552f58b3c0d2c8a47b8280bf4d0435f3c718c13c5\": not found" Dec 13 14:25:07.890312 kubelet[1408]: I1213 14:25:07.890288 1408 scope.go:117] "RemoveContainer" containerID="b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4" Dec 13 14:25:07.890763 env[1205]: time="2024-12-13T14:25:07.890670010Z" level=error msg="ContainerStatus for \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\": not found" Dec 13 14:25:07.890904 kubelet[1408]: E1213 14:25:07.890866 1408 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\": not found" containerID="b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4" Dec 13 14:25:07.890904 kubelet[1408]: I1213 14:25:07.890893 1408 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4"} err="failed to get container status \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"b20e029aa84f05f9178437e9be6ada9ef95420326cadf05dd97aafc1fed087b4\": not found" Dec 13 14:25:07.891035 kubelet[1408]: I1213 14:25:07.890914 1408 scope.go:117] "RemoveContainer" containerID="56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06" Dec 13 14:25:07.891196 env[1205]: time="2024-12-13T14:25:07.891117621Z" level=error msg="ContainerStatus for \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\": not found" Dec 13 14:25:07.891411 kubelet[1408]: E1213 14:25:07.891262 1408 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\": not found" containerID="56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06" Dec 13 14:25:07.891411 kubelet[1408]: I1213 14:25:07.891283 1408 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06"} err="failed to get container status \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\": rpc error: code = NotFound desc = an error occurred when try to find container \"56e6f84fa3811b1913df1ccc1512bb62775a4297948359815425870f477e5b06\": not found" Dec 13 14:25:08.361374 kubelet[1408]: E1213 14:25:08.361315 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:25:09.359873 kubelet[1408]: E1213 14:25:09.359805 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="mount-cgroup" Dec 13 14:25:09.359873 kubelet[1408]: E1213 14:25:09.359839 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="apply-sysctl-overwrites" Dec 13 14:25:09.359873 kubelet[1408]: E1213 14:25:09.359845 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="mount-bpf-fs" Dec 13 14:25:09.359873 kubelet[1408]: E1213 14:25:09.359850 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="clean-cilium-state" Dec 13 14:25:09.359873 kubelet[1408]: E1213 14:25:09.359855 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="cilium-agent" Dec 13 14:25:09.359873 kubelet[1408]: I1213 14:25:09.359880 1408 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" containerName="cilium-agent" Dec 13 14:25:09.361504 kubelet[1408]: E1213 14:25:09.361478 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:09.365304 systemd[1]: Created slice kubepods-besteffort-podd2c6e90f_b0c6_4bec_9ed1_6ef1a78c01e4.slice. Dec 13 14:25:09.369561 systemd[1]: Created slice kubepods-burstable-pod1b8babe2_42d3_48e4_ba74_480679439fd8.slice. 
Dec 13 14:25:09.382484 kubelet[1408]: I1213 14:25:09.382443 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qhsg\" (UniqueName: \"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-kube-api-access-5qhsg\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382670 kubelet[1408]: I1213 14:25:09.382496 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-bpf-maps\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382670 kubelet[1408]: I1213 14:25:09.382544 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cni-path\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382670 kubelet[1408]: I1213 14:25:09.382594 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-etc-cni-netd\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382670 kubelet[1408]: I1213 14:25:09.382642 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-net\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382851 kubelet[1408]: I1213 14:25:09.382671 1408 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkhxz\" (UniqueName: \"kubernetes.io/projected/d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4-kube-api-access-mkhxz\") pod \"cilium-operator-5d85765b45-ff4zk\" (UID: \"d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4\") " pod="kube-system/cilium-operator-5d85765b45-ff4zk" Dec 13 14:25:09.382851 kubelet[1408]: I1213 14:25:09.382695 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-hostproc\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382851 kubelet[1408]: I1213 14:25:09.382730 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-xtables-lock\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382851 kubelet[1408]: I1213 14:25:09.382755 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-ipsec-secrets\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.382851 kubelet[1408]: I1213 14:25:09.382777 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-run\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383018 kubelet[1408]: I1213 14:25:09.382800 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-cgroup\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383018 kubelet[1408]: I1213 14:25:09.382823 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-clustermesh-secrets\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383018 kubelet[1408]: I1213 14:25:09.382844 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-config-path\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383018 kubelet[1408]: I1213 14:25:09.382863 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-hubble-tls\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383018 kubelet[1408]: I1213 14:25:09.382884 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4-cilium-config-path\") pod \"cilium-operator-5d85765b45-ff4zk\" (UID: \"d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4\") " pod="kube-system/cilium-operator-5d85765b45-ff4zk" Dec 13 14:25:09.383166 kubelet[1408]: I1213 14:25:09.382904 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-lib-modules\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.383166 kubelet[1408]: I1213 14:25:09.382925 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-kernel\") pod \"cilium-6p9z6\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " pod="kube-system/cilium-6p9z6" Dec 13 14:25:09.399667 kubelet[1408]: E1213 14:25:09.399596 1408 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:25:09.492593 kubelet[1408]: I1213 14:25:09.492535 1408 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e96f16a-d9c6-4056-b4c4-b4a6d2d38754" path="/var/lib/kubelet/pods/9e96f16a-d9c6-4056-b4c4-b4a6d2d38754/volumes" Dec 13 14:25:09.667490 kubelet[1408]: E1213 14:25:09.667317 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:09.667948 env[1205]: time="2024-12-13T14:25:09.667904854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ff4zk,Uid:d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4,Namespace:kube-system,Attempt:0,}" Dec 13 14:25:09.680811 kubelet[1408]: E1213 14:25:09.680773 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:09.681313 env[1205]: time="2024-12-13T14:25:09.681261298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6p9z6,Uid:1b8babe2-42d3-48e4-ba74-480679439fd8,Namespace:kube-system,Attempt:0,}" Dec 
13 14:25:09.749709 env[1205]: time="2024-12-13T14:25:09.749562887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:09.750024 env[1205]: time="2024-12-13T14:25:09.749975582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:09.750250 env[1205]: time="2024-12-13T14:25:09.750186057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:09.750769 env[1205]: time="2024-12-13T14:25:09.750679934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1eba7e76068df9096d77c515d3404a7ef279926f586b57939f38d3421c0a9ab pid=2949 runtime=io.containerd.runc.v2 Dec 13 14:25:09.752807 env[1205]: time="2024-12-13T14:25:09.752687313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:09.752931 env[1205]: time="2024-12-13T14:25:09.752786088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:09.752931 env[1205]: time="2024-12-13T14:25:09.752798621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:09.753049 env[1205]: time="2024-12-13T14:25:09.752963722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c pid=2966 runtime=io.containerd.runc.v2 Dec 13 14:25:09.764131 systemd[1]: Started cri-containerd-c1eba7e76068df9096d77c515d3404a7ef279926f586b57939f38d3421c0a9ab.scope. 
Dec 13 14:25:09.766496 systemd[1]: Started cri-containerd-2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c.scope. Dec 13 14:25:09.788785 env[1205]: time="2024-12-13T14:25:09.788742527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6p9z6,Uid:1b8babe2-42d3-48e4-ba74-480679439fd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\"" Dec 13 14:25:09.790221 kubelet[1408]: E1213 14:25:09.789712 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:09.791855 env[1205]: time="2024-12-13T14:25:09.791810025Z" level=info msg="CreateContainer within sandbox \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:25:09.808488 env[1205]: time="2024-12-13T14:25:09.808433072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ff4zk,Uid:d2c6e90f-b0c6-4bec-9ed1-6ef1a78c01e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1eba7e76068df9096d77c515d3404a7ef279926f586b57939f38d3421c0a9ab\"" Dec 13 14:25:09.809315 kubelet[1408]: E1213 14:25:09.809291 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:09.810121 env[1205]: time="2024-12-13T14:25:09.810093058Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:25:09.810594 env[1205]: time="2024-12-13T14:25:09.810566837Z" level=info msg="CreateContainer within sandbox \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\"" Dec 13 14:25:09.810896 env[1205]: time="2024-12-13T14:25:09.810872811Z" level=info msg="StartContainer for \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\"" Dec 13 14:25:09.825676 systemd[1]: Started cri-containerd-54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214.scope. Dec 13 14:25:09.835643 systemd[1]: cri-containerd-54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214.scope: Deactivated successfully. Dec 13 14:25:09.835883 systemd[1]: Stopped cri-containerd-54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214.scope. Dec 13 14:25:09.855083 env[1205]: time="2024-12-13T14:25:09.855023824Z" level=info msg="shim disconnected" id=54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214 Dec 13 14:25:09.855083 env[1205]: time="2024-12-13T14:25:09.855079819Z" level=warning msg="cleaning up after shim disconnected" id=54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214 namespace=k8s.io Dec 13 14:25:09.855083 env[1205]: time="2024-12-13T14:25:09.855089627Z" level=info msg="cleaning up dead shim" Dec 13 14:25:09.862009 env[1205]: time="2024-12-13T14:25:09.861943403Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3049 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:25:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:25:09.862379 env[1205]: time="2024-12-13T14:25:09.862247353Z" level=error msg="copy shim log" error="read /proc/self/fd/85: file already closed" Dec 13 14:25:09.862656 env[1205]: time="2024-12-13T14:25:09.862579056Z" level=error msg="Failed to pipe stderr of container 
\"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\"" error="reading from a closed fifo" Dec 13 14:25:09.862812 env[1205]: time="2024-12-13T14:25:09.862776968Z" level=error msg="Failed to pipe stdout of container \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\"" error="reading from a closed fifo" Dec 13 14:25:09.865850 env[1205]: time="2024-12-13T14:25:09.865796336Z" level=error msg="StartContainer for \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:25:09.866063 kubelet[1408]: E1213 14:25:09.866026 1408 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214" Dec 13 14:25:09.867134 kubelet[1408]: E1213 14:25:09.867101 1408 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:25:09.867134 kubelet[1408]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:25:09.867134 kubelet[1408]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:25:09.867134 kubelet[1408]: rm /hostbin/cilium-mount Dec 13 14:25:09.867260 kubelet[1408]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qhsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-6p9z6_kube-system(1b8babe2-42d3-48e4-ba74-480679439fd8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:25:09.867260 kubelet[1408]: > logger="UnhandledError" Dec 13 14:25:09.868277 kubelet[1408]: E1213 14:25:09.868239 1408 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-6p9z6" podUID="1b8babe2-42d3-48e4-ba74-480679439fd8" Dec 13 14:25:10.362456 kubelet[1408]: E1213 14:25:10.362383 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:10.656258 env[1205]: time="2024-12-13T14:25:10.656115160Z" level=info msg="StopPodSandbox for \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\"" Dec 13 14:25:10.656258 env[1205]: time="2024-12-13T14:25:10.656215228Z" level=info msg="Container to stop \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:25:10.658200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c-shm.mount: Deactivated successfully. Dec 13 14:25:10.662151 systemd[1]: cri-containerd-2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c.scope: Deactivated successfully. Dec 13 14:25:10.682010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c-rootfs.mount: Deactivated successfully. 
Dec 13 14:25:10.687205 env[1205]: time="2024-12-13T14:25:10.687143614Z" level=info msg="shim disconnected" id=2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c Dec 13 14:25:10.687205 env[1205]: time="2024-12-13T14:25:10.687203939Z" level=warning msg="cleaning up after shim disconnected" id=2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c namespace=k8s.io Dec 13 14:25:10.687604 env[1205]: time="2024-12-13T14:25:10.687216332Z" level=info msg="cleaning up dead shim" Dec 13 14:25:10.694181 env[1205]: time="2024-12-13T14:25:10.694131723Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3080 runtime=io.containerd.runc.v2\n" Dec 13 14:25:10.694492 env[1205]: time="2024-12-13T14:25:10.694454068Z" level=info msg="TearDown network for sandbox \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\" successfully" Dec 13 14:25:10.694492 env[1205]: time="2024-12-13T14:25:10.694483674Z" level=info msg="StopPodSandbox for \"2a4deae08b28b93161e45dc8c795fff588d1d6efafa4924e814852879dce270c\" returns successfully" Dec 13 14:25:10.794052 kubelet[1408]: I1213 14:25:10.793989 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-cgroup\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794052 kubelet[1408]: I1213 14:25:10.794040 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cni-path\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794052 kubelet[1408]: I1213 14:25:10.794063 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-hostproc\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794094 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-clustermesh-secrets\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794112 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-lib-modules\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794103 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794136 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-bpf-maps\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794189 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794226 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cni-path" (OuterVolumeSpecName: "cni-path") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794232 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-net\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794242 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-hostproc" (OuterVolumeSpecName: "hostproc") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794269 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-ipsec-secrets\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794298 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-run\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794323 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-etc-cni-netd\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794343 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-xtables-lock\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794377 kubelet[1408]: I1213 14:25:10.794365 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-hubble-tls\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794389 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qhsg\" (UniqueName: 
\"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-kube-api-access-5qhsg\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794408 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-config-path\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794425 1408 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-kernel\") pod \"1b8babe2-42d3-48e4-ba74-480679439fd8\" (UID: \"1b8babe2-42d3-48e4-ba74-480679439fd8\") " Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794485 1408 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-hostproc\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794500 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-cgroup\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794511 1408 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cni-path\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794522 1408 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-bpf-maps\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.794871 kubelet[1408]: I1213 
14:25:10.794558 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794578 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794644 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794699 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794723 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.794871 kubelet[1408]: I1213 14:25:10.794746 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:25:10.799576 kubelet[1408]: I1213 14:25:10.797690 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:25:10.799576 kubelet[1408]: I1213 14:25:10.798678 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-kube-api-access-5qhsg" (OuterVolumeSpecName: "kube-api-access-5qhsg") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "kube-api-access-5qhsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:25:10.798779 systemd[1]: var-lib-kubelet-pods-1b8babe2\x2d42d3\x2d48e4\x2dba74\x2d480679439fd8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:25:10.800450 kubelet[1408]: I1213 14:25:10.800414 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:25:10.800795 kubelet[1408]: I1213 14:25:10.800769 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:25:10.801019 kubelet[1408]: I1213 14:25:10.800985 1408 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1b8babe2-42d3-48e4-ba74-480679439fd8" (UID: "1b8babe2-42d3-48e4-ba74-480679439fd8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:25:10.801247 systemd[1]: var-lib-kubelet-pods-1b8babe2\x2d42d3\x2d48e4\x2dba74\x2d480679439fd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5qhsg.mount: Deactivated successfully. Dec 13 14:25:10.801351 systemd[1]: var-lib-kubelet-pods-1b8babe2\x2d42d3\x2d48e4\x2dba74\x2d480679439fd8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895005 1408 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-etc-cni-netd\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895059 1408 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-xtables-lock\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895083 1408 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-hubble-tls\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895094 1408 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5qhsg\" (UniqueName: \"kubernetes.io/projected/1b8babe2-42d3-48e4-ba74-480679439fd8-kube-api-access-5qhsg\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895108 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-config-path\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895117 1408 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-kernel\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895130 1408 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-clustermesh-secrets\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895137 1408 
reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-lib-modules\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895145 1408 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-host-proc-sys-net\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895153 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-ipsec-secrets\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.895228 kubelet[1408]: I1213 14:25:10.895161 1408 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b8babe2-42d3-48e4-ba74-480679439fd8-cilium-run\") on node \"10.0.0.89\" DevicePath \"\"" Dec 13 14:25:10.935691 kubelet[1408]: I1213 14:25:10.934087 1408 setters.go:600] "Node became not ready" node="10.0.0.89" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:25:10Z","lastTransitionTime":"2024-12-13T14:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:25:11.363416 kubelet[1408]: E1213 14:25:11.363362 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:11.488865 systemd[1]: var-lib-kubelet-pods-1b8babe2\x2d42d3\x2d48e4\x2dba74\x2d480679439fd8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:25:11.490704 systemd[1]: Removed slice kubepods-burstable-pod1b8babe2_42d3_48e4_ba74_480679439fd8.slice. 
Dec 13 14:25:11.659922 kubelet[1408]: I1213 14:25:11.659787 1408 scope.go:117] "RemoveContainer" containerID="54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214" Dec 13 14:25:11.660733 env[1205]: time="2024-12-13T14:25:11.660692747Z" level=info msg="RemoveContainer for \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\"" Dec 13 14:25:11.664567 env[1205]: time="2024-12-13T14:25:11.664516235Z" level=info msg="RemoveContainer for \"54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214\" returns successfully" Dec 13 14:25:11.703306 kubelet[1408]: E1213 14:25:11.703241 1408 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b8babe2-42d3-48e4-ba74-480679439fd8" containerName="mount-cgroup" Dec 13 14:25:11.703306 kubelet[1408]: I1213 14:25:11.703298 1408 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8babe2-42d3-48e4-ba74-480679439fd8" containerName="mount-cgroup" Dec 13 14:25:11.710602 systemd[1]: Created slice kubepods-burstable-pod959cf2fb_0398_48a7_b04c_691e7e1649e3.slice. 
Dec 13 14:25:11.801879 kubelet[1408]: I1213 14:25:11.801828 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/959cf2fb-0398-48a7-b04c-691e7e1649e3-cilium-ipsec-secrets\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.801879 kubelet[1408]: I1213 14:25:11.801876 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-hostproc\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.801896 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-lib-modules\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.801910 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/959cf2fb-0398-48a7-b04c-691e7e1649e3-clustermesh-secrets\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.801926 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/959cf2fb-0398-48a7-b04c-691e7e1649e3-cilium-config-path\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.801958 1408 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-host-proc-sys-net\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802002 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-cni-path\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802042 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-cilium-run\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802068 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-bpf-maps\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802084 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-cilium-cgroup\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw" Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802100 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r2tc\" (UniqueName: 
\"kubernetes.io/projected/959cf2fb-0398-48a7-b04c-691e7e1649e3-kube-api-access-7r2tc\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw"
Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802118 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-etc-cni-netd\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw"
Dec 13 14:25:11.802150 kubelet[1408]: I1213 14:25:11.802137 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-xtables-lock\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw"
Dec 13 14:25:11.802491 kubelet[1408]: I1213 14:25:11.802156 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/959cf2fb-0398-48a7-b04c-691e7e1649e3-host-proc-sys-kernel\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw"
Dec 13 14:25:11.802491 kubelet[1408]: I1213 14:25:11.802181 1408 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/959cf2fb-0398-48a7-b04c-691e7e1649e3-hubble-tls\") pod \"cilium-2d6xw\" (UID: \"959cf2fb-0398-48a7-b04c-691e7e1649e3\") " pod="kube-system/cilium-2d6xw"
Dec 13 14:25:12.022825 kubelet[1408]: E1213 14:25:12.022784 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:12.023463 env[1205]: time="2024-12-13T14:25:12.023379047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2d6xw,Uid:959cf2fb-0398-48a7-b04c-691e7e1649e3,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:12.194637 env[1205]: time="2024-12-13T14:25:12.194501458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:12.194637 env[1205]: time="2024-12-13T14:25:12.194571068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:12.194637 env[1205]: time="2024-12-13T14:25:12.194585596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:12.194890 env[1205]: time="2024-12-13T14:25:12.194823964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af pid=3107 runtime=io.containerd.runc.v2
Dec 13 14:25:12.207503 systemd[1]: Started cri-containerd-2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af.scope.
Dec 13 14:25:12.231451 env[1205]: time="2024-12-13T14:25:12.230396279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2d6xw,Uid:959cf2fb-0398-48a7-b04c-691e7e1649e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\""
Dec 13 14:25:12.231659 kubelet[1408]: E1213 14:25:12.231521 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:12.233976 env[1205]: time="2024-12-13T14:25:12.233926446Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:25:12.363897 kubelet[1408]: E1213 14:25:12.363747 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:12.961415 kubelet[1408]: W1213 14:25:12.961336 1408 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8babe2_42d3_48e4_ba74_480679439fd8.slice/cri-containerd-54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214.scope WatchSource:0}: container "54928eaa93ec3cf0e1c2855bfcd01db28b9d7fd3b382f42ceaa4389cdfa6c214" in namespace "k8s.io": not found
Dec 13 14:25:13.081854 env[1205]: time="2024-12-13T14:25:13.081771077Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5\""
Dec 13 14:25:13.082650 env[1205]: time="2024-12-13T14:25:13.082587861Z" level=info msg="StartContainer for \"7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5\""
Dec 13 14:25:13.100536 systemd[1]: Started cri-containerd-7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5.scope.
Dec 13 14:25:13.193944 systemd[1]: cri-containerd-7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5.scope: Deactivated successfully.
Dec 13 14:25:13.314965 env[1205]: time="2024-12-13T14:25:13.314900589Z" level=info msg="StartContainer for \"7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5\" returns successfully"
Dec 13 14:25:13.364316 kubelet[1408]: E1213 14:25:13.364242 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:13.488096 kubelet[1408]: I1213 14:25:13.488041 1408 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8babe2-42d3-48e4-ba74-480679439fd8" path="/var/lib/kubelet/pods/1b8babe2-42d3-48e4-ba74-480679439fd8/volumes"
Dec 13 14:25:13.489284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5-rootfs.mount: Deactivated successfully.
Dec 13 14:25:13.603551 env[1205]: time="2024-12-13T14:25:13.603401502Z" level=info msg="shim disconnected" id=7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5
Dec 13 14:25:13.603551 env[1205]: time="2024-12-13T14:25:13.603456345Z" level=warning msg="cleaning up after shim disconnected" id=7172f1d32c0ba2b1e5a65b0df159b09e41a633d6b0066a079d23bf255d0a2bb5 namespace=k8s.io
Dec 13 14:25:13.603551 env[1205]: time="2024-12-13T14:25:13.603465262Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:13.611369 env[1205]: time="2024-12-13T14:25:13.611302360Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3193 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:13.722129 kubelet[1408]: E1213 14:25:13.722093 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:13.724084 env[1205]: time="2024-12-13T14:25:13.724042513Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:25:13.951852 env[1205]: time="2024-12-13T14:25:13.951675118Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b\""
Dec 13 14:25:13.952929 env[1205]: time="2024-12-13T14:25:13.952881341Z" level=info msg="StartContainer for \"d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b\""
Dec 13 14:25:13.969643 systemd[1]: Started cri-containerd-d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b.scope.
Dec 13 14:25:14.001908 systemd[1]: cri-containerd-d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b.scope: Deactivated successfully.
Dec 13 14:25:14.276803 env[1205]: time="2024-12-13T14:25:14.276723248Z" level=info msg="StartContainer for \"d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b\" returns successfully"
Dec 13 14:25:14.364983 kubelet[1408]: E1213 14:25:14.364912 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:14.401272 kubelet[1408]: E1213 14:25:14.401212 1408 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:25:14.489942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b-rootfs.mount: Deactivated successfully.
Dec 13 14:25:14.553015 env[1205]: time="2024-12-13T14:25:14.552787194Z" level=info msg="shim disconnected" id=d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b
Dec 13 14:25:14.553015 env[1205]: time="2024-12-13T14:25:14.552853057Z" level=warning msg="cleaning up after shim disconnected" id=d404a5034d765299c2824c34107e85672e36c964484693602d586533ce1d378b namespace=k8s.io
Dec 13 14:25:14.553015 env[1205]: time="2024-12-13T14:25:14.552869278Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:14.560164 env[1205]: time="2024-12-13T14:25:14.560113964Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3254 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:14.725252 kubelet[1408]: E1213 14:25:14.725146 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:14.726970 env[1205]: time="2024-12-13T14:25:14.726927207Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:25:14.799416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508383804.mount: Deactivated successfully.
Dec 13 14:25:14.826937 env[1205]: time="2024-12-13T14:25:14.826757009Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0\""
Dec 13 14:25:14.827340 env[1205]: time="2024-12-13T14:25:14.827304888Z" level=info msg="StartContainer for \"ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0\""
Dec 13 14:25:14.842800 systemd[1]: Started cri-containerd-ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0.scope.
Dec 13 14:25:14.871769 env[1205]: time="2024-12-13T14:25:14.871535928Z" level=info msg="StartContainer for \"ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0\" returns successfully"
Dec 13 14:25:14.875940 systemd[1]: cri-containerd-ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0.scope: Deactivated successfully.
Dec 13 14:25:14.908065 env[1205]: time="2024-12-13T14:25:14.907989120Z" level=info msg="shim disconnected" id=ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0
Dec 13 14:25:14.908065 env[1205]: time="2024-12-13T14:25:14.908040336Z" level=warning msg="cleaning up after shim disconnected" id=ee90b778ea260cd8906b910852d3475093bf8728d0770e0d5f2c1f3fc258c4b0 namespace=k8s.io
Dec 13 14:25:14.908065 env[1205]: time="2024-12-13T14:25:14.908049243Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:14.915135 env[1205]: time="2024-12-13T14:25:14.915073305Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3312 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:15.366166 kubelet[1408]: E1213 14:25:15.366078 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:15.567204 env[1205]: time="2024-12-13T14:25:15.567113837Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:15.569438 env[1205]: time="2024-12-13T14:25:15.569228395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:15.570966 env[1205]: time="2024-12-13T14:25:15.570907787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:15.571776 env[1205]: time="2024-12-13T14:25:15.571732134Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:25:15.574498 env[1205]: time="2024-12-13T14:25:15.574394190Z" level=info msg="CreateContainer within sandbox \"c1eba7e76068df9096d77c515d3404a7ef279926f586b57939f38d3421c0a9ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:25:15.591046 env[1205]: time="2024-12-13T14:25:15.590958946Z" level=info msg="CreateContainer within sandbox \"c1eba7e76068df9096d77c515d3404a7ef279926f586b57939f38d3421c0a9ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c626fb25182b7c13b5bc5073756e5316effd5c87cae50e976fb2312773f719c9\""
Dec 13 14:25:15.591819 env[1205]: time="2024-12-13T14:25:15.591741625Z" level=info msg="StartContainer for \"c626fb25182b7c13b5bc5073756e5316effd5c87cae50e976fb2312773f719c9\""
Dec 13 14:25:15.612287 systemd[1]: Started cri-containerd-c626fb25182b7c13b5bc5073756e5316effd5c87cae50e976fb2312773f719c9.scope.
Dec 13 14:25:15.804968 env[1205]: time="2024-12-13T14:25:15.804888725Z" level=info msg="StartContainer for \"c626fb25182b7c13b5bc5073756e5316effd5c87cae50e976fb2312773f719c9\" returns successfully"
Dec 13 14:25:15.809460 kubelet[1408]: E1213 14:25:15.809414 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:15.811379 env[1205]: time="2024-12-13T14:25:15.811323781Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:25:15.855764 env[1205]: time="2024-12-13T14:25:15.855599921Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888\""
Dec 13 14:25:15.856642 env[1205]: time="2024-12-13T14:25:15.856596311Z" level=info msg="StartContainer for \"e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888\""
Dec 13 14:25:15.877267 systemd[1]: Started cri-containerd-e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888.scope.
Dec 13 14:25:15.934661 systemd[1]: cri-containerd-e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888.scope: Deactivated successfully.
Dec 13 14:25:16.099945 env[1205]: time="2024-12-13T14:25:16.099765535Z" level=info msg="StartContainer for \"e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888\" returns successfully"
Dec 13 14:25:16.264309 env[1205]: time="2024-12-13T14:25:16.264235766Z" level=info msg="shim disconnected" id=e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888
Dec 13 14:25:16.264309 env[1205]: time="2024-12-13T14:25:16.264308343Z" level=warning msg="cleaning up after shim disconnected" id=e815646182d40d1a662b2e5c3b8532cd86279f11454bd77104957e4a08b8d888 namespace=k8s.io
Dec 13 14:25:16.264643 env[1205]: time="2024-12-13T14:25:16.264325154Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:16.272018 env[1205]: time="2024-12-13T14:25:16.271968868Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3403 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:16.366898 kubelet[1408]: E1213 14:25:16.366736 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:16.815684 kubelet[1408]: E1213 14:25:16.815643 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:16.815920 kubelet[1408]: E1213 14:25:16.815745 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:16.817933 env[1205]: time="2024-12-13T14:25:16.817881362Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:25:16.846254 kubelet[1408]: I1213 14:25:16.846180 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ff4zk" podStartSLOduration=2.083225842 podStartE2EDuration="7.846155891s" podCreationTimestamp="2024-12-13 14:25:09 +0000 UTC" firstStartedPulling="2024-12-13 14:25:09.809835153 +0000 UTC m=+71.222168847" lastFinishedPulling="2024-12-13 14:25:15.572765192 +0000 UTC m=+76.985098896" observedRunningTime="2024-12-13 14:25:16.846026999 +0000 UTC m=+78.258360703" watchObservedRunningTime="2024-12-13 14:25:16.846155891 +0000 UTC m=+78.258489585"
Dec 13 14:25:16.849853 env[1205]: time="2024-12-13T14:25:16.849788538Z" level=info msg="CreateContainer within sandbox \"2649158cec22084045719cb1103662a9281ea557f1d8e59c769518e743a905af\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492\""
Dec 13 14:25:16.850379 env[1205]: time="2024-12-13T14:25:16.850341616Z" level=info msg="StartContainer for \"d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492\""
Dec 13 14:25:16.867289 systemd[1]: Started cri-containerd-d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492.scope.
Dec 13 14:25:16.896981 env[1205]: time="2024-12-13T14:25:16.896908190Z" level=info msg="StartContainer for \"d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492\" returns successfully"
Dec 13 14:25:17.367221 kubelet[1408]: E1213 14:25:17.367152 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:17.660659 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:25:17.823092 kubelet[1408]: E1213 14:25:17.823045 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:17.823276 kubelet[1408]: E1213 14:25:17.823176 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:18.022303 kubelet[1408]: I1213 14:25:18.022238 1408 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2d6xw" podStartSLOduration=7.022212523 podStartE2EDuration="7.022212523s" podCreationTimestamp="2024-12-13 14:25:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:25:18.022076978 +0000 UTC m=+79.434410702" watchObservedRunningTime="2024-12-13 14:25:18.022212523 +0000 UTC m=+79.434546217"
Dec 13 14:25:18.367995 kubelet[1408]: E1213 14:25:18.367907 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:18.825646 kubelet[1408]: E1213 14:25:18.825568 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:19.316053 kubelet[1408]: E1213 14:25:19.315981 1408 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:19.369143 kubelet[1408]: E1213 14:25:19.369063 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:19.828053 kubelet[1408]: E1213 14:25:19.828004 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:20.369496 kubelet[1408]: E1213 14:25:20.369425 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:20.581687 systemd-networkd[1028]: lxc_health: Link UP
Dec 13 14:25:20.594713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:25:20.594892 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 14:25:20.681953 systemd[1]: run-containerd-runc-k8s.io-d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492-runc.XFMZsX.mount: Deactivated successfully.
Dec 13 14:25:20.823242 kubelet[1408]: E1213 14:25:20.823193 1408 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58788->127.0.0.1:34241: write tcp 127.0.0.1:58788->127.0.0.1:34241: write: connection reset by peer
Dec 13 14:25:21.370130 kubelet[1408]: E1213 14:25:21.370065 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:22.024580 kubelet[1408]: E1213 14:25:22.024520 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:22.101336 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Dec 13 14:25:22.370518 kubelet[1408]: E1213 14:25:22.370347 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:22.833801 kubelet[1408]: E1213 14:25:22.833765 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:22.941320 systemd[1]: run-containerd-runc-k8s.io-d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492-runc.AGYenk.mount: Deactivated successfully.
Dec 13 14:25:23.371543 kubelet[1408]: E1213 14:25:23.371464 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:23.835767 kubelet[1408]: E1213 14:25:23.835710 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:24.371940 kubelet[1408]: E1213 14:25:24.371867 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:25.063065 systemd[1]: run-containerd-runc-k8s.io-d5aef162a0de4637eb595ff5c8f329170146b182fbda412caf6b3bcb65bb6492-runc.95ICEB.mount: Deactivated successfully.
Dec 13 14:25:25.372500 kubelet[1408]: E1213 14:25:25.372295 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:26.373508 kubelet[1408]: E1213 14:25:26.373315 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:27.374092 kubelet[1408]: E1213 14:25:27.374018 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:27.485267 kubelet[1408]: E1213 14:25:27.485211 1408 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:28.375034 kubelet[1408]: E1213 14:25:28.374883 1408 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"