Dec 13 14:18:02.182431 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:18:02.182462 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:02.182471 kernel: BIOS-provided physical RAM map: Dec 13 14:18:02.182476 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:18:02.182482 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:18:02.182487 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:18:02.182494 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 14:18:02.182500 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 14:18:02.182507 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:18:02.182513 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 14:18:02.182518 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:18:02.182524 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:18:02.182529 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 14:18:02.182535 kernel: NX (Execute Disable) protection: active Dec 13 14:18:02.182543 kernel: SMBIOS 2.8 present. Dec 13 14:18:02.182550 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 14:18:02.182556 kernel: Hypervisor detected: KVM Dec 13 14:18:02.182562 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:18:02.182567 kernel: kvm-clock: cpu 0, msr 1b19a001, primary cpu clock Dec 13 14:18:02.182573 kernel: kvm-clock: using sched offset of 3574614979 cycles Dec 13 14:18:02.182580 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:18:02.182589 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:18:02.182596 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:18:02.182604 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:18:02.182610 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 14:18:02.182616 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:18:02.182622 kernel: Using GB pages for direct mapping Dec 13 14:18:02.182629 kernel: ACPI: Early table checksum verification disabled Dec 13 14:18:02.182635 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 14:18:02.182641 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182647 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182653 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182661 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 14:18:02.182667 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182673 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182679 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182686 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:02.182692 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 14:18:02.182718 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 14:18:02.182727 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 14:18:02.182741 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 14:18:02.182749 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 14:18:02.182755 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 14:18:02.182762 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 14:18:02.182768 kernel: No NUMA configuration found Dec 13 14:18:02.182775 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 14:18:02.182783 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 14:18:02.182790 kernel: Zone ranges: Dec 13 14:18:02.182798 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:18:02.182807 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 14:18:02.182815 kernel: Normal empty Dec 13 14:18:02.182823 kernel: Movable zone start for each node Dec 13 14:18:02.182831 kernel: Early memory node ranges Dec 13 14:18:02.182837 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:18:02.182844 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 14:18:02.182851 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 14:18:02.182863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:18:02.182869 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:18:02.182876 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 14:18:02.182882 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:18:02.182889 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:18:02.182896 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:18:02.182902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:18:02.182909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:18:02.182915 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:18:02.182923 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:18:02.183465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:18:02.183480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:18:02.183491 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:18:02.183498 kernel: TSC deadline timer available Dec 13 14:18:02.183507 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:18:02.183515 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:18:02.183524 kernel: kvm-guest: setup PV sched yield Dec 13 14:18:02.183533 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 14:18:02.183544 kernel: Booting paravirtualized kernel on KVM Dec 13 14:18:02.183551 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:18:02.183557 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:18:02.183564 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 14:18:02.183571 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:18:02.183577 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:18:02.183583 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:18:02.183590 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 14:18:02.183596 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:18:02.183604 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:18:02.183611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 14:18:02.183617 kernel: Policy zone: DMA32 Dec 13 14:18:02.183625 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:02.183632 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:18:02.183639 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:18:02.183646 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:18:02.183652 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:18:02.183660 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved) Dec 13 14:18:02.183667 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:18:02.183674 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:18:02.183680 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:18:02.183687 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:18:02.183694 kernel: rcu: RCU event tracing is enabled. Dec 13 14:18:02.183714 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:18:02.183721 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:18:02.183727 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:18:02.183735 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:18:02.183742 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:18:02.183749 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:18:02.183756 kernel: random: crng init done Dec 13 14:18:02.183762 kernel: Console: colour VGA+ 80x25 Dec 13 14:18:02.183769 kernel: printk: console [ttyS0] enabled Dec 13 14:18:02.183775 kernel: ACPI: Core revision 20210730 Dec 13 14:18:02.183782 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:18:02.183788 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:18:02.183796 kernel: x2apic enabled Dec 13 14:18:02.183803 kernel: Switched APIC routing to physical x2apic. Dec 13 14:18:02.183809 kernel: kvm-guest: setup PV IPIs Dec 13 14:18:02.183816 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:18:02.183823 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:18:02.183834 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 14:18:02.183841 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:18:02.183848 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:18:02.183854 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:18:02.183867 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:18:02.183874 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:18:02.183881 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:18:02.183889 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:18:02.183896 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:18:02.183903 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:18:02.183910 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:18:02.183917 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:18:02.183924 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:18:02.183932 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:18:02.183939 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:18:02.183946 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:18:02.183953 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:18:02.183960 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:18:02.183967 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:18:02.183974 kernel: LSM: Security Framework initializing Dec 13 14:18:02.183982 kernel: SELinux: Initializing. Dec 13 14:18:02.183989 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:02.183996 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:02.184003 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:18:02.184010 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:18:02.184016 kernel: ... version: 0 Dec 13 14:18:02.184023 kernel: ... bit width: 48 Dec 13 14:18:02.184030 kernel: ... generic registers: 6 Dec 13 14:18:02.184037 kernel: ... value mask: 0000ffffffffffff Dec 13 14:18:02.184045 kernel: ... max period: 00007fffffffffff Dec 13 14:18:02.184052 kernel: ... fixed-purpose events: 0 Dec 13 14:18:02.184059 kernel: ... event mask: 000000000000003f Dec 13 14:18:02.184068 kernel: signal: max sigframe size: 1776 Dec 13 14:18:02.184077 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:18:02.184086 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:18:02.184095 kernel: x86: Booting SMP configuration: Dec 13 14:18:02.184104 kernel: .... 
node #0, CPUs: #1 Dec 13 14:18:02.184113 kernel: kvm-clock: cpu 1, msr 1b19a041, secondary cpu clock Dec 13 14:18:02.184120 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:18:02.184129 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 14:18:02.184135 kernel: #2 Dec 13 14:18:02.184143 kernel: kvm-clock: cpu 2, msr 1b19a081, secondary cpu clock Dec 13 14:18:02.184149 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:18:02.184156 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 14:18:02.184163 kernel: #3 Dec 13 14:18:02.184170 kernel: kvm-clock: cpu 3, msr 1b19a0c1, secondary cpu clock Dec 13 14:18:02.184178 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:18:02.184191 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 14:18:02.184203 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:18:02.184211 kernel: smpboot: Max logical packages: 1 Dec 13 14:18:02.184220 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:18:02.184227 kernel: devtmpfs: initialized Dec 13 14:18:02.184234 kernel: x86/mm: Memory block size: 128MB Dec 13 14:18:02.184241 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:18:02.184248 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:18:02.184255 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:18:02.184262 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:18:02.184271 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:18:02.184278 kernel: audit: type=2000 audit(1734099480.789:1): state=initialized audit_enabled=0 res=1 Dec 13 14:18:02.184294 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:18:02.184301 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:18:02.184309 kernel: cpuidle: using governor menu Dec 13 14:18:02.184322 kernel: ACPI: bus type PCI registered Dec 13 14:18:02.184333 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:18:02.184343 kernel: dca service started, version 1.12.1 Dec 13 14:18:02.184351 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:18:02.184363 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:18:02.184372 kernel: PCI: Using configuration type 1 for base access Dec 13 14:18:02.184381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
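The delay-loop figures above are self-consistent: the kernel derives BogoMIPS as loops_per_jiffy * HZ / 500000, so lpj=2794748 with a 1000 Hz tick (an assumption; CONFIG_HZ is not printed in this log) yields the 5589.49 per-CPU value, and four CPUs give the 22357.98 total that smpboot reports. A minimal check, as a sketch:

```python
# Sketch: reproduce the BogoMIPS figures from the calibration lines above.
# Assumption: CONFIG_HZ=1000 (not shown in the log). The kernel prints the
# value with truncation, so 5589.496 appears in the log as 5589.49.
lpj = 2794748                 # "lpj=2794748" from the calibration line
HZ = 1000                     # assumed tick rate
cpus = 4                      # "Allowing 4 CPUs, 0 hotplug CPUs"

per_cpu = lpj * HZ / 500000   # loops/sec * 2 instructions per loop / 1e6
total = per_cpu * cpus

print(f"per-CPU BogoMIPS ~ {per_cpu:.3f}")  # 5589.496 -> logged as 5589.49
print(f"total BogoMIPS   ~ {total:.2f}")    # 22357.98, as smpboot reports
```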
Dec 13 14:18:02.184389 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:18:02.184396 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:18:02.184403 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:18:02.184410 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:18:02.184417 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:18:02.184424 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:18:02.184432 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:18:02.184439 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:18:02.184446 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:18:02.184453 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:18:02.184460 kernel: ACPI: Interpreter enabled Dec 13 14:18:02.184466 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:18:02.184473 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:18:02.184480 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:18:02.184487 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:18:02.184495 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:18:02.184664 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:18:02.184763 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:18:02.184840 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:18:02.184852 kernel: PCI host bridge to bus 0000:00 Dec 13 14:18:02.184962 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:18:02.185068 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:18:02.185157 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:18:02.185227 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 14:18:02.185315 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:02.185383 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 14:18:02.185479 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:18:02.185592 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:18:02.185689 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:18:02.185788 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 14:18:02.185900 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 14:18:02.185989 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 14:18:02.186064 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:18:02.186156 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:18:02.186234 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 14:18:02.186330 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 14:18:02.186406 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 14:18:02.186509 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:18:02.186586 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:18:02.186671 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 14:18:02.186788 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 14:18:02.186890 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:18:02.186971 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 14:18:02.187048 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 14:18:02.187128 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 14:18:02.187203 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 14:18:02.187305 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:18:02.187385 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:18:02.187498 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:18:02.187580 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 14:18:02.187655 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 14:18:02.187793 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:18:02.187886 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 14:18:02.191584 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:18:02.191595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:18:02.191603 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:18:02.191613 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:18:02.191621 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:18:02.191628 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:18:02.191635 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:18:02.191642 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:18:02.191649 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:18:02.191656 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:18:02.191663 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 14:18:02.191670 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:18:02.191678 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:18:02.191685 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:18:02.191692 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:18:02.191712 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 14:18:02.191719 kernel: iommu: Default domain type: Translated Dec 13 14:18:02.191746 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:18:02.191843 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:18:02.191920 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:18:02.191997 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:18:02.192007 kernel: vgaarb: loaded Dec 13 14:18:02.192015 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:18:02.192022 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:18:02.192030 kernel: PTP clock support registered Dec 13 14:18:02.192038 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:18:02.192047 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:18:02.192056 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:18:02.192066 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 14:18:02.192078 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:18:02.192088 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:18:02.192096 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:18:02.192103 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:18:02.192110 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:18:02.192117 kernel: pnp: PnP ACPI init Dec 13 14:18:02.192229 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:18:02.192241 kernel: pnp: PnP ACPI: found 6 devices Dec 13 14:18:02.192249 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:18:02.192259 kernel: NET: Registered PF_INET protocol family Dec 13 14:18:02.192266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:18:02.192274 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:18:02.192281 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:18:02.192299 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:18:02.192307 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:18:02.192315 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:18:02.192322 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:02.192331 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:02.192338 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:18:02.192345 kernel: NET: Registered PF_XDP protocol family Dec 13 14:18:02.192417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:18:02.192484 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:18:02.192550 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:18:02.192617 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:18:02.192683 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:02.192789 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 14:18:02.192809 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:18:02.192818 kernel: Initialise system trusted keyrings Dec 13 14:18:02.192827 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:18:02.192837 kernel: Key type asymmetric registered Dec 13 14:18:02.192845 kernel: Asymmetric key parser 'x509' registered Dec 13 14:18:02.192854 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:18:02.192863 kernel: io scheduler mq-deadline registered Dec 13 14:18:02.192870 kernel: io scheduler kyber registered Dec 13 14:18:02.192877 kernel: io scheduler bfq registered Dec 13 14:18:02.192886 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:18:02.192894 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:18:02.192902 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
14:18:02.192909 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:18:02.192916 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:18:02.192923 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:18:02.192930 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:18:02.192938 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:18:02.192946 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:18:02.193074 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:18:02.193093 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:18:02.193186 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:18:02.193258 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:18:01 UTC (1734099481) Dec 13 14:18:02.193343 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:18:02.193353 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:18:02.193360 kernel: Segment Routing with IPv6 Dec 13 14:18:02.193367 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:18:02.193378 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:18:02.193385 kernel: Key type dns_resolver registered Dec 13 14:18:02.193392 kernel: IPI shorthand broadcast: enabled Dec 13 14:18:02.193399 kernel: sched_clock: Marking stable (505075799, 110771343)->(641353778, -25506636) Dec 13 14:18:02.193406 kernel: registered taskstats version 1 Dec 13 14:18:02.193413 kernel: Loading compiled-in X.509 certificates Dec 13 14:18:02.193420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:18:02.193428 kernel: Key type .fscrypt registered Dec 13 14:18:02.193434 kernel: Key type fscrypt-provisioning registered Dec 13 14:18:02.193443 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:18:02.193450 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:18:02.193458 kernel: ima: No architecture policies found Dec 13 14:18:02.193464 kernel: clk: Disabling unused clocks Dec 13 14:18:02.193472 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:18:02.193479 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:18:02.193486 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:18:02.193493 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:18:02.193502 kernel: Run /init as init process Dec 13 14:18:02.193509 kernel: with arguments: Dec 13 14:18:02.193516 kernel: /init Dec 13 14:18:02.193523 kernel: with environment: Dec 13 14:18:02.193530 kernel: HOME=/ Dec 13 14:18:02.193537 kernel: TERM=linux Dec 13 14:18:02.193544 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:18:02.193554 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:02.193565 systemd[1]: Detected virtualization kvm. Dec 13 14:18:02.193572 systemd[1]: Detected architecture x86-64. Dec 13 14:18:02.193580 systemd[1]: Running in initrd. Dec 13 14:18:02.193587 systemd[1]: No hostname configured, using default hostname. Dec 13 14:18:02.193594 systemd[1]: Hostname set to . 
Dec 13 14:18:02.193602 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:02.193610 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:18:02.193617 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:02.193625 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:02.193634 systemd[1]: Reached target paths.target. Dec 13 14:18:02.193649 systemd[1]: Reached target slices.target. Dec 13 14:18:02.193658 systemd[1]: Reached target swap.target. Dec 13 14:18:02.193666 systemd[1]: Reached target timers.target. Dec 13 14:18:02.193674 systemd[1]: Listening on iscsid.socket. Dec 13 14:18:02.193683 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:18:02.193691 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:18:02.193714 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:18:02.193739 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:18:02.193746 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:18:02.193754 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:02.193764 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:02.193772 systemd[1]: Reached target sockets.target. Dec 13 14:18:02.193780 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:18:02.193789 systemd[1]: Finished network-cleanup.service. Dec 13 14:18:02.193797 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:18:02.193805 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:02.193813 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:02.193821 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:02.193829 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:18:02.193837 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:02.193845 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:18:02.193853 kernel: audit: type=1130 audit(1734099482.181:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.193862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:02.193870 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:02.193882 systemd-journald[198]: Journal started Dec 13 14:18:02.193927 systemd-journald[198]: Runtime Journal (/run/log/journal/6fc9281581ff4d84bf00662413da7081) is 6.0M, max 48.5M, 42.5M free. Dec 13 14:18:02.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.195095 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 14:18:02.228577 systemd[1]: Started systemd-journald.service. Dec 13 14:18:02.228605 kernel: audit: type=1130 audit(1734099482.222:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:02.212237 systemd-resolved[200]: Positive Trust Anchors: Dec 13 14:18:02.233186 kernel: audit: type=1130 audit(1734099482.228:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.212246 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:02.238529 kernel: audit: type=1130 audit(1734099482.233:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.212273 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:02.242719 kernel: audit: type=1130 audit(1734099482.238:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.214828 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 14:18:02.229872 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:02.234254 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:18:02.239711 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:02.243883 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:18:02.264507 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:18:02.269209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:18:02.269226 kernel: audit: type=1130 audit(1734099482.263:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.265432 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:18:02.273845 dracut-cmdline[215]: dracut-dracut-053 Dec 13 14:18:02.275535 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:02.283428 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 14:18:02.284818 kernel: Bridge firewalling registered Dec 13 14:18:02.307737 kernel: SCSI subsystem initialized Dec 13 14:18:02.322378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:18:02.322414 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:18:02.323880 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:18:02.324731 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:18:02.327895 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 14:18:02.329890 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:02.331832 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:18:02.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.337902 kernel: audit: type=1130 audit(1734099482.329:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.344533 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:02.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.349738 kernel: audit: type=1130 audit(1734099482.343:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.350727 kernel: iscsi: registered transport (tcp) Dec 13 14:18:02.376842 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:18:02.376930 kernel: QLogic iSCSI HBA Driver Dec 13 14:18:02.410115 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:18:02.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.411539 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:18:02.416224 kernel: audit: type=1130 audit(1734099482.409:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:02.467752 kernel: raid6: avx2x4 gen() 28196 MB/s Dec 13 14:18:02.484758 kernel: raid6: avx2x4 xor() 6632 MB/s Dec 13 14:18:02.501755 kernel: raid6: avx2x2 gen() 31545 MB/s Dec 13 14:18:02.518753 kernel: raid6: avx2x2 xor() 18999 MB/s Dec 13 14:18:02.535753 kernel: raid6: avx2x1 gen() 19595 MB/s Dec 13 14:18:02.552749 kernel: raid6: avx2x1 xor() 12244 MB/s Dec 13 14:18:02.569763 kernel: raid6: sse2x4 gen() 12773 MB/s Dec 13 14:18:02.586750 kernel: raid6: sse2x4 xor() 6294 MB/s Dec 13 14:18:02.603761 kernel: raid6: sse2x2 gen() 14228 MB/s Dec 13 14:18:02.620753 kernel: raid6: sse2x2 xor() 8370 MB/s Dec 13 14:18:02.637751 kernel: raid6: sse2x1 gen() 9203 MB/s Dec 13 14:18:02.655314 kernel: raid6: sse2x1 xor() 6492 MB/s Dec 13 14:18:02.655385 kernel: raid6: using algorithm avx2x2 gen() 31545 MB/s Dec 13 14:18:02.655394 kernel: raid6: .... xor() 18999 MB/s, rmw enabled Dec 13 14:18:02.656984 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:18:02.669735 kernel: xor: automatically using best checksumming function avx Dec 13 14:18:02.762734 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:18:02.772070 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:18:02.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.773000 audit: BPF prog-id=7 op=LOAD Dec 13 14:18:02.773000 audit: BPF prog-id=8 op=LOAD Dec 13 14:18:02.774288 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:02.786685 systemd-udevd[399]: Using default interface naming scheme 'v252'. Dec 13 14:18:02.791001 systemd[1]: Started systemd-udevd.service. Dec 13 14:18:02.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.793658 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:18:02.805677 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 14:18:02.833113 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:18:02.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.835084 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:02.870977 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:02.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:02.903749 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:18:02.909358 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:18:02.909372 kernel: GPT:9289727 != 19775487 Dec 13 14:18:02.909380 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:18:02.909389 kernel: GPT:9289727 != 19775487 Dec 13 14:18:02.909397 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:18:02.909406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:02.909420 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:18:02.919729 kernel: libata version 3.00 loaded. Dec 13 14:18:02.928830 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 14:18:02.928865 kernel: AES CTR mode by8 optimization enabled Dec 13 14:18:02.928878 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:18:03.154081 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:18:03.154114 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:18:03.154276 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:18:03.154416 kernel: scsi host0: ahci Dec 13 14:18:03.154584 kernel: scsi host1: ahci Dec 13 14:18:03.154771 kernel: scsi host2: ahci Dec 13 14:18:03.154924 kernel: scsi host3: ahci Dec 13 14:18:03.155071 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Dec 13 14:18:03.155086 kernel: scsi host4: ahci Dec 13 14:18:03.155219 kernel: scsi host5: ahci Dec 13 14:18:03.155350 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 14:18:03.155364 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 14:18:03.155381 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 14:18:03.155393 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 14:18:03.155408 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 14:18:03.155420 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 14:18:02.942963 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:18:03.146292 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:18:03.211229 disk-uuid[485]: Primary Header is updated. Dec 13 14:18:03.211229 disk-uuid[485]: Secondary Entries is updated. Dec 13 14:18:03.211229 disk-uuid[485]: Secondary Header is updated. Dec 13 14:18:03.148999 systemd[1]: Starting disk-uuid.service... Dec 13 14:18:03.153626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:18:03.227307 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:03.238553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:18:03.468893 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:03.469021 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:03.469036 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:03.469048 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:18:03.470735 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:03.471741 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:03.472733 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:18:03.475590 kernel: ata3.00: applying bridge limits Dec 13 14:18:03.475616 kernel: ata3.00: configured for UDMA/100 Dec 13 14:18:03.477726 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:18:03.535866 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:18:03.553560 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:18:03.553577 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:18:04.174735 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:04.174857 disk-uuid[524]: The operation has completed successfully. Dec 13 14:18:04.283198 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:18:04.283298 systemd[1]: Finished disk-uuid.service. 
Dec 13 14:18:04.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.286424 systemd[1]: Starting verity-setup.service... Dec 13 14:18:04.299725 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:18:04.319621 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:18:04.320987 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:18:04.323089 systemd[1]: Finished verity-setup.service. Dec 13 14:18:04.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.387504 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:18:04.389042 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:18:04.388440 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:18:04.389290 systemd[1]: Starting ignition-setup.service... Dec 13 14:18:04.391887 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:18:04.401868 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:04.401914 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:04.401929 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:04.411601 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:18:04.420744 systemd[1]: Finished ignition-setup.service. Dec 13 14:18:04.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.421623 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:18:04.464588 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:18:04.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.466000 audit: BPF prog-id=9 op=LOAD Dec 13 14:18:04.467554 systemd[1]: Starting systemd-networkd.service... Dec 13 14:18:04.491452 systemd-networkd[719]: lo: Link UP Dec 13 14:18:04.491462 systemd-networkd[719]: lo: Gained carrier Dec 13 14:18:04.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.492139 systemd-networkd[719]: Enumeration completed Dec 13 14:18:04.492396 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:04.492852 systemd[1]: Started systemd-networkd.service. Dec 13 14:18:04.493992 systemd-networkd[719]: eth0: Link UP Dec 13 14:18:04.493997 systemd-networkd[719]: eth0: Gained carrier Dec 13 14:18:04.494189 systemd[1]: Reached target network.target. Dec 13 14:18:04.497542 systemd[1]: Starting iscsiuio.service... 
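The verity-setup.service and dev-mapper-usr lines above correspond to the verity.usr=PARTUUID=... and verity.usrhash=... parameters on the kernel command line: /usr is mounted read-only through a dm-verity device whose root hash comes from that command line. As a rough, generic illustration of the same mechanism (not the exact Flatcar initrd code path; the device paths and the separate hash device below are hypothetical):

```python
# Sketch: generic dm-verity activation with cryptsetup's veritysetup.
# Hypothetical paths; Flatcar's initrd wires this up itself from the
# verity.usr*/mount.usr* kernel parameters seen earlier in this log.
import subprocess

data_dev = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"  # USR partition from the cmdline
hash_dev = "/dev/vdb"   # hypothetical device holding the hash tree
root_hash = "8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e"  # verity.usrhash

# veritysetup open <data_device> <name> <hash_device> <root_hash>
subprocess.run(["veritysetup", "open", data_dev, "usr", hash_dev, root_hash], check=True)
# The resulting /dev/mapper/usr is what gets mounted read-only on /usr.
subprocess.run(["mount", "-o", "ro", "/dev/mapper/usr", "/usr"], check=True)
```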
Dec 13 14:18:04.579640 ignition[650]: Ignition 2.14.0 Dec 13 14:18:04.579652 ignition[650]: Stage: fetch-offline Dec 13 14:18:04.579763 ignition[650]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:04.579773 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:04.579974 ignition[650]: parsed url from cmdline: "" Dec 13 14:18:04.579977 ignition[650]: no config URL provided Dec 13 14:18:04.579982 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:18:04.579989 ignition[650]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:18:04.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.590015 systemd[1]: Started iscsiuio.service. Dec 13 14:18:04.580014 ignition[650]: op(1): [started] loading QEMU firmware config module Dec 13 14:18:04.595843 systemd[1]: Starting iscsid.service... Dec 13 14:18:04.580021 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:18:04.586593 ignition[650]: op(1): [finished] loading QEMU firmware config module Dec 13 14:18:04.601574 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:04.601574 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:18:04.601574 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:18:04.601574 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:18:04.601574 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:04.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.612609 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:18:04.604005 systemd[1]: Started iscsid.service. Dec 13 14:18:04.610173 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:18:04.621897 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:18:04.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.622053 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:18:04.633291 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:04.634839 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:04.637027 systemd[1]: Starting dracut-pre-mount.service... 
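The fetch-offline stage above finds no config URL on the command line and no /usr/lib/ignition/user.ign, so it loads qemu_fw_cfg and pulls the user config from QEMU's firmware config device (the later 'fetched user config from "qemu"' line). A hedged sketch of how such a config is typically injected when launching the guest; the fw_cfg key name, file names, and sizing below are assumptions rather than values taken from this log:

```python
# Sketch: boot a Flatcar QEMU guest with an Ignition config passed via fw_cfg,
# the channel the fetch-offline stage above probes (it modprobes qemu_fw_cfg).
# Assumptions: the "opt/org.flatcar-linux/config" key name and the file paths
# are illustrative, not taken from this log.
import subprocess

ignition_config = "ignition.json"                      # hypothetical config file
disk_image = "flatcar_production_qemu_image.img"       # hypothetical disk image

subprocess.run([
    "qemu-system-x86_64",
    "-machine", "q35,accel=kvm",        # matches the Q35 + KVM machine in this log
    "-smp", "4",                        # four vCPUs, as brought up earlier
    "-m", "2560",                       # roughly the ~2.5 GiB seen in the e820 map
    "-drive", f"if=virtio,file={disk_image}",
    "-fw_cfg", f"name=opt/org.flatcar-linux/config,file={ignition_config}",
    "-nographic",
], check=True)
```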
Dec 13 14:18:04.640593 ignition[650]: parsing config with SHA512: 95a0a257fbabad1b3f83607c68d020094756e4659c977e0fd8e7df34aa9df7882340787ee1db5f822212a299cd5e8e24199e5f6d654bddc0ae44b6c8630d5b2d Dec 13 14:18:04.645862 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:04.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.645985 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:18:04.653938 unknown[650]: fetched base config from "system" Dec 13 14:18:04.653966 unknown[650]: fetched user config from "qemu" Dec 13 14:18:04.654611 ignition[650]: fetch-offline: fetch-offline passed Dec 13 14:18:04.656092 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:18:04.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.654800 ignition[650]: Ignition finished successfully Dec 13 14:18:04.657947 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:18:04.659012 systemd[1]: Starting ignition-kargs.service... Dec 13 14:18:04.678426 ignition[740]: Ignition 2.14.0 Dec 13 14:18:04.678449 ignition[740]: Stage: kargs Dec 13 14:18:04.678593 ignition[740]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:04.678608 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:04.683272 ignition[740]: kargs: kargs passed Dec 13 14:18:04.683329 ignition[740]: Ignition finished successfully Dec 13 14:18:04.685736 systemd[1]: Finished ignition-kargs.service. Dec 13 14:18:04.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.687633 systemd[1]: Starting ignition-disks.service... Dec 13 14:18:04.698459 ignition[746]: Ignition 2.14.0 Dec 13 14:18:04.698486 ignition[746]: Stage: disks Dec 13 14:18:04.698633 ignition[746]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:04.698652 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:04.703117 ignition[746]: disks: disks passed Dec 13 14:18:04.703166 ignition[746]: Ignition finished successfully Dec 13 14:18:04.705301 systemd[1]: Finished ignition-disks.service. Dec 13 14:18:04.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.705552 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:18:04.707781 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:04.707838 systemd[1]: Reached target local-fs.target. Dec 13 14:18:04.708025 systemd[1]: Reached target sysinit.target. Dec 13 14:18:04.708179 systemd[1]: Reached target basic.target. Dec 13 14:18:04.709396 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:18:04.722615 systemd-fsck[754]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:18:04.729562 systemd[1]: Finished systemd-fsck-root.service. 
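Ignition logs the SHA-512 of the config it is about to parse ("parsing config with SHA512: 95a0a257..."), which is a convenient way to confirm which config a boot actually consumed. A small sketch; whether the digest covers the raw user config bytes or a merged/rendered form is an assumption here, so treat a mismatch as inconclusive:

```python
# Sketch: compare a local Ignition config against the SHA-512 logged above.
# Assumption: the logged digest is computed over the raw config bytes as fetched.
import hashlib

LOGGED = ("95a0a257fbabad1b3f83607c68d020094756e4659c977e0fd8e7df34aa9df788"
          "2340787ee1db5f822212a299cd5e8e24199e5f6d654bddc0ae44b6c8630d5b2d")

with open("ignition.json", "rb") as f:      # hypothetical local copy of the config
    digest = hashlib.sha512(f.read()).hexdigest()

print("match" if digest == LOGGED else "mismatch", digest)
```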
Dec 13 14:18:04.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.733200 systemd[1]: Mounting sysroot.mount... Dec 13 14:18:04.743759 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:18:04.744806 systemd[1]: Mounted sysroot.mount. Dec 13 14:18:04.746651 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:18:04.749035 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:18:04.749495 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:18:04.749551 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:18:04.749590 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:18:04.753333 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:18:04.756383 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:18:04.761379 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:18:04.766763 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:18:04.770010 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:18:04.773055 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:18:04.803015 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:18:04.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.804835 systemd[1]: Starting ignition-mount.service... Dec 13 14:18:04.806375 systemd[1]: Starting sysroot-boot.service... Dec 13 14:18:04.810882 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:18:04.825432 systemd[1]: Finished sysroot-boot.service. Dec 13 14:18:04.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:04.828153 ignition[806]: INFO : Ignition 2.14.0 Dec 13 14:18:04.828153 ignition[806]: INFO : Stage: mount Dec 13 14:18:04.828153 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:04.828153 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:04.828153 ignition[806]: INFO : mount: mount passed Dec 13 14:18:04.828153 ignition[806]: INFO : Ignition finished successfully Dec 13 14:18:04.827292 systemd[1]: Finished ignition-mount.service. Dec 13 14:18:05.331308 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:18:05.340858 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Dec 13 14:18:05.340914 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:05.340935 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:05.342468 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:05.346030 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 14:18:05.347977 systemd[1]: Starting ignition-files.service... Dec 13 14:18:05.368985 ignition[835]: INFO : Ignition 2.14.0 Dec 13 14:18:05.370149 ignition[835]: INFO : Stage: files Dec 13 14:18:05.370907 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:05.370907 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:05.373370 ignition[835]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:18:05.373370 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:18:05.373370 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:18:05.377748 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:18:05.377748 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:18:05.377748 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:18:05.377748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:05.377748 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:18:05.374770 unknown[835]: wrote ssh authorized keys file for user: core Dec 13 14:18:05.420098 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:18:05.557340 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:18:05.559546 ignition[835]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:18:05.559546 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 14:18:05.903678 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 14:18:05.942946 systemd-networkd[719]: eth0: Gained IPv6LL Dec 13 14:18:06.669688 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 14:18:06.669688 ignition[835]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:06.674998 ignition[835]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:06.722976 ignition[835]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:06.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:06.726418 ignition[835]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:06.726418 ignition[835]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:06.726418 ignition[835]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:06.726418 ignition[835]: INFO : files: files passed Dec 13 14:18:06.726418 ignition[835]: INFO : Ignition finished successfully Dec 13 14:18:06.746132 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 13 14:18:06.746160 kernel: audit: type=1130 audit(1734099486.725:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.746172 kernel: audit: type=1130 audit(1734099486.738:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.746209 kernel: audit: type=1131 audit(1734099486.738:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.724714 systemd[1]: Finished ignition-files.service. Dec 13 14:18:06.727076 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:18:06.732570 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:18:06.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.752738 initrd-setup-root-after-ignition[860]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:18:06.759693 kernel: audit: type=1130 audit(1734099486.751:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.733451 systemd[1]: Starting ignition-quench.service... Dec 13 14:18:06.761055 initrd-setup-root-after-ignition[862]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:18:06.737468 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:18:06.737556 systemd[1]: Finished ignition-quench.service. Dec 13 14:18:06.749753 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:18:06.752623 systemd[1]: Reached target ignition-complete.target. Dec 13 14:18:06.757656 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:18:06.769589 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:18:06.769684 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 14:18:06.780841 kernel: audit: type=1130 audit(1734099486.771:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.780870 kernel: audit: type=1131 audit(1734099486.771:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.771985 systemd[1]: Reached target initrd-fs.target. Dec 13 14:18:06.777781 systemd[1]: Reached target initrd.target. Dec 13 14:18:06.777993 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:18:06.778899 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:18:06.788775 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:18:06.794280 kernel: audit: type=1130 audit(1734099486.788:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.789644 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:18:06.801551 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:18:06.802571 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:18:06.804640 systemd[1]: Stopped target timers.target. Dec 13 14:18:06.806253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:18:06.812794 kernel: audit: type=1131 audit(1734099486.807:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.806376 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:18:06.835890 kernel: audit: type=1131 audit(1734099486.815:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.835923 kernel: audit: type=1131 audit(1734099486.819:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:06.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.808205 systemd[1]: Stopped target initrd.target. Dec 13 14:18:06.812900 systemd[1]: Stopped target basic.target. Dec 13 14:18:06.813050 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:18:06.813431 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:18:06.813903 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:18:06.841053 ignition[875]: INFO : Ignition 2.14.0 Dec 13 14:18:06.841053 ignition[875]: INFO : Stage: umount Dec 13 14:18:06.841053 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:06.841053 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:06.841053 ignition[875]: INFO : umount: umount passed Dec 13 14:18:06.814403 systemd[1]: Stopped target remote-fs.target. Dec 13 14:18:06.847191 ignition[875]: INFO : Ignition finished successfully Dec 13 14:18:06.814537 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:18:06.814713 systemd[1]: Stopped target sysinit.target. Dec 13 14:18:06.815116 systemd[1]: Stopped target local-fs.target. Dec 13 14:18:06.815470 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:18:06.816029 systemd[1]: Stopped target swap.target. Dec 13 14:18:06.816205 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:18:06.816296 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:18:06.816640 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:18:06.820118 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:18:06.820229 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:18:06.820508 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:18:06.820596 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:18:06.824220 systemd[1]: Stopped target paths.target. Dec 13 14:18:06.824306 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:18:06.827757 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:18:06.828207 systemd[1]: Stopped target slices.target. Dec 13 14:18:06.828588 systemd[1]: Stopped target sockets.target. Dec 13 14:18:06.829279 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:18:06.829490 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:18:06.830139 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:18:06.830258 systemd[1]: Stopped ignition-files.service. Dec 13 14:18:06.885301 systemd[1]: Stopping ignition-mount.service... Dec 13 14:18:06.886381 systemd[1]: Stopping iscsid.service... 
Dec 13 14:18:06.888441 iscsid[726]: iscsid shutting down. Dec 13 14:18:06.888892 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:18:06.890305 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:18:06.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.890524 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:18:06.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.892425 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:18:06.892558 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:18:06.896768 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:18:06.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.896878 systemd[1]: Stopped iscsid.service. Dec 13 14:18:06.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.899351 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:18:06.899444 systemd[1]: Stopped ignition-mount.service. Dec 13 14:18:06.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.902362 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:18:06.902457 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:18:06.904499 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:18:06.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.904538 systemd[1]: Closed iscsid.socket. Dec 13 14:18:06.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.906648 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:18:06.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.906690 systemd[1]: Stopped ignition-disks.service. Dec 13 14:18:06.908278 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:18:06.908314 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:18:06.910023 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:18:06.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:06.910070 systemd[1]: Stopped ignition-setup.service. Dec 13 14:18:06.911624 systemd[1]: Stopping iscsiuio.service... Dec 13 14:18:06.914370 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:18:06.915400 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:18:06.915483 systemd[1]: Stopped iscsiuio.service. Dec 13 14:18:06.916609 systemd[1]: Stopped target network.target. Dec 13 14:18:06.918261 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:18:06.918291 systemd[1]: Closed iscsiuio.socket. Dec 13 14:18:06.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.918522 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:18:06.919047 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:18:06.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.924755 systemd-networkd[719]: eth0: DHCPv6 lease lost Dec 13 14:18:06.936000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:18:06.926304 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:18:06.926403 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:18:06.938000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:18:06.932872 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:18:06.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.932972 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:18:06.934634 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:18:06.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.934659 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:18:06.937437 systemd[1]: Stopping network-cleanup.service... Dec 13 14:18:06.938846 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:18:06.938910 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:18:06.940604 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:18:06.940658 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:18:06.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.942599 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:18:06.942648 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:18:06.944419 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:18:06.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:06.948411 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:18:06.951471 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:18:06.951562 systemd[1]: Stopped network-cleanup.service. Dec 13 14:18:06.955174 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:18:06.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.955327 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:18:06.957877 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:18:06.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.957922 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:18:06.959955 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:18:06.959994 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:18:06.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.961869 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:18:06.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.961924 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:18:06.963600 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:18:06.963717 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:18:06.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:06.965429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:18:06.965471 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:18:06.968279 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:18:06.969945 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:18:06.969997 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:18:06.972732 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:18:06.972778 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:18:06.973668 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 14:18:06.973770 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:18:06.976603 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:18:06.977158 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:18:06.977261 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:18:07.007999 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:18:07.008110 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:18:07.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.009952 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:18:07.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.011487 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:18:07.011529 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:18:07.012550 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:18:07.028793 systemd[1]: Switching root. Dec 13 14:18:07.058129 systemd-journald[198]: Journal stopped Dec 13 14:18:11.110792 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 14:18:11.110868 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:18:11.110894 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:18:11.110909 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:18:11.110924 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:18:11.110939 kernel: SELinux: policy capability open_perms=1 Dec 13 14:18:11.110958 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:18:11.110971 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:18:11.110985 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:18:11.110999 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:18:11.111021 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:18:11.111041 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:18:11.111061 systemd[1]: Successfully loaded SELinux policy in 44.547ms. Dec 13 14:18:11.111085 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.056ms. Dec 13 14:18:11.111113 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:11.111130 systemd[1]: Detected virtualization kvm. Dec 13 14:18:11.111146 systemd[1]: Detected architecture x86-64. Dec 13 14:18:11.111169 systemd[1]: Detected first boot. Dec 13 14:18:11.111184 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:11.111199 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:18:11.111214 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:18:11.111230 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:18:11.111254 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:11.111284 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:11.111303 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:18:11.111318 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:18:11.111334 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:18:11.111350 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:18:11.111365 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:18:11.111380 systemd[1]: Created slice system-getty.slice. Dec 13 14:18:11.111395 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:18:11.111419 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:18:11.111436 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:18:11.111452 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:18:11.111467 systemd[1]: Created slice user.slice. Dec 13 14:18:11.111482 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:11.111497 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:18:11.111512 systemd[1]: Set up automount boot.automount. Dec 13 14:18:11.111528 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:18:11.111543 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:18:11.111570 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:18:11.111586 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:18:11.111602 systemd[1]: Reached target integritysetup.target. Dec 13 14:18:11.111617 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:11.111640 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:11.111656 systemd[1]: Reached target slices.target. Dec 13 14:18:11.111671 systemd[1]: Reached target swap.target. Dec 13 14:18:11.111686 systemd[1]: Reached target torcx.target. Dec 13 14:18:11.111742 systemd[1]: Reached target veritysetup.target. Dec 13 14:18:11.111761 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:18:11.111778 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:18:11.111793 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:18:11.111808 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:11.111823 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:11.111838 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:18:11.111854 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:18:11.111870 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:18:11.111885 systemd[1]: Mounting media.mount... Dec 13 14:18:11.111910 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:11.111927 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:18:11.111942 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:18:11.111958 systemd[1]: Mounting tmp.mount... Dec 13 14:18:11.111973 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:18:11.111989 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:11.112004 systemd[1]: Starting kmod-static-nodes.service... 
Dec 13 14:18:11.112020 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:18:11.112044 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:11.113029 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:11.113048 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:11.113063 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:18:11.113079 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:11.113112 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:18:11.113128 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:18:11.113143 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:18:11.113159 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:18:11.113183 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:18:11.113200 systemd[1]: Stopped systemd-journald.service. Dec 13 14:18:11.113214 kernel: loop: module loaded Dec 13 14:18:11.113228 kernel: fuse: init (API version 7.34) Dec 13 14:18:11.113243 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:11.113259 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:11.113274 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:18:11.113290 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:18:11.113305 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:11.113320 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:18:11.113343 systemd[1]: Stopped verity-setup.service. Dec 13 14:18:11.113359 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:11.113375 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:18:11.113397 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:18:11.113416 systemd-journald[986]: Journal started Dec 13 14:18:11.113468 systemd-journald[986]: Runtime Journal (/run/log/journal/6fc9281581ff4d84bf00662413da7081) is 6.0M, max 48.5M, 42.5M free. 
Dec 13 14:18:07.115000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:18:07.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:18:07.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:18:07.357000 audit: BPF prog-id=10 op=LOAD Dec 13 14:18:07.357000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:18:07.357000 audit: BPF prog-id=11 op=LOAD Dec 13 14:18:07.357000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:18:07.395000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:18:07.395000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018f8e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:07.395000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:18:07.397000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:18:07.397000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018f9b9 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:07.397000 audit: CWD cwd="/" Dec 13 14:18:07.397000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:07.397000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:07.397000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:18:10.964000 audit: BPF prog-id=12 op=LOAD Dec 13 14:18:10.964000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:18:10.964000 audit: BPF prog-id=13 op=LOAD Dec 13 14:18:10.964000 audit: BPF prog-id=14 op=LOAD Dec 13 14:18:10.964000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:18:10.964000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:18:10.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:10.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:10.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:10.976000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:18:11.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.083000 audit: BPF prog-id=15 op=LOAD Dec 13 14:18:11.116626 systemd[1]: Started systemd-journald.service. Dec 13 14:18:11.083000 audit: BPF prog-id=16 op=LOAD Dec 13 14:18:11.083000 audit: BPF prog-id=17 op=LOAD Dec 13 14:18:11.083000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:18:11.083000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:18:11.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.106000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:18:11.106000 audit[986]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd6c910dd0 a2=4000 a3=7ffd6c910e6c items=0 ppid=1 pid=986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:11.106000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:18:11.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.394497 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:10.962990 systemd[1]: Queued start job for default target multi-user.target. 
Dec 13 14:18:07.394942 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:18:10.963006 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:18:07.394982 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:18:11.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:10.966082 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:18:07.395017 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:18:11.116787 systemd[1]: Mounted media.mount. Dec 13 14:18:07.395027 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:18:11.117730 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:18:11.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.395062 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:18:11.118820 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:18:07.395076 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:18:11.119925 systemd[1]: Mounted tmp.mount. Dec 13 14:18:07.395334 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:18:11.121124 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:18:11.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.395372 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:18:11.122665 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:07.395399 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:18:11.124155 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 14:18:07.395770 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:18:11.124363 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:18:07.395801 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:18:11.125962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:11.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:07.395818 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:18:11.126202 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:07.395830 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:18:07.395848 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:18:11.127691 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:07.395876 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:07Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:18:11.127986 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:11.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:10.633423 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:10.633738 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:10.633844 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:11.129523 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:10.634041 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:10.634099 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:18:10.634176 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:18:10Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:18:11.129810 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:11.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.131470 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:18:11.131827 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:18:11.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.133194 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:11.133440 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:18:11.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.135190 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:11.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.136823 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:18:11.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.138487 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:18:11.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.140297 systemd[1]: Reached target network-pre.target. Dec 13 14:18:11.143253 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:18:11.145957 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:18:11.147067 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:18:11.205645 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:18:11.208150 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:18:11.209279 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:11.210688 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:18:11.211883 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:11.212888 systemd-journald[986]: Time spent on flushing to /var/log/journal/6fc9281581ff4d84bf00662413da7081 is 22.374ms for 1092 entries. Dec 13 14:18:11.212888 systemd-journald[986]: System Journal (/var/log/journal/6fc9281581ff4d84bf00662413da7081) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:18:11.256921 systemd-journald[986]: Received client request to flush runtime journal. Dec 13 14:18:11.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:11.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.213219 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:18:11.216443 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:18:11.220279 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:11.221743 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:18:11.257684 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:18:11.223005 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:18:11.224068 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:18:11.225327 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:18:11.227108 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:18:11.233822 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:18:11.235645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:11.238351 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:11.252830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:11.257634 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:18:11.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.978781 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:18:11.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.980784 kernel: kauditd_printk_skb: 95 callbacks suppressed Dec 13 14:18:11.980876 kernel: audit: type=1130 audit(1734099491.979:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:11.980000 audit: BPF prog-id=18 op=LOAD Dec 13 14:18:11.984821 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:11.985248 kernel: audit: type=1334 audit(1734099491.980:132): prog-id=18 op=LOAD Dec 13 14:18:11.985287 kernel: audit: type=1334 audit(1734099491.983:133): prog-id=19 op=LOAD Dec 13 14:18:11.985306 kernel: audit: type=1334 audit(1734099491.983:134): prog-id=7 op=UNLOAD Dec 13 14:18:11.985324 kernel: audit: type=1334 audit(1734099491.983:135): prog-id=8 op=UNLOAD Dec 13 14:18:11.983000 audit: BPF prog-id=19 op=LOAD Dec 13 14:18:11.983000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:18:11.983000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:18:12.004121 systemd-udevd[1018]: Using default interface naming scheme 'v252'. Dec 13 14:18:12.016995 systemd[1]: Started systemd-udevd.service. 
Dec 13 14:18:12.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.022739 kernel: audit: type=1130 audit(1734099492.017:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.021000 audit: BPF prog-id=20 op=LOAD Dec 13 14:18:12.024931 systemd[1]: Starting systemd-networkd.service... Dec 13 14:18:12.025745 kernel: audit: type=1334 audit(1734099492.021:137): prog-id=20 op=LOAD Dec 13 14:18:12.029000 audit: BPF prog-id=21 op=LOAD Dec 13 14:18:12.030000 audit: BPF prog-id=22 op=LOAD Dec 13 14:18:12.032116 kernel: audit: type=1334 audit(1734099492.029:138): prog-id=21 op=LOAD Dec 13 14:18:12.032209 kernel: audit: type=1334 audit(1734099492.030:139): prog-id=22 op=LOAD Dec 13 14:18:12.032238 kernel: audit: type=1334 audit(1734099492.031:140): prog-id=23 op=LOAD Dec 13 14:18:12.031000 audit: BPF prog-id=23 op=LOAD Dec 13 14:18:12.033135 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:18:12.046331 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:18:12.084655 systemd[1]: Started systemd-userdbd.service. Dec 13 14:18:12.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.101735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:18:12.107749 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:18:12.110436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:12.118000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:18:12.147414 systemd-networkd[1031]: lo: Link UP Dec 13 14:18:12.147753 systemd-networkd[1031]: lo: Gained carrier Dec 13 14:18:12.148289 systemd-networkd[1031]: Enumeration completed Dec 13 14:18:12.148473 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:12.148481 systemd[1]: Started systemd-networkd.service. Dec 13 14:18:12.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:12.150385 systemd-networkd[1031]: eth0: Link UP Dec 13 14:18:12.150463 systemd-networkd[1031]: eth0: Gained carrier Dec 13 14:18:12.164828 systemd-networkd[1031]: eth0: DHCPv4 address 10.0.0.23/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:12.118000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c1607ca260 a1=337fc a2=7fc472cc6bc5 a3=5 items=110 ppid=1018 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:12.118000 audit: CWD cwd="/" Dec 13 14:18:12.118000 audit: PATH item=0 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=1 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=2 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=3 name=(null) inode=13533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=4 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=5 name=(null) inode=13534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=6 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=7 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=8 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=9 name=(null) inode=13536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=10 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=11 name=(null) inode=13537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=12 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=13 name=(null) inode=13538 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=14 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=15 name=(null) inode=13539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=16 name=(null) inode=13535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=17 name=(null) inode=13540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=18 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=19 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=20 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=21 name=(null) inode=13542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=22 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=23 name=(null) inode=13543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=24 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=25 name=(null) inode=13544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=26 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=27 name=(null) inode=13545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=28 name=(null) inode=13541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=29 name=(null) inode=13546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=30 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=31 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=32 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=33 name=(null) inode=13548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=34 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=35 name=(null) inode=13549 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=36 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=37 name=(null) inode=13550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=38 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=39 name=(null) inode=13551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=40 name=(null) inode=13547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=41 name=(null) inode=13552 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=42 name=(null) inode=13532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=43 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=44 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=45 name=(null) inode=13554 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:18:12.118000 audit: PATH item=46 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=47 name=(null) inode=13555 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=48 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=49 name=(null) inode=13556 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=50 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=51 name=(null) inode=13557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=52 name=(null) inode=13553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=53 name=(null) inode=13558 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=54 name=(null) inode=2064 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=55 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=56 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=57 name=(null) inode=13560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=58 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=59 name=(null) inode=13561 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=60 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=61 name=(null) inode=13562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=62 name=(null) inode=13562 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=63 name=(null) inode=13563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=64 name=(null) inode=13562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=65 name=(null) inode=13564 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=66 name=(null) inode=13562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=67 name=(null) inode=13565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=68 name=(null) inode=13562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=69 name=(null) inode=13566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=70 name=(null) inode=13562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=71 name=(null) inode=13567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=72 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=73 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=74 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=75 name=(null) inode=13569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=76 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=77 name=(null) inode=13570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=78 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=79 name=(null) inode=13571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=80 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=81 name=(null) inode=13572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=82 name=(null) inode=13568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=83 name=(null) inode=13573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=84 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=85 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=86 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=87 name=(null) inode=13575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=88 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=89 name=(null) inode=13576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=90 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=91 name=(null) inode=13577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=92 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=93 name=(null) inode=13578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=94 name=(null) inode=13574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:18:12.118000 audit: PATH item=95 name=(null) inode=13579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=96 name=(null) inode=13559 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=97 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=98 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=99 name=(null) inode=13581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=100 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=101 name=(null) inode=13582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=102 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=103 name=(null) inode=13583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=104 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=105 name=(null) inode=13584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=106 name=(null) inode=13580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=107 name=(null) inode=13585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PATH item=109 name=(null) inode=14670 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:12.118000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:18:12.205731 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:18:12.210726 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:18:12.211039 
kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:18:12.211266 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:18:12.213731 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:18:12.220240 kernel: kvm: Nested Virtualization enabled Dec 13 14:18:12.220324 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:18:12.220341 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:18:12.220354 kernel: SVM: Virtual GIF supported Dec 13 14:18:12.238730 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:18:12.286200 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:18:12.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.288446 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:18:12.296565 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:12.323901 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:18:12.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.325395 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:12.328113 systemd[1]: Starting lvm2-activation.service... Dec 13 14:18:12.331807 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:12.357728 systemd[1]: Finished lvm2-activation.service. Dec 13 14:18:12.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.358777 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:12.359735 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:18:12.359763 systemd[1]: Reached target local-fs.target. Dec 13 14:18:12.360621 systemd[1]: Reached target machines.target. Dec 13 14:18:12.362731 systemd[1]: Starting ldconfig.service... Dec 13 14:18:12.364357 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:12.364391 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:12.365394 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:18:12.368143 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:18:12.370582 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:18:12.373203 systemd[1]: Starting systemd-sysext.service... Dec 13 14:18:12.374372 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Dec 13 14:18:12.376499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:18:12.382092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:18:12.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:12.387002 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:18:12.390289 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:18:12.390477 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:18:12.407739 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:18:12.422973 systemd-fsck[1064]: fsck.fat 4.2 (2021-01-31) Dec 13 14:18:12.422973 systemd-fsck[1064]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:18:12.452568 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:18:12.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.455386 systemd[1]: Mounting boot.mount... Dec 13 14:18:12.681408 systemd[1]: Mounted boot.mount. Dec 13 14:18:12.703722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:18:12.722585 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:18:12.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.735764 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:18:12.740817 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:18:12.741225 (sd-sysext)[1069]: Using extensions 'kubernetes'. Dec 13 14:18:12.741492 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:18:12.741523 (sd-sysext)[1069]: Merged extensions into '/usr'. Dec 13 14:18:12.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.757676 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:12.759500 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:18:12.761001 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:12.763286 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:12.767089 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:12.812770 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:12.813913 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:12.814105 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:12.814224 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:12.816652 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:18:12.817885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:12.818005 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:12.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:12.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.819402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:12.819502 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:12.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.820929 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:12.821023 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:12.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.822422 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:12.822510 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:12.823302 systemd[1]: Finished systemd-sysext.service. Dec 13 14:18:12.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:12.825587 systemd[1]: Starting ensure-sysext.service... Dec 13 14:18:12.827425 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:18:12.830972 systemd[1]: Reloading. Dec 13 14:18:12.838006 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:18:12.840447 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:18:12.841262 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:18:12.843906 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:18:12.935881 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2024-12-13T14:18:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:12.935910 /usr/lib/systemd/system-generators/torcx-generator[1095]: time="2024-12-13T14:18:12Z" level=info msg="torcx already run" Dec 13 14:18:12.995759 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
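The CPUShares= warning above is paired with a MemoryLimit= warning just after it; both point at lines 8-9 of /usr/lib/systemd/system/locksmithd.service and name their replacements outright. A sketch of the rewritten directives, with placeholder values since the unit's actual numbers are not shown in the log:

    [Service]
    CPUWeight=100      # replaces the deprecated CPUShares= (locksmithd.service line 8)
    MemoryMax=512M     # replaces the deprecated MemoryLimit= (locksmithd.service line 9)

The docker.socket notice that follows is the same kind of fix: pointing ListenStream= at /run/docker.sock instead of the legacy /var/run/ path, exactly as the updated value in the log shows.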
Dec 13 14:18:12.995778 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:13.013156 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:13.064000 audit: BPF prog-id=24 op=LOAD Dec 13 14:18:13.064000 audit: BPF prog-id=25 op=LOAD Dec 13 14:18:13.064000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:18:13.064000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:18:13.066000 audit: BPF prog-id=26 op=LOAD Dec 13 14:18:13.066000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:18:13.067000 audit: BPF prog-id=27 op=LOAD Dec 13 14:18:13.067000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:18:13.067000 audit: BPF prog-id=28 op=LOAD Dec 13 14:18:13.067000 audit: BPF prog-id=29 op=LOAD Dec 13 14:18:13.067000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:18:13.068000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:18:13.069000 audit: BPF prog-id=30 op=LOAD Dec 13 14:18:13.069000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:18:13.069000 audit: BPF prog-id=31 op=LOAD Dec 13 14:18:13.069000 audit: BPF prog-id=32 op=LOAD Dec 13 14:18:13.069000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:18:13.069000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:18:13.072467 systemd[1]: Finished ldconfig.service. Dec 13 14:18:13.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.074430 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:18:13.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.078409 systemd[1]: Starting audit-rules.service... Dec 13 14:18:13.080446 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:18:13.082902 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:18:13.084000 audit: BPF prog-id=33 op=LOAD Dec 13 14:18:13.086277 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:13.087000 audit: BPF prog-id=34 op=LOAD Dec 13 14:18:13.089537 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:18:13.092089 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:18:13.093900 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:18:13.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.097691 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:13.100868 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:18:13.100000 audit[1149]: SYSTEM_BOOT pid=1149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:13.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.105150 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.106952 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:13.109567 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:13.112374 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:13.113431 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.113710 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:13.115805 systemd[1]: Starting systemd-update-done.service... Dec 13 14:18:13.117017 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:13.119505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:13.119684 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:13.121594 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:13.121764 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:13.123683 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:13.123974 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:13.127815 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.144833 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:13.147554 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:13.149759 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:13.150863 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.151011 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:13.151221 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:13.152288 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:18:13.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:13.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.158418 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:13.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:13.157000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:18:13.157000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7c2b8e60 a2=420 a3=0 items=0 ppid=1138 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:13.157000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:18:13.159841 augenrules[1161]: No rules Dec 13 14:18:13.158598 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:13.160028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:13.160189 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:13.161605 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:13.161834 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:13.163714 systemd[1]: Finished audit-rules.service. Dec 13 14:18:13.166615 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:13.167049 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.167556 systemd[1]: Finished systemd-update-done.service. Dec 13 14:18:13.172179 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.173765 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:13.176089 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:13.178310 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:13.180629 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:13.181839 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.181957 systemd-resolved[1144]: Positive Trust Anchors: Dec 13 14:18:13.181972 systemd-resolved[1144]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:13.181988 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
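The PROCTITLE field in the audit record above is the auditctl command line, hex-encoded with NUL-separated arguments. Decoding it (hex string copied verbatim from the record; xxd and tr assumed available on the host) shows the rule-load command that audit-rules.service ran before augenrules reported "No rules":

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\000' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules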
Dec 13 14:18:13.182007 systemd-resolved[1144]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:13.182045 systemd-timesyncd[1148]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:18:13.182117 systemd-timesyncd[1148]: Initial clock synchronization to Fri 2024-12-13 14:18:12.998750 UTC. Dec 13 14:18:13.183374 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:18:13.184837 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:13.186100 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:18:13.187902 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:13.188074 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:13.189376 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:13.189511 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:13.190642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:13.190783 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:13.192041 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:13.192156 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:13.193462 systemd[1]: Reached target time-set.target. Dec 13 14:18:13.194458 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:13.194507 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.194905 systemd[1]: Finished ensure-sysext.service. Dec 13 14:18:13.197436 systemd-resolved[1144]: Defaulting to hostname 'linux'. Dec 13 14:18:13.198880 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:13.199807 systemd[1]: Reached target network.target. Dec 13 14:18:13.200600 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:13.201435 systemd[1]: Reached target sysinit.target. Dec 13 14:18:13.202307 systemd[1]: Started motdgen.path. Dec 13 14:18:13.203024 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:18:13.204273 systemd[1]: Started logrotate.timer. Dec 13 14:18:13.205071 systemd[1]: Started mdadm.timer. Dec 13 14:18:13.205834 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:18:13.206690 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:18:13.206735 systemd[1]: Reached target paths.target. Dec 13 14:18:13.207494 systemd[1]: Reached target timers.target. Dec 13 14:18:13.208580 systemd[1]: Listening on dbus.socket. Dec 13 14:18:13.210383 systemd[1]: Starting docker.socket... Dec 13 14:18:13.213147 systemd[1]: Listening on sshd.socket. Dec 13 14:18:13.214002 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:13.214358 systemd[1]: Listening on docker.socket. 
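The entries above show systemd-timesyncd synchronizing against 10.0.0.1:123, the DHCP server/gateway address acquired earlier on eth0, and systemd-resolved defaulting the hostname to 'linux'. A short sketch of how the same state could be checked interactively, assuming the standard systemd client tools are present on the image:

    timedatectl timesync-status     # current NTP server (10.0.0.1 here), stratum, poll interval
    resolvectl status               # per-link DNS servers and DNSSEC status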
Dec 13 14:18:13.215188 systemd[1]: Reached target sockets.target. Dec 13 14:18:13.215984 systemd[1]: Reached target basic.target. Dec 13 14:18:13.216826 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.216848 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:13.217885 systemd[1]: Starting containerd.service... Dec 13 14:18:13.219795 systemd[1]: Starting dbus.service... Dec 13 14:18:13.221526 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:18:13.223804 systemd[1]: Starting extend-filesystems.service... Dec 13 14:18:13.224889 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:18:13.226032 systemd[1]: Starting motdgen.service... Dec 13 14:18:13.226641 jq[1180]: false Dec 13 14:18:13.228826 systemd[1]: Starting prepare-helm.service... Dec 13 14:18:13.231878 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:18:13.234165 systemd[1]: Starting sshd-keygen.service... Dec 13 14:18:13.237986 systemd[1]: Starting systemd-logind.service... Dec 13 14:18:13.239149 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:13.239274 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:18:13.239933 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:18:13.241094 systemd[1]: Starting update-engine.service... Dec 13 14:18:13.244933 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:18:13.249908 jq[1198]: true Dec 13 14:18:13.250640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:18:13.250926 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:18:13.251335 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:18:13.251513 systemd[1]: Finished motdgen.service. Dec 13 14:18:13.251741 dbus-daemon[1179]: [system] SELinux support is enabled Dec 13 14:18:13.252965 extend-filesystems[1181]: Found loop1 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found sr0 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda1 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda2 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda3 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found usr Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda4 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda6 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda7 Dec 13 14:18:13.252965 extend-filesystems[1181]: Found vda9 Dec 13 14:18:13.252965 extend-filesystems[1181]: Checking size of /dev/vda9 Dec 13 14:18:13.252795 systemd[1]: Started dbus.service. Dec 13 14:18:13.275854 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:18:13.276128 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:18:13.284856 jq[1204]: true Dec 13 14:18:13.286918 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 14:18:13.286991 systemd[1]: Reached target system-config.target. Dec 13 14:18:13.288312 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:18:13.288359 systemd[1]: Reached target user-config.target. Dec 13 14:18:13.291330 tar[1203]: linux-amd64/helm Dec 13 14:18:13.291826 systemd-logind[1192]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:18:13.291853 systemd-logind[1192]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:18:13.292152 systemd-logind[1192]: New seat seat0. Dec 13 14:18:13.295264 systemd[1]: Started systemd-logind.service. Dec 13 14:18:13.386755 extend-filesystems[1181]: Resized partition /dev/vda9 Dec 13 14:18:13.391736 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:13.391845 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:13.393462 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:18:13.395490 update_engine[1197]: I1213 14:18:13.392431 1197 main.cc:92] Flatcar Update Engine starting Dec 13 14:18:13.401744 systemd[1]: Started update-engine.service. Dec 13 14:18:13.407958 bash[1223]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:18:13.405504 systemd[1]: Started locksmithd.service. Dec 13 14:18:13.407793 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:18:13.409957 update_engine[1197]: I1213 14:18:13.409917 1197 update_check_scheduler.cc:74] Next update check in 2m59s Dec 13 14:18:13.413751 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:18:13.434431 env[1205]: time="2024-12-13T14:18:13.434355076Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:18:13.450734 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:18:13.470914 env[1205]: time="2024-12-13T14:18:13.463836474Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:18:13.471020 env[1205]: time="2024-12-13T14:18:13.470942500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.471270 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:18:13.471270 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:18:13.471270 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:18:13.477827 extend-filesystems[1181]: Resized filesystem in /dev/vda9 Dec 13 14:18:13.474581 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472068742Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472109138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472414260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472430190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472441081Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472449206Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.472545426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.473004227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.473161722Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:13.479404 env[1205]: time="2024-12-13T14:18:13.473175799Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:18:13.474746 systemd[1]: Finished extend-filesystems.service. Dec 13 14:18:13.479904 env[1205]: time="2024-12-13T14:18:13.473241011Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:18:13.479904 env[1205]: time="2024-12-13T14:18:13.473259696Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:18:13.480622 env[1205]: time="2024-12-13T14:18:13.480559075Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:18:13.480771 env[1205]: time="2024-12-13T14:18:13.480626261Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:18:13.480771 env[1205]: time="2024-12-13T14:18:13.480690802Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:18:13.480908 env[1205]: time="2024-12-13T14:18:13.480835974Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.480908 env[1205]: time="2024-12-13T14:18:13.480868515Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.480908 env[1205]: time="2024-12-13T14:18:13.480888022Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.480908 env[1205]: time="2024-12-13T14:18:13.480899573Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.481075 env[1205]: time="2024-12-13T14:18:13.480911175Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 14:18:13.481075 env[1205]: time="2024-12-13T14:18:13.480954036Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.481075 env[1205]: time="2024-12-13T14:18:13.480980996Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.481075 env[1205]: time="2024-12-13T14:18:13.480997106Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.481075 env[1205]: time="2024-12-13T14:18:13.481011373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:18:13.481295 env[1205]: time="2024-12-13T14:18:13.481165051Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:18:13.481295 env[1205]: time="2024-12-13T14:18:13.481254369Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:18:13.482279 env[1205]: time="2024-12-13T14:18:13.482241700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:18:13.482427 env[1205]: time="2024-12-13T14:18:13.482313916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482427 env[1205]: time="2024-12-13T14:18:13.482348741Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:18:13.482582 env[1205]: time="2024-12-13T14:18:13.482553305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482719 env[1205]: time="2024-12-13T14:18:13.482587689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482719 env[1205]: time="2024-12-13T14:18:13.482623426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482719 env[1205]: time="2024-12-13T14:18:13.482654014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482719 env[1205]: time="2024-12-13T14:18:13.482668020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482719 env[1205]: time="2024-12-13T14:18:13.482687797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482858 env[1205]: time="2024-12-13T14:18:13.482720819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482858 env[1205]: time="2024-12-13T14:18:13.482734595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482858 env[1205]: time="2024-12-13T14:18:13.482747569Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482860771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482876501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482889305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482900536Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482917798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:18:13.482950 env[1205]: time="2024-12-13T14:18:13.482929290Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:18:13.483331 env[1205]: time="2024-12-13T14:18:13.482953816Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:18:13.483331 env[1205]: time="2024-12-13T14:18:13.483026843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:18:13.483591 env[1205]: time="2024-12-13T14:18:13.483475123Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:18:13.483591 env[1205]: time="2024-12-13T14:18:13.483617531Z" level=info msg="Connect containerd service" Dec 13 14:18:13.484761 env[1205]: time="2024-12-13T14:18:13.483996281Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 
13 14:18:13.484761 env[1205]: time="2024-12-13T14:18:13.484680614Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:18:13.485067 env[1205]: time="2024-12-13T14:18:13.485039848Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:18:13.485144 env[1205]: time="2024-12-13T14:18:13.485084932Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:18:13.485171 systemd[1]: Started containerd.service. Dec 13 14:18:13.486210 env[1205]: time="2024-12-13T14:18:13.485242247Z" level=info msg="containerd successfully booted in 0.057047s" Dec 13 14:18:13.486728 env[1205]: time="2024-12-13T14:18:13.486651029Z" level=info msg="Start subscribing containerd event" Dec 13 14:18:13.486803 env[1205]: time="2024-12-13T14:18:13.486747911Z" level=info msg="Start recovering state" Dec 13 14:18:13.486851 env[1205]: time="2024-12-13T14:18:13.486832700Z" level=info msg="Start event monitor" Dec 13 14:18:13.486904 env[1205]: time="2024-12-13T14:18:13.486859811Z" level=info msg="Start snapshots syncer" Dec 13 14:18:13.486904 env[1205]: time="2024-12-13T14:18:13.486873567Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:18:13.486904 env[1205]: time="2024-12-13T14:18:13.486898874Z" level=info msg="Start streaming server" Dec 13 14:18:13.549413 locksmithd[1229]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:18:13.895237 tar[1203]: linux-amd64/LICENSE Dec 13 14:18:13.895603 tar[1203]: linux-amd64/README.md Dec 13 14:18:13.900727 systemd[1]: Finished prepare-helm.service. Dec 13 14:18:14.132055 systemd-networkd[1031]: eth0: Gained IPv6LL Dec 13 14:18:14.134219 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:18:14.135903 systemd[1]: Reached target network-online.target. Dec 13 14:18:14.138718 systemd[1]: Starting kubelet.service... Dec 13 14:18:14.421151 sshd_keygen[1199]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:18:14.445868 systemd[1]: Finished sshd-keygen.service. Dec 13 14:18:14.448696 systemd[1]: Starting issuegen.service... Dec 13 14:18:14.455267 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:18:14.455471 systemd[1]: Finished issuegen.service. Dec 13 14:18:14.459056 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:18:14.466318 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:18:14.469891 systemd[1]: Started getty@tty1.service. Dec 13 14:18:14.472797 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:18:14.474508 systemd[1]: Reached target getty.target. Dec 13 14:18:14.871184 systemd[1]: Started kubelet.service. Dec 13 14:18:14.872510 systemd[1]: Reached target multi-user.target. Dec 13 14:18:14.874738 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:18:14.881831 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:18:14.881973 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:18:14.883131 systemd[1]: Startup finished in 1.013s (kernel) + 5.053s (initrd) + 7.814s (userspace) = 13.881s. 
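The "Startup finished" entry above breaks the boot time into kernel, initrd, and userspace stages. The following minimal Python sketch (an illustration, not part of Flatcar's tooling) parses that exact line; the printed sum differs from the logged 13.881s total only because systemd rounds each component before printing.

import re

# The "Startup finished" entry as captured above.
line = ("Startup finished in 1.013s (kernel) + 5.053s (initrd) "
        "+ 7.814s (userspace) = 13.881s.")

# Pull out each "<seconds>s (<stage>)" pair and the reported total.
stages = {stage: float(sec)
          for sec, stage in re.findall(r"([\d.]+)s \((\w+)\)", line)}
total = float(re.search(r"= ([\d.]+)s", line).group(1))

print(stages)                                 # {'kernel': 1.013, 'initrd': 5.053, 'userspace': 7.814}
print(round(sum(stages.values()), 3), total)  # 13.88 vs 13.881 (per-component rounding)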
Dec 13 14:18:15.603237 kubelet[1261]: E1213 14:18:15.603165 1261 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:15.604889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:15.605002 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:15.605272 systemd[1]: kubelet.service: Consumed 1.390s CPU time. Dec 13 14:18:22.135027 systemd[1]: Created slice system-sshd.slice. Dec 13 14:18:22.136422 systemd[1]: Started sshd@0-10.0.0.23:22-10.0.0.1:47170.service. Dec 13 14:18:22.182168 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 47170 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:22.184002 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.193524 systemd[1]: Created slice user-500.slice. Dec 13 14:18:22.194870 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:18:22.196633 systemd-logind[1192]: New session 1 of user core. Dec 13 14:18:22.205168 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:18:22.207036 systemd[1]: Starting user@500.service... Dec 13 14:18:22.210484 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.284023 systemd[1274]: Queued start job for default target default.target. Dec 13 14:18:22.284518 systemd[1274]: Reached target paths.target. Dec 13 14:18:22.284534 systemd[1274]: Reached target sockets.target. Dec 13 14:18:22.284546 systemd[1274]: Reached target timers.target. Dec 13 14:18:22.284559 systemd[1274]: Reached target basic.target. Dec 13 14:18:22.284605 systemd[1274]: Reached target default.target. Dec 13 14:18:22.284629 systemd[1274]: Startup finished in 67ms. Dec 13 14:18:22.284720 systemd[1]: Started user@500.service. Dec 13 14:18:22.285661 systemd[1]: Started session-1.scope. Dec 13 14:18:22.336174 systemd[1]: Started sshd@1-10.0.0.23:22-10.0.0.1:47176.service. Dec 13 14:18:22.375683 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 47176 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:22.377138 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.381569 systemd-logind[1192]: New session 2 of user core. Dec 13 14:18:22.383005 systemd[1]: Started session-2.scope. Dec 13 14:18:22.439051 sshd[1283]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:22.443171 systemd[1]: Started sshd@2-10.0.0.23:22-10.0.0.1:47192.service. Dec 13 14:18:22.443849 systemd[1]: sshd@1-10.0.0.23:22-10.0.0.1:47176.service: Deactivated successfully. Dec 13 14:18:22.444599 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:18:22.445265 systemd-logind[1192]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:18:22.446208 systemd-logind[1192]: Removed session 2. Dec 13 14:18:22.483509 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 47192 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:22.485010 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.488429 systemd-logind[1192]: New session 3 of user core. Dec 13 14:18:22.489211 systemd[1]: Started session-3.scope. 
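The kubelet failure at the top of this stretch repeats on every restart for the same reason: /var/lib/kubelet/config.yaml does not exist yet (on this kind of node it is typically written later during cluster bootstrap, e.g. by kubeadm; that is an assumption about this setup, not something shown in the capture). A small Python sketch of the same precondition check, with the path taken verbatim from the error:

from pathlib import Path

# Path reported in the kubelet error above.
config_path = Path("/var/lib/kubelet/config.yaml")

if not config_path.is_file():
    # Mirrors the logged failure: kubelet exits with status 1 and systemd
    # keeps scheduling restarts until the file is provisioned.
    print(f"kubelet config missing: {config_path} "
          "(expect kubelet.service to keep restarting until it exists)")
else:
    print(f"kubelet config present ({config_path.stat().st_size} bytes)")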
Dec 13 14:18:22.538219 sshd[1288]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:22.540960 systemd[1]: sshd@2-10.0.0.23:22-10.0.0.1:47192.service: Deactivated successfully. Dec 13 14:18:22.541521 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:18:22.542022 systemd-logind[1192]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:18:22.543146 systemd[1]: Started sshd@3-10.0.0.23:22-10.0.0.1:47194.service. Dec 13 14:18:22.543758 systemd-logind[1192]: Removed session 3. Dec 13 14:18:22.581755 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 47194 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:22.583007 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.586603 systemd-logind[1192]: New session 4 of user core. Dec 13 14:18:22.587421 systemd[1]: Started session-4.scope. Dec 13 14:18:22.639777 sshd[1296]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:22.642518 systemd[1]: sshd@3-10.0.0.23:22-10.0.0.1:47194.service: Deactivated successfully. Dec 13 14:18:22.643051 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:18:22.643511 systemd-logind[1192]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:18:22.644519 systemd[1]: Started sshd@4-10.0.0.23:22-10.0.0.1:47202.service. Dec 13 14:18:22.645178 systemd-logind[1192]: Removed session 4. Dec 13 14:18:22.681425 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 47202 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:22.682632 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:22.685957 systemd-logind[1192]: New session 5 of user core. Dec 13 14:18:22.686729 systemd[1]: Started session-5.scope. Dec 13 14:18:22.740894 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:18:22.741087 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:22.768615 systemd[1]: Starting docker.service... 
Dec 13 14:18:22.940648 env[1317]: time="2024-12-13T14:18:22.940571151Z" level=info msg="Starting up" Dec 13 14:18:22.942154 env[1317]: time="2024-12-13T14:18:22.942126028Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:22.942154 env[1317]: time="2024-12-13T14:18:22.942147031Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:22.942250 env[1317]: time="2024-12-13T14:18:22.942166852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:22.942250 env[1317]: time="2024-12-13T14:18:22.942178620Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:22.947013 env[1317]: time="2024-12-13T14:18:22.945833400Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:22.947013 env[1317]: time="2024-12-13T14:18:22.945867852Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:22.947013 env[1317]: time="2024-12-13T14:18:22.945891281Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:22.947013 env[1317]: time="2024-12-13T14:18:22.945902493Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:23.199039 env[1317]: time="2024-12-13T14:18:23.198883507Z" level=info msg="Loading containers: start." Dec 13 14:18:23.334735 kernel: Initializing XFRM netlink socket Dec 13 14:18:23.364322 env[1317]: time="2024-12-13T14:18:23.364276116Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:18:23.415032 systemd-networkd[1031]: docker0: Link UP Dec 13 14:18:23.434011 env[1317]: time="2024-12-13T14:18:23.433975974Z" level=info msg="Loading containers: done." Dec 13 14:18:23.449980 env[1317]: time="2024-12-13T14:18:23.449836760Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:18:23.450165 env[1317]: time="2024-12-13T14:18:23.450108489Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:18:23.450268 env[1317]: time="2024-12-13T14:18:23.450244711Z" level=info msg="Daemon has completed initialization" Dec 13 14:18:23.495109 systemd[1]: Started docker.service. Dec 13 14:18:23.499012 env[1317]: time="2024-12-13T14:18:23.498929636Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:18:24.191858 env[1205]: time="2024-12-13T14:18:24.191790108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:18:24.801954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1303729350.mount: Deactivated successfully. Dec 13 14:18:25.856101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:18:25.856407 systemd[1]: Stopped kubelet.service. Dec 13 14:18:25.856464 systemd[1]: kubelet.service: Consumed 1.390s CPU time. Dec 13 14:18:25.858098 systemd[1]: Starting kubelet.service... Dec 13 14:18:25.972237 systemd[1]: Started kubelet.service. 
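The docker daemon noted above that the default bridge gets 172.17.0.0/16 and that --bip can override it. A short Python illustration of what that default range provides; the alternative subnet in the trailing comment is a purely hypothetical example.

import ipaddress

# Default bridge network reported by the docker daemon above.
docker0 = ipaddress.ip_network("172.17.0.0/16")

print(docker0.num_addresses - 2)  # 65534 usable host addresses
print(next(docker0.hosts()))      # 172.17.0.1, typically taken by the bridge itself

# A daemon started with e.g. --bip=10.100.0.1/24 (hypothetical value) would
# put docker0 and its containers in that smaller range instead.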
Dec 13 14:18:26.038617 kubelet[1458]: E1213 14:18:26.038540 1458 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:26.041741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:26.041878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:27.365184 env[1205]: time="2024-12-13T14:18:27.365084040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:27.367075 env[1205]: time="2024-12-13T14:18:27.367048119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:27.368984 env[1205]: time="2024-12-13T14:18:27.368913692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:27.370855 env[1205]: time="2024-12-13T14:18:27.370819317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:27.371545 env[1205]: time="2024-12-13T14:18:27.371502305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 14:18:27.393136 env[1205]: time="2024-12-13T14:18:27.393097675Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:18:30.178928 env[1205]: time="2024-12-13T14:18:30.178855900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:30.180973 env[1205]: time="2024-12-13T14:18:30.180899422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:30.185573 env[1205]: time="2024-12-13T14:18:30.185530175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:30.187368 env[1205]: time="2024-12-13T14:18:30.187342878Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:30.188274 env[1205]: time="2024-12-13T14:18:30.188242951Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 14:18:30.197656 env[1205]: time="2024-12-13T14:18:30.197603325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 
14:18:32.100183 env[1205]: time="2024-12-13T14:18:32.100112809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:32.103588 env[1205]: time="2024-12-13T14:18:32.103529572Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:32.105732 env[1205]: time="2024-12-13T14:18:32.105692840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:32.108000 env[1205]: time="2024-12-13T14:18:32.107979254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:32.108942 env[1205]: time="2024-12-13T14:18:32.108904344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 14:18:32.142033 env[1205]: time="2024-12-13T14:18:32.141976577Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:18:33.960758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2036911997.mount: Deactivated successfully. Dec 13 14:18:34.919251 env[1205]: time="2024-12-13T14:18:34.919166902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:34.921221 env[1205]: time="2024-12-13T14:18:34.921163242Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:34.922529 env[1205]: time="2024-12-13T14:18:34.922493769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:34.924212 env[1205]: time="2024-12-13T14:18:34.924152712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:34.924560 env[1205]: time="2024-12-13T14:18:34.924520759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:18:34.957727 env[1205]: time="2024-12-13T14:18:34.957654036Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:18:35.549190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295066865.mount: Deactivated successfully. Dec 13 14:18:36.135337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:18:36.135596 systemd[1]: Stopped kubelet.service. Dec 13 14:18:36.137658 systemd[1]: Starting kubelet.service... Dec 13 14:18:36.225215 systemd[1]: Started kubelet.service. 
Dec 13 14:18:36.416607 kubelet[1498]: E1213 14:18:36.416468 1498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:36.418691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:36.418873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:37.285576 env[1205]: time="2024-12-13T14:18:37.285421760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:37.288298 env[1205]: time="2024-12-13T14:18:37.288250624Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:37.291758 env[1205]: time="2024-12-13T14:18:37.291726614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:37.293673 env[1205]: time="2024-12-13T14:18:37.293600261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:37.294481 env[1205]: time="2024-12-13T14:18:37.294445089Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:18:37.330619 env[1205]: time="2024-12-13T14:18:37.330559311Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:18:38.074379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount152237290.mount: Deactivated successfully. 
Dec 13 14:18:38.080737 env[1205]: time="2024-12-13T14:18:38.080677575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:38.082858 env[1205]: time="2024-12-13T14:18:38.082791106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:38.084672 env[1205]: time="2024-12-13T14:18:38.084624803Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:38.086614 env[1205]: time="2024-12-13T14:18:38.086562999Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:38.086967 env[1205]: time="2024-12-13T14:18:38.086933911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:18:38.103302 env[1205]: time="2024-12-13T14:18:38.103255980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:18:39.037548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702596000.mount: Deactivated successfully. Dec 13 14:18:42.009827 env[1205]: time="2024-12-13T14:18:42.009740167Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:42.011767 env[1205]: time="2024-12-13T14:18:42.011733291Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:42.013667 env[1205]: time="2024-12-13T14:18:42.013613955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:42.015529 env[1205]: time="2024-12-13T14:18:42.015498954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:42.016139 env[1205]: time="2024-12-13T14:18:42.016094508Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 14:18:44.349924 systemd[1]: Stopped kubelet.service. Dec 13 14:18:44.352603 systemd[1]: Starting kubelet.service... Dec 13 14:18:44.373379 systemd[1]: Reloading. 
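Each image pull in the entries above finishes with a "returns image reference" message pairing the requested tag with the image ID containerd resolved it to. A minimal Python sketch that extracts that mapping; the sample string is the pause:3.9 entry logged above.

import re

# "PullImage ... returns image reference ..." message as logged above (pause:3.9).
msg = ('PullImage "registry.k8s.io/pause:3.9" returns image reference '
       '"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"')

m = re.search(r'PullImage "([^"]+)" returns image reference "([^"]+)"', msg)
image, reference = m.groups()

print(image)      # registry.k8s.io/pause:3.9
print(reference)  # sha256:e6f18168...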
Dec 13 14:18:44.491422 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2024-12-13T14:18:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:44.491454 /usr/lib/systemd/system-generators/torcx-generator[1623]: time="2024-12-13T14:18:44Z" level=info msg="torcx already run" Dec 13 14:18:44.577144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:44.577161 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:44.596191 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:44.678135 systemd[1]: Started kubelet.service. Dec 13 14:18:44.679643 systemd[1]: Stopping kubelet.service... Dec 13 14:18:44.680037 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:18:44.680220 systemd[1]: Stopped kubelet.service. Dec 13 14:18:44.681863 systemd[1]: Starting kubelet.service... Dec 13 14:18:44.767403 systemd[1]: Started kubelet.service. Dec 13 14:18:44.831223 kubelet[1672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:18:44.831223 kubelet[1672]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:18:44.831223 kubelet[1672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
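The reload above also surfaces two directive deprecations in locksmithd.service (CPUShares= and MemoryLimit=) together with their suggested replacements. A small Python sketch that flags those directives in a unit file's text, using only the substitutions named in the warnings; the sample unit body is hypothetical and this is an illustration, not a migration tool.

# Replacements exactly as suggested by the systemd warnings above.
DEPRECATED = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
}

def flag_deprecated(unit_text: str):
    """Yield (line_no, old, suggested) for deprecated directives in a unit file."""
    for no, line in enumerate(unit_text.splitlines(), start=1):
        for old, new in DEPRECATED.items():
            if line.lstrip().startswith(old):
                yield no, old, new

sample = "[Service]\nCPUShares=1024\nMemoryLimit=512M\n"  # hypothetical unit body
for no, old, new in flag_deprecated(sample):
    print(f"line {no}: {old} is deprecated, use {new}")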
Dec 13 14:18:44.832647 kubelet[1672]: I1213 14:18:44.832582 1672 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:18:45.328151 kubelet[1672]: I1213 14:18:45.328090 1672 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:18:45.328151 kubelet[1672]: I1213 14:18:45.328137 1672 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:18:45.328442 kubelet[1672]: I1213 14:18:45.328420 1672 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:18:45.355606 kubelet[1672]: I1213 14:18:45.355543 1672 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:18:45.356442 kubelet[1672]: E1213 14:18:45.356414 1672 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.373258 kubelet[1672]: I1213 14:18:45.373209 1672 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:18:45.375797 kubelet[1672]: I1213 14:18:45.375750 1672 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:18:45.376002 kubelet[1672]: I1213 14:18:45.375793 1672 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:18:45.377190 kubelet[1672]: I1213 14:18:45.377163 1672 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:18:45.377190 kubelet[1672]: I1213 14:18:45.377185 1672 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:18:45.377359 kubelet[1672]: I1213 14:18:45.377336 1672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 
14:18:45.378195 kubelet[1672]: I1213 14:18:45.378169 1672 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:18:45.378195 kubelet[1672]: I1213 14:18:45.378191 1672 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:18:45.378289 kubelet[1672]: I1213 14:18:45.378240 1672 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:18:45.378289 kubelet[1672]: I1213 14:18:45.378263 1672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:18:45.378810 kubelet[1672]: W1213 14:18:45.378749 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.378871 kubelet[1672]: E1213 14:18:45.378832 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.383960 kubelet[1672]: W1213 14:18:45.383890 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.383960 kubelet[1672]: E1213 14:18:45.383959 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.389738 kubelet[1672]: I1213 14:18:45.389525 1672 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:18:45.391236 kubelet[1672]: I1213 14:18:45.391214 1672 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:18:45.391316 kubelet[1672]: W1213 14:18:45.391298 1672 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:18:45.391909 kubelet[1672]: I1213 14:18:45.391890 1672 server.go:1264] "Started kubelet" Dec 13 14:18:45.392444 kubelet[1672]: I1213 14:18:45.392153 1672 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:18:45.392655 kubelet[1672]: I1213 14:18:45.392601 1672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:18:45.393275 kubelet[1672]: I1213 14:18:45.392926 1672 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:18:45.393460 kubelet[1672]: I1213 14:18:45.393437 1672 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:18:45.396803 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
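The container-manager configuration dumped a little earlier lists the kubelet's hard eviction thresholds: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%. A small Python sketch that turns the percentage thresholds into absolute values; the node capacities used here are hypothetical, only the thresholds come from the log.

# Hard eviction thresholds as logged by the kubelet's container manager.
THRESHOLDS = {
    "memory.available":  ("quantity", 100 * 1024**2),  # 100Mi in bytes
    "nodefs.available":  ("percent", 0.10),
    "nodefs.inodesFree": ("percent", 0.05),
    "imagefs.available": ("percent", 0.15),
    "imagefs.inodesFree": ("percent", 0.05),
}

# Hypothetical node capacities (not from the log), just to make the numbers concrete.
CAPACITY = {
    "nodefs.available":  50 * 1024**3,  # 50 GiB root filesystem
    "nodefs.inodesFree": 3_276_800,     # inode count
    "imagefs.available": 50 * 1024**3,
    "imagefs.inodesFree": 3_276_800,
}

for signal, (kind, value) in THRESHOLDS.items():
    if kind == "quantity":
        print(f"{signal}: evict below {value} bytes")
    else:
        absolute = int(CAPACITY[signal] * value)
        print(f"{signal}: evict below {absolute} ({value:.0%} of capacity)")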
Dec 13 14:18:45.396944 kubelet[1672]: I1213 14:18:45.396917 1672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:18:45.401184 kubelet[1672]: I1213 14:18:45.401167 1672 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:18:45.402997 kubelet[1672]: I1213 14:18:45.402972 1672 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:18:45.403085 kubelet[1672]: I1213 14:18:45.403057 1672 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:18:45.403535 kubelet[1672]: W1213 14:18:45.403480 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.403535 kubelet[1672]: E1213 14:18:45.403542 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.403722 kubelet[1672]: E1213 14:18:45.403620 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="200ms" Dec 13 14:18:45.405259 kubelet[1672]: I1213 14:18:45.405222 1672 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:18:45.405259 kubelet[1672]: I1213 14:18:45.405245 1672 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:18:45.405382 kubelet[1672]: I1213 14:18:45.405304 1672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:18:45.407393 kubelet[1672]: E1213 14:18:45.407106 1672 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.23:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.23:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c254a827d26d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:18:45.391864429 +0000 UTC m=+0.620783098,LastTimestamp:2024-12-13 14:18:45.391864429 +0000 UTC m=+0.620783098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:18:45.408927 kubelet[1672]: E1213 14:18:45.408891 1672 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:18:45.415095 kubelet[1672]: I1213 14:18:45.415039 1672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:18:45.416533 kubelet[1672]: I1213 14:18:45.416482 1672 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:18:45.416595 kubelet[1672]: I1213 14:18:45.416547 1672 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:18:45.416595 kubelet[1672]: I1213 14:18:45.416578 1672 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:18:45.416663 kubelet[1672]: E1213 14:18:45.416634 1672 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:18:45.421113 kubelet[1672]: W1213 14:18:45.421071 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.421113 kubelet[1672]: E1213 14:18:45.421113 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:45.422308 kubelet[1672]: I1213 14:18:45.422282 1672 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:18:45.422308 kubelet[1672]: I1213 14:18:45.422301 1672 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:18:45.422415 kubelet[1672]: I1213 14:18:45.422333 1672 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:18:45.502944 kubelet[1672]: I1213 14:18:45.502896 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:45.503305 kubelet[1672]: E1213 14:18:45.503276 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 13 14:18:45.517683 kubelet[1672]: E1213 14:18:45.517597 1672 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:18:45.604347 kubelet[1672]: E1213 14:18:45.604202 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="400ms" Dec 13 14:18:45.654604 kubelet[1672]: I1213 14:18:45.654549 1672 policy_none.go:49] "None policy: Start" Dec 13 14:18:45.655381 kubelet[1672]: I1213 14:18:45.655350 1672 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:18:45.655471 kubelet[1672]: I1213 14:18:45.655395 1672 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:18:45.689407 systemd[1]: Created slice kubepods.slice. Dec 13 14:18:45.693390 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:18:45.695763 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:18:45.703338 kubelet[1672]: I1213 14:18:45.703302 1672 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:18:45.703545 kubelet[1672]: I1213 14:18:45.703485 1672 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:18:45.704155 kubelet[1672]: I1213 14:18:45.703623 1672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:18:45.704655 kubelet[1672]: I1213 14:18:45.704627 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:45.704968 kubelet[1672]: E1213 14:18:45.704942 1672 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:18:45.705044 kubelet[1672]: E1213 14:18:45.704980 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 13 14:18:45.718252 kubelet[1672]: I1213 14:18:45.718227 1672 topology_manager.go:215] "Topology Admit Handler" podUID="54953f01e52bd9bb7587f380e3fcad78" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:18:45.718975 kubelet[1672]: I1213 14:18:45.718955 1672 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:18:45.719716 kubelet[1672]: I1213 14:18:45.719681 1672 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:18:45.723370 systemd[1]: Created slice kubepods-burstable-pod54953f01e52bd9bb7587f380e3fcad78.slice. Dec 13 14:18:45.731549 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 14:18:45.738081 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. 
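The three "Topology Admit Handler" entries and the slice creations right after them show each static pod's UID reappearing in its cgroup slice name, kubepods-burstable-pod<uid>.slice, consistent with the SystemdCgroup:true runc option in the containerd configuration earlier. A tiny Python sketch of that naming, using the UIDs from the log; systemd's general unit-name escaping rules are not reproduced here.

# Pod UIDs exactly as admitted in the log above.
PODS = {
    "kube-apiserver-localhost": "54953f01e52bd9bb7587f380e3fcad78",
    "kube-controller-manager-localhost": "8a50003978138b3ab9890682eff4eae8",
    "kube-scheduler-localhost": "b107a98bcf27297d642d248711a3fc70",
}

def burstable_slice(pod_uid: str) -> str:
    """Slice name as created above for Burstable-QoS pods under the systemd cgroup driver."""
    return f"kubepods-burstable-pod{pod_uid}.slice"

for name, uid in PODS.items():
    print(f"{name}: {burstable_slice(uid)}")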
Dec 13 14:18:45.804488 kubelet[1672]: I1213 14:18:45.804416 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:45.804488 kubelet[1672]: I1213 14:18:45.804466 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:45.804488 kubelet[1672]: I1213 14:18:45.804500 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:45.804783 kubelet[1672]: I1213 14:18:45.804559 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:45.804783 kubelet[1672]: I1213 14:18:45.804583 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:45.804783 kubelet[1672]: I1213 14:18:45.804655 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:45.804783 kubelet[1672]: I1213 14:18:45.804695 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:45.804783 kubelet[1672]: I1213 14:18:45.804746 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:18:45.804959 kubelet[1672]: I1213 14:18:45.804770 1672 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:46.005658 kubelet[1672]: E1213 14:18:46.005585 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="800ms" Dec 13 14:18:46.030092 kubelet[1672]: E1213 14:18:46.030020 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:46.031088 env[1205]: time="2024-12-13T14:18:46.031008342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54953f01e52bd9bb7587f380e3fcad78,Namespace:kube-system,Attempt:0,}" Dec 13 14:18:46.037255 kubelet[1672]: E1213 14:18:46.037199 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:46.037813 env[1205]: time="2024-12-13T14:18:46.037759295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 14:18:46.040123 kubelet[1672]: E1213 14:18:46.040090 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:46.040591 env[1205]: time="2024-12-13T14:18:46.040546077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 14:18:46.107962 kubelet[1672]: I1213 14:18:46.107901 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:46.108407 kubelet[1672]: E1213 14:18:46.108360 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 13 14:18:46.496925 kubelet[1672]: W1213 14:18:46.496835 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.496925 kubelet[1672]: E1213 14:18:46.496917 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.23:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.573974 kubelet[1672]: W1213 14:18:46.573910 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.573974 kubelet[1672]: E1213 14:18:46.573971 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.23:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.713689 kubelet[1672]: W1213 14:18:46.713609 1672 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.713689 kubelet[1672]: E1213 14:18:46.713691 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.23:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.806940 kubelet[1672]: E1213 14:18:46.806779 1672 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.23:6443: connect: connection refused" interval="1.6s" Dec 13 14:18:46.862987 kubelet[1672]: W1213 14:18:46.862885 1672 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.862987 kubelet[1672]: E1213 14:18:46.862984 1672 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.23:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:46.910741 kubelet[1672]: I1213 14:18:46.910677 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:46.911188 kubelet[1672]: E1213 14:18:46.911147 1672 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.23:6443/api/v1/nodes\": dial tcp 10.0.0.23:6443: connect: connection refused" node="localhost" Dec 13 14:18:47.399107 kubelet[1672]: E1213 14:18:47.399035 1672 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.23:6443: connect: connection refused Dec 13 14:18:47.427068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2949822965.mount: Deactivated successfully. 
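
[Annotation] The repeated "dial tcp 10.0.0.23:6443: connect: connection refused" failures above (lease creation, node registration, the CSIDriver/RuntimeClass/Node reflectors, and the certificate signing request) all come from the same condition: the kubelet is up before the API server it is bootstrapping, which it is itself about to start as a static pod. A minimal Go sketch of the same reachability check, using the 10.0.0.23:6443 endpoint from the log; this is an illustration, not kubelet code.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the kubelet keeps retrying in the entries above.
        addr := "10.0.0.23:6443"
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            // While the static kube-apiserver pod is still starting this prints
            // the same "connect: connection refused" seen in the kubelet log.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }
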
Dec 13 14:18:47.583390 env[1205]: time="2024-12-13T14:18:47.583310053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.586846 env[1205]: time="2024-12-13T14:18:47.586765713Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.589273 env[1205]: time="2024-12-13T14:18:47.589188548Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.590280 env[1205]: time="2024-12-13T14:18:47.590210170Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.592137 env[1205]: time="2024-12-13T14:18:47.592075095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.593636 env[1205]: time="2024-12-13T14:18:47.593590817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.595163 env[1205]: time="2024-12-13T14:18:47.595125434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.596544 env[1205]: time="2024-12-13T14:18:47.596515156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.600140 env[1205]: time="2024-12-13T14:18:47.600093809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.601599 env[1205]: time="2024-12-13T14:18:47.601544697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.602363 env[1205]: time="2024-12-13T14:18:47.602313148Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.605793 env[1205]: time="2024-12-13T14:18:47.605738590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.628167 env[1205]: time="2024-12-13T14:18:47.628060222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:18:47.628167 env[1205]: time="2024-12-13T14:18:47.628119214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:18:47.628167 env[1205]: time="2024-12-13T14:18:47.628128872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:18:47.628799 env[1205]: time="2024-12-13T14:18:47.628665523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/23c82a6cec75e70ed0e3d7641eb2e475ac5b2812ed77b8e2799e2e69e8772f68 pid=1711 runtime=io.containerd.runc.v2 Dec 13 14:18:47.638402 env[1205]: time="2024-12-13T14:18:47.638174339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:18:47.638402 env[1205]: time="2024-12-13T14:18:47.638226407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:18:47.638402 env[1205]: time="2024-12-13T14:18:47.638237278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:18:47.638402 env[1205]: time="2024-12-13T14:18:47.638363978Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81d7a78f6497300d0d10fecf44dac30fc4e54c50e7fa69a4e9cce0ff3857eb9d pid=1729 runtime=io.containerd.runc.v2 Dec 13 14:18:47.642730 env[1205]: time="2024-12-13T14:18:47.641562989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:18:47.642730 env[1205]: time="2024-12-13T14:18:47.641615430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:18:47.642730 env[1205]: time="2024-12-13T14:18:47.641625619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:18:47.642730 env[1205]: time="2024-12-13T14:18:47.641799580Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c92820367897e926b8c1bbc3421f9c8e39c95dd77dd9bd36ca6ebcaf5cdce2c4 pid=1754 runtime=io.containerd.runc.v2 Dec 13 14:18:47.652080 systemd[1]: Started cri-containerd-23c82a6cec75e70ed0e3d7641eb2e475ac5b2812ed77b8e2799e2e69e8772f68.scope. Dec 13 14:18:47.754954 systemd[1]: Started cri-containerd-81d7a78f6497300d0d10fecf44dac30fc4e54c50e7fa69a4e9cce0ff3857eb9d.scope. Dec 13 14:18:47.777202 systemd[1]: Started cri-containerd-c92820367897e926b8c1bbc3421f9c8e39c95dd77dd9bd36ca6ebcaf5cdce2c4.scope. 
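
[Annotation] The ImageCreate/ImageUpdate events and the runc v2 shims started above all live in containerd's "k8s.io" namespace, which the CRI plugin uses for kubelet-managed sandboxes (note the namespace=k8s.io field in the "starting signal loop" lines). A sketch of inspecting that namespace with the containerd Go client, assuming the default /run/containerd/containerd.sock socket; the image names are the ones appearing in the events above.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI sandboxes, tasks and images are kept under the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            fmt.Println(img.Name()) // e.g. registry.k8s.io/pause:3.6
        }
    }
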
Dec 13 14:18:47.839789 env[1205]: time="2024-12-13T14:18:47.839733984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"23c82a6cec75e70ed0e3d7641eb2e475ac5b2812ed77b8e2799e2e69e8772f68\"" Dec 13 14:18:47.842266 kubelet[1672]: E1213 14:18:47.841347 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:47.844858 env[1205]: time="2024-12-13T14:18:47.844354527Z" level=info msg="CreateContainer within sandbox \"23c82a6cec75e70ed0e3d7641eb2e475ac5b2812ed77b8e2799e2e69e8772f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:18:47.854587 env[1205]: time="2024-12-13T14:18:47.854531944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:54953f01e52bd9bb7587f380e3fcad78,Namespace:kube-system,Attempt:0,} returns sandbox id \"81d7a78f6497300d0d10fecf44dac30fc4e54c50e7fa69a4e9cce0ff3857eb9d\"" Dec 13 14:18:47.855487 kubelet[1672]: E1213 14:18:47.855261 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:47.857545 env[1205]: time="2024-12-13T14:18:47.857519103Z" level=info msg="CreateContainer within sandbox \"81d7a78f6497300d0d10fecf44dac30fc4e54c50e7fa69a4e9cce0ff3857eb9d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:18:47.870607 env[1205]: time="2024-12-13T14:18:47.869528893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c92820367897e926b8c1bbc3421f9c8e39c95dd77dd9bd36ca6ebcaf5cdce2c4\"" Dec 13 14:18:47.870793 kubelet[1672]: E1213 14:18:47.870269 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:47.872680 env[1205]: time="2024-12-13T14:18:47.872640328Z" level=info msg="CreateContainer within sandbox \"c92820367897e926b8c1bbc3421f9c8e39c95dd77dd9bd36ca6ebcaf5cdce2c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:18:47.932592 env[1205]: time="2024-12-13T14:18:47.932444673Z" level=info msg="CreateContainer within sandbox \"23c82a6cec75e70ed0e3d7641eb2e475ac5b2812ed77b8e2799e2e69e8772f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1104e2ccbcaaddf4bf013cc854dbbe41e265dec57e6e80c7e957d1dbb971440e\"" Dec 13 14:18:47.933468 env[1205]: time="2024-12-13T14:18:47.933416120Z" level=info msg="StartContainer for \"1104e2ccbcaaddf4bf013cc854dbbe41e265dec57e6e80c7e957d1dbb971440e\"" Dec 13 14:18:47.939036 env[1205]: time="2024-12-13T14:18:47.938966010Z" level=info msg="CreateContainer within sandbox \"81d7a78f6497300d0d10fecf44dac30fc4e54c50e7fa69a4e9cce0ff3857eb9d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3aba515ffd4489a27ab5a20e21b4782f0ae66d6852e5bf448d82bf0a3bf2c8a4\"" Dec 13 14:18:47.939567 env[1205]: time="2024-12-13T14:18:47.939535824Z" level=info msg="StartContainer for \"3aba515ffd4489a27ab5a20e21b4782f0ae66d6852e5bf448d82bf0a3bf2c8a4\"" Dec 13 14:18:47.942654 env[1205]: time="2024-12-13T14:18:47.942576103Z" level=info 
msg="CreateContainer within sandbox \"c92820367897e926b8c1bbc3421f9c8e39c95dd77dd9bd36ca6ebcaf5cdce2c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3171454f70e9ee045388af9e2bb9f50ddb3432e1e4e82e39a25ca36d400b74b2\"" Dec 13 14:18:47.943302 env[1205]: time="2024-12-13T14:18:47.943277436Z" level=info msg="StartContainer for \"3171454f70e9ee045388af9e2bb9f50ddb3432e1e4e82e39a25ca36d400b74b2\"" Dec 13 14:18:47.951114 systemd[1]: Started cri-containerd-1104e2ccbcaaddf4bf013cc854dbbe41e265dec57e6e80c7e957d1dbb971440e.scope. Dec 13 14:18:47.966578 systemd[1]: Started cri-containerd-3aba515ffd4489a27ab5a20e21b4782f0ae66d6852e5bf448d82bf0a3bf2c8a4.scope. Dec 13 14:18:47.972728 systemd[1]: Started cri-containerd-3171454f70e9ee045388af9e2bb9f50ddb3432e1e4e82e39a25ca36d400b74b2.scope. Dec 13 14:18:48.043963 env[1205]: time="2024-12-13T14:18:48.043903893Z" level=info msg="StartContainer for \"1104e2ccbcaaddf4bf013cc854dbbe41e265dec57e6e80c7e957d1dbb971440e\" returns successfully" Dec 13 14:18:48.055326 env[1205]: time="2024-12-13T14:18:48.055232271Z" level=info msg="StartContainer for \"3aba515ffd4489a27ab5a20e21b4782f0ae66d6852e5bf448d82bf0a3bf2c8a4\" returns successfully" Dec 13 14:18:48.066438 env[1205]: time="2024-12-13T14:18:48.066386169Z" level=info msg="StartContainer for \"3171454f70e9ee045388af9e2bb9f50ddb3432e1e4e82e39a25ca36d400b74b2\" returns successfully" Dec 13 14:18:48.430998 kubelet[1672]: E1213 14:18:48.430966 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:48.440297 kubelet[1672]: E1213 14:18:48.440269 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:48.442371 kubelet[1672]: E1213 14:18:48.442344 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:48.512719 kubelet[1672]: I1213 14:18:48.512673 1672 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:49.444737 kubelet[1672]: E1213 14:18:49.444688 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:49.962804 kubelet[1672]: E1213 14:18:49.962745 1672 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:18:50.146986 kubelet[1672]: I1213 14:18:50.146916 1672 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:18:50.382457 kubelet[1672]: I1213 14:18:50.382312 1672 apiserver.go:52] "Watching apiserver" Dec 13 14:18:50.403525 kubelet[1672]: I1213 14:18:50.403481 1672 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:18:50.472947 kubelet[1672]: E1213 14:18:50.472894 1672 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:50.473423 kubelet[1672]: E1213 14:18:50.473404 1672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:52.210116 systemd[1]: Reloading. Dec 13 14:18:52.290280 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T14:18:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:52.290314 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T14:18:52Z" level=info msg="torcx already run" Dec 13 14:18:52.367281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:52.367299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:52.386736 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:52.482872 kubelet[1672]: I1213 14:18:52.482734 1672 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:18:52.482872 kubelet[1672]: E1213 14:18:52.482654 1672 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.1810c254a827d26d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:18:45.391864429 +0000 UTC m=+0.620783098,LastTimestamp:2024-12-13 14:18:45.391864429 +0000 UTC m=+0.620783098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:18:52.482813 systemd[1]: Stopping kubelet.service... Dec 13 14:18:52.500192 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:18:52.500409 systemd[1]: Stopped kubelet.service. Dec 13 14:18:52.500472 systemd[1]: kubelet.service: Consumed 1.029s CPU time. Dec 13 14:18:52.502355 systemd[1]: Starting kubelet.service... Dec 13 14:18:52.601428 systemd[1]: Started kubelet.service. Dec 13 14:18:52.658878 kubelet[2018]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:18:52.658878 kubelet[2018]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:18:52.658878 kubelet[2018]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
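
[Annotation] The recurring dns.go:153 "Nameserver limits exceeded" warning means the node's resolv.conf lists more nameservers than the kubelet will pass to pods; the applied line in the log keeps only three (1.1.1.1 1.0.0.1 8.8.8.8). A rough Go sketch of that truncation over a resolv.conf-style file; the three-entry cap matches the applied line above, but the parsing here is illustrative, not the kubelet's own code.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the three-entry line the kubelet applied above.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded, omitting %v\n", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }
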
Dec 13 14:18:52.659407 kubelet[2018]: I1213 14:18:52.658903 2018 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:18:52.663956 kubelet[2018]: I1213 14:18:52.663911 2018 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:18:52.663956 kubelet[2018]: I1213 14:18:52.663949 2018 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:18:52.664450 kubelet[2018]: I1213 14:18:52.664425 2018 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:18:52.668373 kubelet[2018]: I1213 14:18:52.668339 2018 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:18:52.669742 kubelet[2018]: I1213 14:18:52.669674 2018 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:18:52.677415 kubelet[2018]: I1213 14:18:52.677368 2018 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:18:52.677671 kubelet[2018]: I1213 14:18:52.677629 2018 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:18:52.677888 kubelet[2018]: I1213 14:18:52.677665 2018 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:18:52.677992 kubelet[2018]: I1213 14:18:52.677902 2018 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:18:52.677992 kubelet[2018]: I1213 14:18:52.677917 2018 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:18:52.677992 kubelet[2018]: I1213 14:18:52.677972 2018 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:18:52.678082 kubelet[2018]: I1213 14:18:52.678070 2018 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:18:52.678111 kubelet[2018]: I1213 14:18:52.678088 2018 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Dec 13 14:18:52.678137 kubelet[2018]: I1213 14:18:52.678113 2018 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:18:52.678137 kubelet[2018]: I1213 14:18:52.678133 2018 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:18:52.722255 kubelet[2018]: I1213 14:18:52.681514 2018 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:18:52.722255 kubelet[2018]: I1213 14:18:52.681768 2018 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:18:52.722255 kubelet[2018]: I1213 14:18:52.682192 2018 server.go:1264] "Started kubelet" Dec 13 14:18:52.725863 kubelet[2018]: I1213 14:18:52.725823 2018 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:18:52.729406 kubelet[2018]: I1213 14:18:52.728999 2018 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:18:52.730010 kubelet[2018]: I1213 14:18:52.729955 2018 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:18:52.730242 kubelet[2018]: I1213 14:18:52.730221 2018 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:18:52.734156 kubelet[2018]: I1213 14:18:52.731838 2018 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:18:52.734156 kubelet[2018]: I1213 14:18:52.732321 2018 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:18:52.734156 kubelet[2018]: I1213 14:18:52.732486 2018 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:18:52.738136 kubelet[2018]: I1213 14:18:52.738115 2018 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:18:52.738715 kubelet[2018]: I1213 14:18:52.738674 2018 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:18:52.740265 kubelet[2018]: I1213 14:18:52.740244 2018 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:18:52.742926 kubelet[2018]: E1213 14:18:52.742887 2018 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:18:52.743891 kubelet[2018]: I1213 14:18:52.743855 2018 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:18:52.750988 kubelet[2018]: I1213 14:18:52.750958 2018 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:18:52.752170 kubelet[2018]: I1213 14:18:52.752155 2018 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:18:52.752281 kubelet[2018]: I1213 14:18:52.752265 2018 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:18:52.752370 kubelet[2018]: I1213 14:18:52.752355 2018 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:18:52.752492 kubelet[2018]: E1213 14:18:52.752471 2018 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:18:52.775889 kubelet[2018]: I1213 14:18:52.775842 2018 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:18:52.776209 kubelet[2018]: I1213 14:18:52.776172 2018 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:18:52.776333 kubelet[2018]: I1213 14:18:52.776317 2018 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:18:52.776583 kubelet[2018]: I1213 14:18:52.776566 2018 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:18:52.776685 kubelet[2018]: I1213 14:18:52.776652 2018 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:18:52.776783 kubelet[2018]: I1213 14:18:52.776769 2018 policy_none.go:49] "None policy: Start" Dec 13 14:18:52.777590 kubelet[2018]: I1213 14:18:52.777560 2018 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:18:52.777590 kubelet[2018]: I1213 14:18:52.777587 2018 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:18:52.777813 kubelet[2018]: I1213 14:18:52.777799 2018 state_mem.go:75] "Updated machine memory state" Dec 13 14:18:52.784295 kubelet[2018]: I1213 14:18:52.784279 2018 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:18:52.784609 kubelet[2018]: I1213 14:18:52.784578 2018 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:18:52.784808 kubelet[2018]: I1213 14:18:52.784796 2018 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:18:52.836488 kubelet[2018]: I1213 14:18:52.836440 2018 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:18:52.843915 kubelet[2018]: I1213 14:18:52.843877 2018 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:18:52.844056 kubelet[2018]: I1213 14:18:52.843968 2018 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:18:52.853688 kubelet[2018]: I1213 14:18:52.853641 2018 topology_manager.go:215] "Topology Admit Handler" podUID="54953f01e52bd9bb7587f380e3fcad78" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:18:52.853871 kubelet[2018]: I1213 14:18:52.853785 2018 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:18:52.853871 kubelet[2018]: I1213 14:18:52.853839 2018 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:18:53.033998 kubelet[2018]: I1213 14:18:53.033817 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:53.033998 
kubelet[2018]: I1213 14:18:53.033867 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:53.033998 kubelet[2018]: I1213 14:18:53.033897 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:53.033998 kubelet[2018]: I1213 14:18:53.033914 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:53.033998 kubelet[2018]: I1213 14:18:53.033979 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:53.034333 kubelet[2018]: I1213 14:18:53.034019 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54953f01e52bd9bb7587f380e3fcad78-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"54953f01e52bd9bb7587f380e3fcad78\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:53.034333 kubelet[2018]: I1213 14:18:53.034050 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:53.034333 kubelet[2018]: I1213 14:18:53.034071 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:18:53.034333 kubelet[2018]: I1213 14:18:53.034095 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:18:53.160280 kubelet[2018]: E1213 14:18:53.160206 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.160536 kubelet[2018]: E1213 14:18:53.160495 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.160591 kubelet[2018]: E1213 14:18:53.160550 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.679233 kubelet[2018]: I1213 14:18:53.679195 2018 apiserver.go:52] "Watching apiserver" Dec 13 14:18:53.732879 kubelet[2018]: I1213 14:18:53.732821 2018 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:18:53.765220 kubelet[2018]: E1213 14:18:53.765184 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.765220 kubelet[2018]: E1213 14:18:53.765201 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.874218 kubelet[2018]: E1213 14:18:53.874166 2018 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:18:53.874935 kubelet[2018]: I1213 14:18:53.874857 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.874812672 podStartE2EDuration="1.874812672s" podCreationTimestamp="2024-12-13 14:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:18:53.873965417 +0000 UTC m=+1.264801041" watchObservedRunningTime="2024-12-13 14:18:53.874812672 +0000 UTC m=+1.265648296" Dec 13 14:18:53.875202 kubelet[2018]: E1213 14:18:53.875149 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:53.988865 kubelet[2018]: I1213 14:18:53.988616 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.988592197 podStartE2EDuration="1.988592197s" podCreationTimestamp="2024-12-13 14:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:18:53.973498404 +0000 UTC m=+1.364334018" watchObservedRunningTime="2024-12-13 14:18:53.988592197 +0000 UTC m=+1.379427821" Dec 13 14:18:53.997943 kubelet[2018]: I1213 14:18:53.997881 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.997859006 podStartE2EDuration="1.997859006s" podCreationTimestamp="2024-12-13 14:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:18:53.989146156 +0000 UTC m=+1.379981780" watchObservedRunningTime="2024-12-13 14:18:53.997859006 +0000 UTC m=+1.388694630" Dec 13 14:18:54.767066 kubelet[2018]: E1213 14:18:54.767028 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:55.012751 sudo[1305]: pam_unix(sudo:session): session closed for user root Dec 13 
14:18:55.014857 sshd[1302]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:55.017656 systemd[1]: sshd@4-10.0.0.23:22-10.0.0.1:47202.service: Deactivated successfully. Dec 13 14:18:55.018380 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:18:55.018524 systemd[1]: session-5.scope: Consumed 4.043s CPU time. Dec 13 14:18:55.019052 systemd-logind[1192]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:18:55.019876 systemd-logind[1192]: Removed session 5. Dec 13 14:18:55.768719 kubelet[2018]: E1213 14:18:55.768635 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:18:59.005969 update_engine[1197]: I1213 14:18:59.005887 1197 update_attempter.cc:509] Updating boot flags... Dec 13 14:19:02.022013 kubelet[2018]: E1213 14:19:02.021966 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:02.779254 kubelet[2018]: E1213 14:19:02.779222 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:02.879165 kubelet[2018]: E1213 14:19:02.879097 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:03.780224 kubelet[2018]: E1213 14:19:03.780186 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:04.942119 kubelet[2018]: E1213 14:19:04.942048 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:05.813211 kubelet[2018]: I1213 14:19:05.813166 2018 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:19:05.813552 env[1205]: time="2024-12-13T14:19:05.813498338Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:19:05.813822 kubelet[2018]: I1213 14:19:05.813718 2018 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:19:06.506399 kubelet[2018]: I1213 14:19:06.506335 2018 topology_manager.go:215] "Topology Admit Handler" podUID="3d56d610-8c1e-4831-8b23-1ca46269d168" podNamespace="kube-system" podName="kube-proxy-blcsh" Dec 13 14:19:06.513719 systemd[1]: Created slice kubepods-besteffort-pod3d56d610_8c1e_4831_8b23_1ca46269d168.slice. Dec 13 14:19:06.542609 kubelet[2018]: I1213 14:19:06.542565 2018 topology_manager.go:215] "Topology Admit Handler" podUID="1a8d2de7-04a9-4971-8fdc-44416b3055f0" podNamespace="kube-flannel" podName="kube-flannel-ds-ptqr8" Dec 13 14:19:06.549104 systemd[1]: Created slice kubepods-burstable-pod1a8d2de7_04a9_4971_8fdc_44416b3055f0.slice. 
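
[Annotation] The kuberuntime_manager line above pushes PodCIDR 192.168.0.0/24 to the runtime through the CRI, and kubelet_network.go records the same update; the flannel DaemonSet pod admitted right after will carve pod addresses out of that range. A small Go sketch validating the CIDR value taken from the log; purely an illustration of the address range involved.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // PodCIDR pushed through the CRI runtime config in the entry above.
        podCIDR := "192.168.0.0/24"

        ip, ipnet, err := net.ParseCIDR(podCIDR)
        if err != nil {
            log.Fatal(err)
        }
        ones, bits := ipnet.Mask.Size()
        hostBits := bits - ones
        // A /24 covers 256 addresses for this node's pods.
        fmt.Printf("network %s, base address %s, %d addresses (2^%d)\n",
            ipnet, ip, 1<<hostBits, hostBits)
    }
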
Dec 13 14:19:06.615194 kubelet[2018]: I1213 14:19:06.615100 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d56d610-8c1e-4831-8b23-1ca46269d168-xtables-lock\") pod \"kube-proxy-blcsh\" (UID: \"3d56d610-8c1e-4831-8b23-1ca46269d168\") " pod="kube-system/kube-proxy-blcsh" Dec 13 14:19:06.615194 kubelet[2018]: I1213 14:19:06.615174 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1a8d2de7-04a9-4971-8fdc-44416b3055f0-run\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.615194 kubelet[2018]: I1213 14:19:06.615202 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/1a8d2de7-04a9-4971-8fdc-44416b3055f0-flannel-cfg\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.615519 kubelet[2018]: I1213 14:19:06.615225 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d56d610-8c1e-4831-8b23-1ca46269d168-kube-proxy\") pod \"kube-proxy-blcsh\" (UID: \"3d56d610-8c1e-4831-8b23-1ca46269d168\") " pod="kube-system/kube-proxy-blcsh" Dec 13 14:19:06.615519 kubelet[2018]: I1213 14:19:06.615248 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6xm7\" (UniqueName: \"kubernetes.io/projected/3d56d610-8c1e-4831-8b23-1ca46269d168-kube-api-access-h6xm7\") pod \"kube-proxy-blcsh\" (UID: \"3d56d610-8c1e-4831-8b23-1ca46269d168\") " pod="kube-system/kube-proxy-blcsh" Dec 13 14:19:06.615519 kubelet[2018]: I1213 14:19:06.615305 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/1a8d2de7-04a9-4971-8fdc-44416b3055f0-cni\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.615519 kubelet[2018]: I1213 14:19:06.615361 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/1a8d2de7-04a9-4971-8fdc-44416b3055f0-cni-plugin\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.615519 kubelet[2018]: I1213 14:19:06.615378 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a8d2de7-04a9-4971-8fdc-44416b3055f0-xtables-lock\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.615687 kubelet[2018]: I1213 14:19:06.615438 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d56d610-8c1e-4831-8b23-1ca46269d168-lib-modules\") pod \"kube-proxy-blcsh\" (UID: \"3d56d610-8c1e-4831-8b23-1ca46269d168\") " pod="kube-system/kube-proxy-blcsh" Dec 13 14:19:06.716028 kubelet[2018]: I1213 14:19:06.715970 2018 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlfs\" (UniqueName: \"kubernetes.io/projected/1a8d2de7-04a9-4971-8fdc-44416b3055f0-kube-api-access-zzlfs\") pod \"kube-flannel-ds-ptqr8\" (UID: \"1a8d2de7-04a9-4971-8fdc-44416b3055f0\") " pod="kube-flannel/kube-flannel-ds-ptqr8" Dec 13 14:19:06.821729 kubelet[2018]: E1213 14:19:06.821543 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.822350 env[1205]: time="2024-12-13T14:19:06.822294167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blcsh,Uid:3d56d610-8c1e-4831-8b23-1ca46269d168,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:06.849863 env[1205]: time="2024-12-13T14:19:06.849742386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:06.849863 env[1205]: time="2024-12-13T14:19:06.849807720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:06.850136 env[1205]: time="2024-12-13T14:19:06.849821305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:06.850136 env[1205]: time="2024-12-13T14:19:06.850045628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47702dde53b25fab07a3bdc2619a2dc1aece73bf644eb20ac7ed0d231d456dbb pid=2108 runtime=io.containerd.runc.v2 Dec 13 14:19:06.855729 kubelet[2018]: E1213 14:19:06.852946 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.855884 env[1205]: time="2024-12-13T14:19:06.853756613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptqr8,Uid:1a8d2de7-04a9-4971-8fdc-44416b3055f0,Namespace:kube-flannel,Attempt:0,}" Dec 13 14:19:06.870963 systemd[1]: Started cri-containerd-47702dde53b25fab07a3bdc2619a2dc1aece73bf644eb20ac7ed0d231d456dbb.scope. Dec 13 14:19:06.878760 env[1205]: time="2024-12-13T14:19:06.878647222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:06.878760 env[1205]: time="2024-12-13T14:19:06.878717835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:06.878760 env[1205]: time="2024-12-13T14:19:06.878730960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:06.879478 env[1205]: time="2024-12-13T14:19:06.879430238Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132 pid=2135 runtime=io.containerd.runc.v2 Dec 13 14:19:06.902032 systemd[1]: Started cri-containerd-0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132.scope. 
Dec 13 14:19:06.928753 env[1205]: time="2024-12-13T14:19:06.928472874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blcsh,Uid:3d56d610-8c1e-4831-8b23-1ca46269d168,Namespace:kube-system,Attempt:0,} returns sandbox id \"47702dde53b25fab07a3bdc2619a2dc1aece73bf644eb20ac7ed0d231d456dbb\"" Dec 13 14:19:06.929503 kubelet[2018]: E1213 14:19:06.929472 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.932391 env[1205]: time="2024-12-13T14:19:06.932336907Z" level=info msg="CreateContainer within sandbox \"47702dde53b25fab07a3bdc2619a2dc1aece73bf644eb20ac7ed0d231d456dbb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:19:06.951177 env[1205]: time="2024-12-13T14:19:06.951096459Z" level=info msg="CreateContainer within sandbox \"47702dde53b25fab07a3bdc2619a2dc1aece73bf644eb20ac7ed0d231d456dbb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e925dab004b2e3dd88b37c9a9ff8d66109226682ee11308df6625a73429180cd\"" Dec 13 14:19:06.952212 env[1205]: time="2024-12-13T14:19:06.952184420Z" level=info msg="StartContainer for \"e925dab004b2e3dd88b37c9a9ff8d66109226682ee11308df6625a73429180cd\"" Dec 13 14:19:06.960946 env[1205]: time="2024-12-13T14:19:06.960880993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ptqr8,Uid:1a8d2de7-04a9-4971-8fdc-44416b3055f0,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\"" Dec 13 14:19:06.961781 kubelet[2018]: E1213 14:19:06.961741 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.963299 env[1205]: time="2024-12-13T14:19:06.963258274Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 14:19:06.971786 systemd[1]: Started cri-containerd-e925dab004b2e3dd88b37c9a9ff8d66109226682ee11308df6625a73429180cd.scope. Dec 13 14:19:07.007477 env[1205]: time="2024-12-13T14:19:07.007406951Z" level=info msg="StartContainer for \"e925dab004b2e3dd88b37c9a9ff8d66109226682ee11308df6625a73429180cd\" returns successfully" Dec 13 14:19:07.790918 kubelet[2018]: E1213 14:19:07.790824 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:07.805292 kubelet[2018]: I1213 14:19:07.805092 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-blcsh" podStartSLOduration=1.805066406 podStartE2EDuration="1.805066406s" podCreationTimestamp="2024-12-13 14:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:07.804836283 +0000 UTC m=+15.195671937" watchObservedRunningTime="2024-12-13 14:19:07.805066406 +0000 UTC m=+15.195902030" Dec 13 14:19:09.046341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3542211562.mount: Deactivated successfully. 
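
[Annotation] The pod_startup_latency_tracker entry above derives podStartE2EDuration from the gap between podCreationTimestamp and the time the pod was observed running. A Go sketch of the same arithmetic using the two timestamps printed for kube-proxy-blcsh; the tracker takes its own clock readings, so this only approximately reproduces the reported 1.805s.

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2024-12-13 14:19:06 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        observedRunning, err := time.Parse(layout, "2024-12-13 14:19:07.804836283 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        // Close to the 1.805066406s the tracker reports; the small difference is
        // the tracker's own clock reading versus the logged observation time.
        fmt.Println("startup duration:", observedRunning.Sub(created))
    }
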
Dec 13 14:19:09.280584 env[1205]: time="2024-12-13T14:19:09.280420753Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:09.284692 env[1205]: time="2024-12-13T14:19:09.284646524Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:09.287153 env[1205]: time="2024-12-13T14:19:09.287049932Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:09.289525 env[1205]: time="2024-12-13T14:19:09.289489587Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:09.290062 env[1205]: time="2024-12-13T14:19:09.290002945Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 13 14:19:09.295483 env[1205]: time="2024-12-13T14:19:09.294827373Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 13 14:19:09.314201 env[1205]: time="2024-12-13T14:19:09.313992657Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24\"" Dec 13 14:19:09.314929 env[1205]: time="2024-12-13T14:19:09.314853599Z" level=info msg="StartContainer for \"b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24\"" Dec 13 14:19:09.338463 systemd[1]: Started cri-containerd-b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24.scope. Dec 13 14:19:09.368745 env[1205]: time="2024-12-13T14:19:09.368662506Z" level=info msg="StartContainer for \"b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24\" returns successfully" Dec 13 14:19:09.369905 systemd[1]: cri-containerd-b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24.scope: Deactivated successfully. 
Dec 13 14:19:09.427506 env[1205]: time="2024-12-13T14:19:09.427431977Z" level=info msg="shim disconnected" id=b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24 Dec 13 14:19:09.427506 env[1205]: time="2024-12-13T14:19:09.427503011Z" level=warning msg="cleaning up after shim disconnected" id=b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24 namespace=k8s.io Dec 13 14:19:09.427506 env[1205]: time="2024-12-13T14:19:09.427516957Z" level=info msg="cleaning up dead shim" Dec 13 14:19:09.434932 env[1205]: time="2024-12-13T14:19:09.434860812Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2385 runtime=io.containerd.runc.v2\n" Dec 13 14:19:09.795133 kubelet[2018]: E1213 14:19:09.795071 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:09.796168 env[1205]: time="2024-12-13T14:19:09.796131545Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 13 14:19:09.959935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b225dd50989d562832ba7649c3c6703dbb399d963e3bfdd73db7e9c319a38a24-rootfs.mount: Deactivated successfully. Dec 13 14:19:11.790915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4294781887.mount: Deactivated successfully. Dec 13 14:19:12.904351 env[1205]: time="2024-12-13T14:19:12.904262011Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:12.974758 env[1205]: time="2024-12-13T14:19:12.974659060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:12.988209 env[1205]: time="2024-12-13T14:19:12.988159069Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:13.025451 env[1205]: time="2024-12-13T14:19:13.025355197Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:13.026501 env[1205]: time="2024-12-13T14:19:13.026432064Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 13 14:19:13.028956 env[1205]: time="2024-12-13T14:19:13.028921651Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:19:13.305929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967985893.mount: Deactivated successfully. 
Dec 13 14:19:13.506518 env[1205]: time="2024-12-13T14:19:13.506444940Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250\"" Dec 13 14:19:13.506905 env[1205]: time="2024-12-13T14:19:13.506842208Z" level=info msg="StartContainer for \"a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250\"" Dec 13 14:19:13.526899 systemd[1]: Started cri-containerd-a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250.scope. Dec 13 14:19:13.550306 systemd[1]: cri-containerd-a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250.scope: Deactivated successfully. Dec 13 14:19:13.596444 kubelet[2018]: I1213 14:19:13.596014 2018 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:19:13.632925 env[1205]: time="2024-12-13T14:19:13.632339908Z" level=info msg="StartContainer for \"a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250\" returns successfully" Dec 13 14:19:13.653010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250-rootfs.mount: Deactivated successfully. Dec 13 14:19:13.809502 kubelet[2018]: E1213 14:19:13.809434 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.310681 kubelet[2018]: I1213 14:19:14.005452 2018 topology_manager.go:215] "Topology Admit Handler" podUID="1eaff0c1-848a-41bd-88b2-fe696d34e896" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6hznf" Dec 13 14:19:14.310681 kubelet[2018]: I1213 14:19:14.007661 2018 topology_manager.go:215] "Topology Admit Handler" podUID="290906fb-d721-442d-943d-307f76fc4c18" podNamespace="kube-system" podName="coredns-7db6d8ff4d-s59p8" Dec 13 14:19:14.012499 systemd[1]: Created slice kubepods-burstable-pod1eaff0c1_848a_41bd_88b2_fe696d34e896.slice. Dec 13 14:19:14.016287 systemd[1]: Created slice kubepods-burstable-pod290906fb_d721_442d_943d_307f76fc4c18.slice. 
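
[Annotation] The kubelet_node_status.go:497 line above ("Fast updating node status as it just became ready") is what allows the two coredns pods to be admitted to the node immediately afterwards. A sketch of checking that readiness from the API side with client-go; the kubeconfig path is an assumption, and the node name "localhost" is the one used throughout this log.

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; inside a pod rest.InClusterConfig()
        // would be used instead.
        config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("node %s Ready=%s since %s\n", node.Name, cond.Status, cond.LastTransitionTime)
            }
        }
    }
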
Dec 13 14:19:14.345936 env[1205]: time="2024-12-13T14:19:14.345856594Z" level=info msg="shim disconnected" id=a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250 Dec 13 14:19:14.345936 env[1205]: time="2024-12-13T14:19:14.345921466Z" level=warning msg="cleaning up after shim disconnected" id=a87e2cd9212b83cc4ffcffaa0dcfdcc2fc0bf32dcf819c06c6295e29cdc04250 namespace=k8s.io Dec 13 14:19:14.345936 env[1205]: time="2024-12-13T14:19:14.345932336Z" level=info msg="cleaning up dead shim" Dec 13 14:19:14.355443 env[1205]: time="2024-12-13T14:19:14.355374840Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\n" Dec 13 14:19:14.465798 kubelet[2018]: I1213 14:19:14.465662 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/290906fb-d721-442d-943d-307f76fc4c18-config-volume\") pod \"coredns-7db6d8ff4d-s59p8\" (UID: \"290906fb-d721-442d-943d-307f76fc4c18\") " pod="kube-system/coredns-7db6d8ff4d-s59p8" Dec 13 14:19:14.465798 kubelet[2018]: I1213 14:19:14.465752 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1eaff0c1-848a-41bd-88b2-fe696d34e896-config-volume\") pod \"coredns-7db6d8ff4d-6hznf\" (UID: \"1eaff0c1-848a-41bd-88b2-fe696d34e896\") " pod="kube-system/coredns-7db6d8ff4d-6hznf" Dec 13 14:19:14.465798 kubelet[2018]: I1213 14:19:14.465781 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p29bg\" (UniqueName: \"kubernetes.io/projected/290906fb-d721-442d-943d-307f76fc4c18-kube-api-access-p29bg\") pod \"coredns-7db6d8ff4d-s59p8\" (UID: \"290906fb-d721-442d-943d-307f76fc4c18\") " pod="kube-system/coredns-7db6d8ff4d-s59p8" Dec 13 14:19:14.466141 kubelet[2018]: I1213 14:19:14.465879 2018 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ptwt\" (UniqueName: \"kubernetes.io/projected/1eaff0c1-848a-41bd-88b2-fe696d34e896-kube-api-access-8ptwt\") pod \"coredns-7db6d8ff4d-6hznf\" (UID: \"1eaff0c1-848a-41bd-88b2-fe696d34e896\") " pod="kube-system/coredns-7db6d8ff4d-6hznf" Dec 13 14:19:14.611537 kubelet[2018]: E1213 14:19:14.611383 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.611537 kubelet[2018]: E1213 14:19:14.611378 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.612016 env[1205]: time="2024-12-13T14:19:14.611984784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hznf,Uid:1eaff0c1-848a-41bd-88b2-fe696d34e896,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:14.612598 env[1205]: time="2024-12-13T14:19:14.612545799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s59p8,Uid:290906fb-d721-442d-943d-307f76fc4c18,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:14.657297 systemd[1]: run-netns-cni\x2dbdda1a69\x2ddbd9\x2de122\x2d35ad\x2d065308192d82.mount: Deactivated successfully. 
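Annotation: the reconciler_common entries above attach two volumes to each coredns pod, a ConfigMap-backed config-volume and a projected kube-api-access-* service-account token volume. An illustrative volumes stanza consistent with those entries; the ConfigMap name "coredns" and the projected source shown are assumptions based on the standard coredns layout, not taken from the log:

    volumes:
    - name: config-volume
      configMap:
        name: coredns
    - name: kube-api-access-8ptwt     # auto-generated projected token volume
      projected:
        sources:
        - serviceAccountToken:
            path: token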
Dec 13 14:19:14.657431 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860-shm.mount: Deactivated successfully. Dec 13 14:19:14.657513 systemd[1]: run-netns-cni\x2d49db1835\x2d528f\x2d18e3\x2d944c\x2dd765e35b4a88.mount: Deactivated successfully. Dec 13 14:19:14.657599 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91-shm.mount: Deactivated successfully. Dec 13 14:19:14.662270 env[1205]: time="2024-12-13T14:19:14.662149100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hznf,Uid:1eaff0c1-848a-41bd-88b2-fe696d34e896,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:19:14.662633 kubelet[2018]: E1213 14:19:14.662549 2018 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:19:14.662745 kubelet[2018]: E1213 14:19:14.662674 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6hznf" Dec 13 14:19:14.662792 kubelet[2018]: E1213 14:19:14.662746 2018 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-6hznf" Dec 13 14:19:14.662867 kubelet[2018]: E1213 14:19:14.662824 2018 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6hznf_kube-system(1eaff0c1-848a-41bd-88b2-fe696d34e896)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6hznf_kube-system(1eaff0c1-848a-41bd-88b2-fe696d34e896)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1967308fa97b0f47168a157b9e9def1d3a315812d7213078c31a6a247eba5a91\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-6hznf" podUID="1eaff0c1-848a-41bd-88b2-fe696d34e896" Dec 13 14:19:14.663371 env[1205]: time="2024-12-13T14:19:14.663318200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s59p8,Uid:290906fb-d721-442d-943d-307f76fc4c18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:19:14.664792 kubelet[2018]: E1213 
14:19:14.664745 2018 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 13 14:19:14.664881 kubelet[2018]: E1213 14:19:14.664815 2018 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s59p8" Dec 13 14:19:14.664881 kubelet[2018]: E1213 14:19:14.664850 2018 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-s59p8" Dec 13 14:19:14.664972 kubelet[2018]: E1213 14:19:14.664921 2018 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-s59p8_kube-system(290906fb-d721-442d-943d-307f76fc4c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-s59p8_kube-system(290906fb-d721-442d-943d-307f76fc4c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53a29948837d5e5be97d039c6b14e16860e7f199fb6d7fa32e3c5f07d1905860\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-s59p8" podUID="290906fb-d721-442d-943d-307f76fc4c18" Dec 13 14:19:14.813082 kubelet[2018]: E1213 14:19:14.813002 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.815977 env[1205]: time="2024-12-13T14:19:14.815921138Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 13 14:19:14.875943 env[1205]: time="2024-12-13T14:19:14.875788398Z" level=info msg="CreateContainer within sandbox \"0b8c2db4a324c350d3ab2344e02b996a558514cdf234eae0962d0f669c134132\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"15fef714dbd28da690394d11ba7563dcf8725d90e11a93e489ef0caad684e047\"" Dec 13 14:19:14.876520 env[1205]: time="2024-12-13T14:19:14.876432600Z" level=info msg="StartContainer for \"15fef714dbd28da690394d11ba7563dcf8725d90e11a93e489ef0caad684e047\"" Dec 13 14:19:14.892372 systemd[1]: Started cri-containerd-15fef714dbd28da690394d11ba7563dcf8725d90e11a93e489ef0caad684e047.scope. 
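Annotation: both RunPodSandbox attempts above fail in the flannel CNI plugin's loadFlannelSubnetEnv step because /run/flannel/subnet.env does not exist yet; that file is written by the kube-flannel container created and started in the entries just above, after which the sandboxes are retried successfully at 14:19:26-27. An illustrative subnet.env is sketched below; the variable names are the ones the flannel CNI plugin reads, while the values are inferred from the cbr0 delegate config logged later (192.168.0.0/24 node subnet, MTU 1450), not copied from the node:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true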
Dec 13 14:19:14.918605 env[1205]: time="2024-12-13T14:19:14.918543446Z" level=info msg="StartContainer for \"15fef714dbd28da690394d11ba7563dcf8725d90e11a93e489ef0caad684e047\" returns successfully" Dec 13 14:19:15.818195 kubelet[2018]: E1213 14:19:15.818136 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:15.864896 kubelet[2018]: I1213 14:19:15.864810 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ptqr8" podStartSLOduration=3.8000211999999998 podStartE2EDuration="9.864788613s" podCreationTimestamp="2024-12-13 14:19:06 +0000 UTC" firstStartedPulling="2024-12-13 14:19:06.962833943 +0000 UTC m=+14.353669567" lastFinishedPulling="2024-12-13 14:19:13.027601356 +0000 UTC m=+20.418436980" observedRunningTime="2024-12-13 14:19:15.864767612 +0000 UTC m=+23.255603246" watchObservedRunningTime="2024-12-13 14:19:15.864788613 +0000 UTC m=+23.255624237" Dec 13 14:19:15.975448 systemd-networkd[1031]: flannel.1: Link UP Dec 13 14:19:15.975455 systemd-networkd[1031]: flannel.1: Gained carrier Dec 13 14:19:16.819168 kubelet[2018]: E1213 14:19:16.819100 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:17.875895 systemd-networkd[1031]: flannel.1: Gained IPv6LL Dec 13 14:19:21.663155 systemd[1]: Started sshd@5-10.0.0.23:22-10.0.0.1:54410.service. Dec 13 14:19:21.704164 sshd[2653]: Accepted publickey for core from 10.0.0.1 port 54410 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:21.705772 sshd[2653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:21.710666 systemd-logind[1192]: New session 6 of user core. Dec 13 14:19:21.711659 systemd[1]: Started session-6.scope. Dec 13 14:19:21.840843 sshd[2653]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:21.843392 systemd[1]: sshd@5-10.0.0.23:22-10.0.0.1:54410.service: Deactivated successfully. Dec 13 14:19:21.844317 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:19:21.845237 systemd-logind[1192]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:19:21.846080 systemd-logind[1192]: Removed session 6. 
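Annotation: the pod_startup_latency_tracker entry above for kube-flannel-ds-ptqr8 can be reproduced from its own timestamps: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window. Worked out with the logged values:

    podStartE2EDuration = 14:19:15.864788613 - 14:19:06           = 9.864788613 s
    image-pull window   = 14:19:13.027601356 - 14:19:06.962833943 = 6.064767413 s
    podStartSLOduration = 9.864788613 - 6.064767413               = 3.800021200 s

which matches the podStartSLOduration=3.8000212 reported by kubelet.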
Dec 13 14:19:26.753775 kubelet[2018]: E1213 14:19:26.753677 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:26.754316 env[1205]: time="2024-12-13T14:19:26.754255407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hznf,Uid:1eaff0c1-848a-41bd-88b2-fe696d34e896,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:26.789308 systemd-networkd[1031]: cni0: Link UP Dec 13 14:19:26.789317 systemd-networkd[1031]: cni0: Gained carrier Dec 13 14:19:26.792818 systemd-networkd[1031]: cni0: Lost carrier Dec 13 14:19:26.797214 systemd-networkd[1031]: veth0337c411: Link UP Dec 13 14:19:26.799725 kernel: cni0: port 1(veth0337c411) entered blocking state Dec 13 14:19:26.799790 kernel: cni0: port 1(veth0337c411) entered disabled state Dec 13 14:19:26.799834 kernel: device veth0337c411 entered promiscuous mode Dec 13 14:19:26.802843 kernel: cni0: port 1(veth0337c411) entered blocking state Dec 13 14:19:26.802928 kernel: cni0: port 1(veth0337c411) entered forwarding state Dec 13 14:19:26.802979 kernel: cni0: port 1(veth0337c411) entered disabled state Dec 13 14:19:26.812037 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0337c411: link becomes ready Dec 13 14:19:26.812279 kernel: cni0: port 1(veth0337c411) entered blocking state Dec 13 14:19:26.812419 kernel: cni0: port 1(veth0337c411) entered forwarding state Dec 13 14:19:26.814052 systemd-networkd[1031]: veth0337c411: Gained carrier Dec 13 14:19:26.815888 systemd-networkd[1031]: cni0: Gained carrier Dec 13 14:19:26.822498 env[1205]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c928), "name":"cbr0", "type":"bridge"} Dec 13 14:19:26.822498 env[1205]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:19:26.836531 env[1205]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:19:26.836278567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:26.836531 env[1205]: time="2024-12-13T14:19:26.836309415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:26.836531 env[1205]: time="2024-12-13T14:19:26.836318742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:26.836531 env[1205]: time="2024-12-13T14:19:26.836414352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/505adc1e4b73d7a6dd082bf5a09675ff9808797f7f1e58aa6b9e5821af79b94e pid=2737 runtime=io.containerd.runc.v2 Dec 13 14:19:26.845573 systemd[1]: Started sshd@6-10.0.0.23:22-10.0.0.1:54420.service. 
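Annotation: the env[1205] entries above show the flannel CNI plugin composing a bridge configuration from subnet.env and handing it to its delegate (host-local IPAM under a cbr0 bridge). The same one-line JSON from the log, reflowed here purely for readability:

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "hairpinMode": true,
      "isDefaultGateway": true,
      "isGateway": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}]
      }
    }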
Dec 13 14:19:26.858422 systemd[1]: Started cri-containerd-505adc1e4b73d7a6dd082bf5a09675ff9808797f7f1e58aa6b9e5821af79b94e.scope. Dec 13 14:19:26.869686 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:19:26.888548 sshd[2749]: Accepted publickey for core from 10.0.0.1 port 54420 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:26.890629 sshd[2749]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:26.893275 env[1205]: time="2024-12-13T14:19:26.893226991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6hznf,Uid:1eaff0c1-848a-41bd-88b2-fe696d34e896,Namespace:kube-system,Attempt:0,} returns sandbox id \"505adc1e4b73d7a6dd082bf5a09675ff9808797f7f1e58aa6b9e5821af79b94e\"" Dec 13 14:19:26.894697 kubelet[2018]: E1213 14:19:26.894645 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:26.897365 systemd[1]: Started session-7.scope. Dec 13 14:19:26.898240 systemd-logind[1192]: New session 7 of user core. Dec 13 14:19:26.898647 env[1205]: time="2024-12-13T14:19:26.898318924Z" level=info msg="CreateContainer within sandbox \"505adc1e4b73d7a6dd082bf5a09675ff9808797f7f1e58aa6b9e5821af79b94e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:19:26.921323 env[1205]: time="2024-12-13T14:19:26.921249248Z" level=info msg="CreateContainer within sandbox \"505adc1e4b73d7a6dd082bf5a09675ff9808797f7f1e58aa6b9e5821af79b94e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48a4ffce85abf504407e2b0a0da766ff4b21eafecae328f5af918c18853d3847\"" Dec 13 14:19:26.923458 env[1205]: time="2024-12-13T14:19:26.923410892Z" level=info msg="StartContainer for \"48a4ffce85abf504407e2b0a0da766ff4b21eafecae328f5af918c18853d3847\"" Dec 13 14:19:26.942407 systemd[1]: Started cri-containerd-48a4ffce85abf504407e2b0a0da766ff4b21eafecae328f5af918c18853d3847.scope. Dec 13 14:19:26.978990 env[1205]: time="2024-12-13T14:19:26.978926412Z" level=info msg="StartContainer for \"48a4ffce85abf504407e2b0a0da766ff4b21eafecae328f5af918c18853d3847\" returns successfully" Dec 13 14:19:27.030942 sshd[2749]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:27.033604 systemd[1]: sshd@6-10.0.0.23:22-10.0.0.1:54420.service: Deactivated successfully. Dec 13 14:19:27.034289 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:19:27.034889 systemd-logind[1192]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:19:27.035554 systemd-logind[1192]: Removed session 7. 
Dec 13 14:19:27.753688 kubelet[2018]: E1213 14:19:27.753618 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.754164 env[1205]: time="2024-12-13T14:19:27.754113317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s59p8,Uid:290906fb-d721-442d-943d-307f76fc4c18,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:27.813852 systemd-networkd[1031]: vethf82f8339: Link UP Dec 13 14:19:27.816624 kernel: cni0: port 2(vethf82f8339) entered blocking state Dec 13 14:19:27.816680 kernel: cni0: port 2(vethf82f8339) entered disabled state Dec 13 14:19:27.818759 kernel: device vethf82f8339 entered promiscuous mode Dec 13 14:19:27.823192 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:19:27.823239 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf82f8339: link becomes ready Dec 13 14:19:27.823261 kernel: cni0: port 2(vethf82f8339) entered blocking state Dec 13 14:19:27.824133 kernel: cni0: port 2(vethf82f8339) entered forwarding state Dec 13 14:19:27.825161 systemd-networkd[1031]: vethf82f8339: Gained carrier Dec 13 14:19:27.826924 env[1205]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e928), "name":"cbr0", "type":"bridge"} Dec 13 14:19:27.826924 env[1205]: delegateAdd: netconf sent to delegate plugin: Dec 13 14:19:27.836306 env[1205]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T14:19:27.836218871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:27.836306 env[1205]: time="2024-12-13T14:19:27.836274286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:27.836306 env[1205]: time="2024-12-13T14:19:27.836288312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:27.836519 env[1205]: time="2024-12-13T14:19:27.836471507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf5867361106adbd920a823694b16e42dce8800d9283ace829a3cc5a51767b60 pid=2864 runtime=io.containerd.runc.v2 Dec 13 14:19:27.839936 kubelet[2018]: E1213 14:19:27.839903 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.857507 systemd[1]: Started cri-containerd-cf5867361106adbd920a823694b16e42dce8800d9283ace829a3cc5a51767b60.scope. 
Dec 13 14:19:27.860090 kubelet[2018]: I1213 14:19:27.860036 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6hznf" podStartSLOduration=21.860013387 podStartE2EDuration="21.860013387s" podCreationTimestamp="2024-12-13 14:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:27.849421293 +0000 UTC m=+35.240256937" watchObservedRunningTime="2024-12-13 14:19:27.860013387 +0000 UTC m=+35.250849011" Dec 13 14:19:27.872617 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:19:27.894915 env[1205]: time="2024-12-13T14:19:27.894867516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-s59p8,Uid:290906fb-d721-442d-943d-307f76fc4c18,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf5867361106adbd920a823694b16e42dce8800d9283ace829a3cc5a51767b60\"" Dec 13 14:19:27.896089 kubelet[2018]: E1213 14:19:27.895904 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.898043 env[1205]: time="2024-12-13T14:19:27.898012849Z" level=info msg="CreateContainer within sandbox \"cf5867361106adbd920a823694b16e42dce8800d9283ace829a3cc5a51767b60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:19:27.909985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214308855.mount: Deactivated successfully. Dec 13 14:19:27.912035 env[1205]: time="2024-12-13T14:19:27.911974027Z" level=info msg="CreateContainer within sandbox \"cf5867361106adbd920a823694b16e42dce8800d9283ace829a3cc5a51767b60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f190bb27e09d2b24b6fa0e9d1c63b86f5203c39eeaabda18f9eaf14926c2f2f\"" Dec 13 14:19:27.912955 env[1205]: time="2024-12-13T14:19:27.912915307Z" level=info msg="StartContainer for \"5f190bb27e09d2b24b6fa0e9d1c63b86f5203c39eeaabda18f9eaf14926c2f2f\"" Dec 13 14:19:27.926681 systemd[1]: Started cri-containerd-5f190bb27e09d2b24b6fa0e9d1c63b86f5203c39eeaabda18f9eaf14926c2f2f.scope. 
Dec 13 14:19:27.952543 env[1205]: time="2024-12-13T14:19:27.952481674Z" level=info msg="StartContainer for \"5f190bb27e09d2b24b6fa0e9d1c63b86f5203c39eeaabda18f9eaf14926c2f2f\" returns successfully" Dec 13 14:19:28.307890 systemd-networkd[1031]: cni0: Gained IPv6LL Dec 13 14:19:28.627939 systemd-networkd[1031]: veth0337c411: Gained IPv6LL Dec 13 14:19:28.843079 kubelet[2018]: E1213 14:19:28.843030 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:28.843453 kubelet[2018]: E1213 14:19:28.843262 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:28.852615 kubelet[2018]: I1213 14:19:28.852552 2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-s59p8" podStartSLOduration=22.852528632 podStartE2EDuration="22.852528632s" podCreationTimestamp="2024-12-13 14:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:28.852488086 +0000 UTC m=+36.243323720" watchObservedRunningTime="2024-12-13 14:19:28.852528632 +0000 UTC m=+36.243364256" Dec 13 14:19:29.459976 systemd-networkd[1031]: vethf82f8339: Gained IPv6LL Dec 13 14:19:29.845025 kubelet[2018]: E1213 14:19:29.844900 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:29.845357 kubelet[2018]: E1213 14:19:29.845209 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:30.847599 kubelet[2018]: E1213 14:19:30.847538 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:32.036946 systemd[1]: Started sshd@7-10.0.0.23:22-10.0.0.1:49166.service. Dec 13 14:19:32.078310 sshd[2966]: Accepted publickey for core from 10.0.0.1 port 49166 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:32.079852 sshd[2966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:32.083881 systemd-logind[1192]: New session 8 of user core. Dec 13 14:19:32.084951 systemd[1]: Started session-8.scope. Dec 13 14:19:32.231025 sshd[2966]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:32.233865 systemd[1]: sshd@7-10.0.0.23:22-10.0.0.1:49166.service: Deactivated successfully. Dec 13 14:19:32.234731 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:19:32.235325 systemd-logind[1192]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:19:32.236198 systemd-logind[1192]: Removed session 8. Dec 13 14:19:37.236635 systemd[1]: Started sshd@8-10.0.0.23:22-10.0.0.1:49176.service. Dec 13 14:19:37.276872 sshd[3004]: Accepted publickey for core from 10.0.0.1 port 49176 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:37.278290 sshd[3004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:37.282147 systemd-logind[1192]: New session 9 of user core. Dec 13 14:19:37.283095 systemd[1]: Started session-9.scope. 
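Annotation: for both coredns pods the tracker entries above report podStartSLOduration equal to podStartE2EDuration; firstStartedPulling and lastFinishedPulling are the zero time (0001-01-01), so no image pull occurred and no pull window is subtracted:

    coredns-7db6d8ff4d-6hznf: podStartSLOduration = podStartE2EDuration = 21.860013387 s
    coredns-7db6d8ff4d-s59p8: podStartSLOduration = podStartE2EDuration = 22.852528632 s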
Dec 13 14:19:37.405694 sshd[3004]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:37.408847 systemd[1]: sshd@8-10.0.0.23:22-10.0.0.1:49176.service: Deactivated successfully. Dec 13 14:19:37.409410 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:19:37.409990 systemd-logind[1192]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:19:37.411284 systemd[1]: Started sshd@9-10.0.0.23:22-10.0.0.1:49180.service. Dec 13 14:19:37.412017 systemd-logind[1192]: Removed session 9. Dec 13 14:19:37.449556 sshd[3018]: Accepted publickey for core from 10.0.0.1 port 49180 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:37.450786 sshd[3018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:37.455065 systemd-logind[1192]: New session 10 of user core. Dec 13 14:19:37.456234 systemd[1]: Started session-10.scope. Dec 13 14:19:37.642562 sshd[3018]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:37.646227 systemd[1]: sshd@9-10.0.0.23:22-10.0.0.1:49180.service: Deactivated successfully. Dec 13 14:19:37.646976 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:19:37.647714 systemd-logind[1192]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:19:37.649178 systemd[1]: Started sshd@10-10.0.0.23:22-10.0.0.1:49194.service. Dec 13 14:19:37.650110 systemd-logind[1192]: Removed session 10. Dec 13 14:19:37.687535 sshd[3030]: Accepted publickey for core from 10.0.0.1 port 49194 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:37.688986 sshd[3030]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:37.692559 systemd-logind[1192]: New session 11 of user core. Dec 13 14:19:37.693434 systemd[1]: Started session-11.scope. Dec 13 14:19:37.990170 sshd[3030]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:37.992964 systemd[1]: sshd@10-10.0.0.23:22-10.0.0.1:49194.service: Deactivated successfully. Dec 13 14:19:37.993726 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:19:37.994517 systemd-logind[1192]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:19:37.995395 systemd-logind[1192]: Removed session 11. Dec 13 14:19:42.996251 systemd[1]: Started sshd@11-10.0.0.23:22-10.0.0.1:40392.service. Dec 13 14:19:43.037685 sshd[3064]: Accepted publickey for core from 10.0.0.1 port 40392 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:43.039383 sshd[3064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:43.045139 systemd-logind[1192]: New session 12 of user core. Dec 13 14:19:43.046440 systemd[1]: Started session-12.scope. Dec 13 14:19:43.174809 sshd[3064]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:43.178066 systemd[1]: sshd@11-10.0.0.23:22-10.0.0.1:40392.service: Deactivated successfully. Dec 13 14:19:43.179001 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:19:43.179668 systemd-logind[1192]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:19:43.180604 systemd-logind[1192]: Removed session 12. Dec 13 14:19:48.180747 systemd[1]: Started sshd@12-10.0.0.23:22-10.0.0.1:37338.service. 
Dec 13 14:19:48.223127 sshd[3099]: Accepted publickey for core from 10.0.0.1 port 37338 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:48.224822 sshd[3099]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:48.229250 systemd-logind[1192]: New session 13 of user core. Dec 13 14:19:48.230379 systemd[1]: Started session-13.scope. Dec 13 14:19:48.351437 sshd[3099]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:48.354997 systemd[1]: sshd@12-10.0.0.23:22-10.0.0.1:37338.service: Deactivated successfully. Dec 13 14:19:48.355589 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:19:48.356161 systemd-logind[1192]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:19:48.357341 systemd[1]: Started sshd@13-10.0.0.23:22-10.0.0.1:37348.service. Dec 13 14:19:48.358205 systemd-logind[1192]: Removed session 13. Dec 13 14:19:48.396005 sshd[3112]: Accepted publickey for core from 10.0.0.1 port 37348 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:48.397593 sshd[3112]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:48.401669 systemd-logind[1192]: New session 14 of user core. Dec 13 14:19:48.402785 systemd[1]: Started session-14.scope. Dec 13 14:19:48.659377 sshd[3112]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:48.662532 systemd[1]: sshd@13-10.0.0.23:22-10.0.0.1:37348.service: Deactivated successfully. Dec 13 14:19:48.663194 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:19:48.663700 systemd-logind[1192]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:19:48.664912 systemd[1]: Started sshd@14-10.0.0.23:22-10.0.0.1:37352.service. Dec 13 14:19:48.665840 systemd-logind[1192]: Removed session 14. Dec 13 14:19:48.704767 sshd[3123]: Accepted publickey for core from 10.0.0.1 port 37352 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:48.705999 sshd[3123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:48.710171 systemd-logind[1192]: New session 15 of user core. Dec 13 14:19:48.711080 systemd[1]: Started session-15.scope. Dec 13 14:19:50.174470 sshd[3123]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:50.179631 systemd[1]: Started sshd@15-10.0.0.23:22-10.0.0.1:37356.service. Dec 13 14:19:50.180779 systemd[1]: sshd@14-10.0.0.23:22-10.0.0.1:37352.service: Deactivated successfully. Dec 13 14:19:50.181618 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:19:50.182294 systemd-logind[1192]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:19:50.183255 systemd-logind[1192]: Removed session 15. Dec 13 14:19:50.221965 sshd[3155]: Accepted publickey for core from 10.0.0.1 port 37356 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:50.223384 sshd[3155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:50.227131 systemd-logind[1192]: New session 16 of user core. Dec 13 14:19:50.228174 systemd[1]: Started session-16.scope. Dec 13 14:19:50.546049 sshd[3155]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:50.549591 systemd[1]: sshd@15-10.0.0.23:22-10.0.0.1:37356.service: Deactivated successfully. Dec 13 14:19:50.550292 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:19:50.551130 systemd-logind[1192]: Session 16 logged out. Waiting for processes to exit. 
Dec 13 14:19:50.552652 systemd[1]: Started sshd@16-10.0.0.23:22-10.0.0.1:37368.service. Dec 13 14:19:50.553642 systemd-logind[1192]: Removed session 16. Dec 13 14:19:50.591519 sshd[3168]: Accepted publickey for core from 10.0.0.1 port 37368 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:50.593056 sshd[3168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:50.597101 systemd-logind[1192]: New session 17 of user core. Dec 13 14:19:50.598037 systemd[1]: Started session-17.scope. Dec 13 14:19:50.783457 sshd[3168]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:50.786993 systemd[1]: sshd@16-10.0.0.23:22-10.0.0.1:37368.service: Deactivated successfully. Dec 13 14:19:50.787935 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:19:50.788535 systemd-logind[1192]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:19:50.789509 systemd-logind[1192]: Removed session 17. Dec 13 14:19:55.789758 systemd[1]: Started sshd@17-10.0.0.23:22-10.0.0.1:37370.service. Dec 13 14:19:55.833521 sshd[3204]: Accepted publickey for core from 10.0.0.1 port 37370 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:19:55.835659 sshd[3204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:55.841369 systemd-logind[1192]: New session 18 of user core. Dec 13 14:19:55.842574 systemd[1]: Started session-18.scope. Dec 13 14:19:55.956562 sshd[3204]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:55.959896 systemd[1]: sshd@17-10.0.0.23:22-10.0.0.1:37370.service: Deactivated successfully. Dec 13 14:19:55.960977 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:19:55.961584 systemd-logind[1192]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:19:55.962676 systemd-logind[1192]: Removed session 18. Dec 13 14:20:00.961676 systemd[1]: Started sshd@18-10.0.0.23:22-10.0.0.1:43678.service. Dec 13 14:20:00.998752 sshd[3240]: Accepted publickey for core from 10.0.0.1 port 43678 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:20:01.000201 sshd[3240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:01.005494 systemd-logind[1192]: New session 19 of user core. Dec 13 14:20:01.006473 systemd[1]: Started session-19.scope. Dec 13 14:20:01.109773 sshd[3240]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:01.113131 systemd[1]: sshd@18-10.0.0.23:22-10.0.0.1:43678.service: Deactivated successfully. Dec 13 14:20:01.113894 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:20:01.114629 systemd-logind[1192]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:20:01.115461 systemd-logind[1192]: Removed session 19. Dec 13 14:20:06.115370 systemd[1]: Started sshd@19-10.0.0.23:22-10.0.0.1:43682.service. Dec 13 14:20:06.153926 sshd[3278]: Accepted publickey for core from 10.0.0.1 port 43682 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:20:06.155473 sshd[3278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:06.159480 systemd-logind[1192]: New session 20 of user core. Dec 13 14:20:06.160637 systemd[1]: Started session-20.scope. Dec 13 14:20:06.268384 sshd[3278]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:06.270864 systemd[1]: sshd@19-10.0.0.23:22-10.0.0.1:43682.service: Deactivated successfully. 
Dec 13 14:20:06.271626 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:20:06.272289 systemd-logind[1192]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:20:06.273243 systemd-logind[1192]: Removed session 20. Dec 13 14:20:11.274826 systemd[1]: Started sshd@20-10.0.0.23:22-10.0.0.1:59926.service. Dec 13 14:20:11.315563 sshd[3334]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:20:11.317442 sshd[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:11.324407 systemd-logind[1192]: New session 21 of user core. Dec 13 14:20:11.325531 systemd[1]: Started session-21.scope. Dec 13 14:20:11.447531 sshd[3334]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:11.451143 systemd[1]: sshd@20-10.0.0.23:22-10.0.0.1:59926.service: Deactivated successfully. Dec 13 14:20:11.452462 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:20:11.453191 systemd-logind[1192]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:20:11.454299 systemd-logind[1192]: Removed session 21. Dec 13 14:20:16.452777 systemd[1]: Started sshd@21-10.0.0.23:22-10.0.0.1:59938.service. Dec 13 14:20:16.491622 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 59938 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:20:16.493266 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:16.496888 systemd-logind[1192]: New session 22 of user core. Dec 13 14:20:16.497694 systemd[1]: Started session-22.scope. Dec 13 14:20:16.599284 sshd[3368]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:16.601408 systemd[1]: sshd@21-10.0.0.23:22-10.0.0.1:59938.service: Deactivated successfully. Dec 13 14:20:16.602142 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:20:16.602677 systemd-logind[1192]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:20:16.603362 systemd-logind[1192]: Removed session 22. Dec 13 14:20:16.753934 kubelet[2018]: E1213 14:20:16.753765 2018 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"