Dec 13 01:56:56.913965 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:56:56.913988 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:56:56.914003 kernel: BIOS-provided physical RAM map:
Dec 13 01:56:56.914025 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:56:56.914046 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:56:56.914057 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:56:56.914069 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:56:56.914077 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:56:56.914086 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:56:56.914093 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:56:56.914101 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:56:56.914108 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:56:56.914116 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:56:56.914124 kernel: NX (Execute Disable) protection: active
Dec 13 01:56:56.914134 kernel: SMBIOS 2.8 present.
Dec 13 01:56:56.914142 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:56:56.914149 kernel: Hypervisor detected: KVM
Dec 13 01:56:56.914156 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:56:56.914164 kernel: kvm-clock: cpu 0, msr 5619b001, primary cpu clock
Dec 13 01:56:56.914172 kernel: kvm-clock: using sched offset of 2497140400 cycles
Dec 13 01:56:56.914181 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:56:56.914189 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:56:56.914197 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:56:56.914207 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:56:56.914215 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:56:56.914223 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:56:56.914231 kernel: Using GB pages for direct mapping
Dec 13 01:56:56.914255 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:56:56.914264 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:56:56.914272 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914280 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914289 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914299 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:56:56.914307 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914315 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914322 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914331 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:56:56.914339 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:56:56.914347 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:56:56.914364 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:56:56.914377 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:56:56.914386 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:56:56.914394 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:56:56.914404 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:56:56.914413 kernel: No NUMA configuration found
Dec 13 01:56:56.914422 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:56:56.914433 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:56:56.914442 kernel: Zone ranges:
Dec 13 01:56:56.914451 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:56:56.914460 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:56:56.914469 kernel: Normal empty
Dec 13 01:56:56.914478 kernel: Movable zone start for each node
Dec 13 01:56:56.914488 kernel: Early memory node ranges
Dec 13 01:56:56.914497 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:56:56.914506 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:56:56.914516 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:56:56.914526 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:56:56.914534 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:56:56.914544 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:56:56.914553 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:56:56.914562 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:56:56.914571 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:56:56.914581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:56:56.914590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:56:56.914599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:56:56.914610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:56:56.914619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:56:56.914628 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:56:56.914637 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:56:56.914646 kernel: TSC deadline timer available
Dec 13 01:56:56.914655 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:56:56.914664 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:56:56.914674 kernel: kvm-guest: setup PV sched yield
Dec 13 01:56:56.914683 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:56:56.914693 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:56:56.914703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:56:56.914724 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:56:56.914734 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 01:56:56.914744 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 01:56:56.914753 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:56:56.914762 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 01:56:56.914771 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Dec 13 01:56:56.914780 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:56:56.914791 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:56:56.914801 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:56:56.914810 kernel: Policy zone: DMA32
Dec 13 01:56:56.914821 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:56:56.914831 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:56:56.914840 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:56:56.914850 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:56:56.914859 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:56:56.914871 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Dec 13 01:56:56.914880 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:56:56.914890 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:56:56.914899 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:56:56.914908 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:56:56.914918 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:56:56.914928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:56:56.914937 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:56:56.914946 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:56:56.914958 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:56:56.914967 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:56:56.914976 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:56:56.914986 kernel: random: crng init done
Dec 13 01:56:56.914995 kernel: Console: colour VGA+ 80x25
Dec 13 01:56:56.915015 kernel: printk: console [ttyS0] enabled
Dec 13 01:56:56.915024 kernel: ACPI: Core revision 20210730
Dec 13 01:56:56.915034 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:56:56.915043 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:56:56.915054 kernel: x2apic enabled
Dec 13 01:56:56.915063 kernel: Switched APIC routing to physical x2apic.
Dec 13 01:56:56.915073 kernel: kvm-guest: setup PV IPIs
Dec 13 01:56:56.915082 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:56:56.915091 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:56:56.915100 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:56:56.915110 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:56:56.915119 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:56:56.915129 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:56:56.915145 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:56:56.915155 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:56:56.915165 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:56:56.915176 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:56:56.915186 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:56:56.915195 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:56:56.915205 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:56:56.915215 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 01:56:56.915225 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:56:56.915237 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:56:56.915247 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:56:56.915256 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:56:56.915267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:56:56.915276 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:56:56.915286 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:56:56.915296 kernel: LSM: Security Framework initializing
Dec 13 01:56:56.915307 kernel: SELinux: Initializing.
Dec 13 01:56:56.915317 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:56:56.915327 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:56:56.915336 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:56:56.915346 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:56:56.915366 kernel: ... version: 0
Dec 13 01:56:56.915376 kernel: ... bit width: 48
Dec 13 01:56:56.915386 kernel: ... generic registers: 6
Dec 13 01:56:56.915395 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:56:56.915407 kernel: ... max period: 00007fffffffffff
Dec 13 01:56:56.915417 kernel: ... fixed-purpose events: 0
Dec 13 01:56:56.915426 kernel: ... event mask: 000000000000003f
Dec 13 01:56:56.915436 kernel: signal: max sigframe size: 1776
Dec 13 01:56:56.915445 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:56:56.915455 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:56:56.915465 kernel: x86: Booting SMP configuration:
Dec 13 01:56:56.915475 kernel: .... node #0, CPUs: #1
Dec 13 01:56:56.915485 kernel: kvm-clock: cpu 1, msr 5619b041, secondary cpu clock
Dec 13 01:56:56.915494 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 01:56:56.915505 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Dec 13 01:56:56.915515 kernel: #2
Dec 13 01:56:56.915525 kernel: kvm-clock: cpu 2, msr 5619b081, secondary cpu clock
Dec 13 01:56:56.915534 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 01:56:56.915544 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Dec 13 01:56:56.915554 kernel: #3
Dec 13 01:56:56.915563 kernel: kvm-clock: cpu 3, msr 5619b0c1, secondary cpu clock
Dec 13 01:56:56.915572 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 01:56:56.915582 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Dec 13 01:56:56.915593 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:56:56.915603 kernel: smpboot: Max logical packages: 1
Dec 13 01:56:56.915615 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:56:56.915625 kernel: devtmpfs: initialized
Dec 13 01:56:56.915637 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:56:56.915647 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:56:56.915657 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:56:56.915667 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:56:56.915676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:56:56.915687 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:56:56.915698 kernel: audit: type=2000 audit(1734055016.360:1): state=initialized audit_enabled=0 res=1
Dec 13 01:56:56.915718 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:56:56.915728 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:56:56.915738 kernel: cpuidle: using governor menu
Dec 13 01:56:56.915747 kernel: ACPI: bus type PCI registered
Dec 13 01:56:56.915757 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:56:56.915767 kernel: dca service started, version 1.12.1
Dec 13 01:56:56.915777 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:56:56.915789 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 01:56:56.915798 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:56:56.915808 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:56:56.915818 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:56:56.915828 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:56:56.915838 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:56:56.915847 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:56:56.915857 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:56:56.915867 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:56:56.915878 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:56:56.915888 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:56:56.915897 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:56:56.915907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:56:56.915917 kernel: ACPI: Interpreter enabled
Dec 13 01:56:56.915927 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:56:56.915936 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:56:56.915946 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:56:56.915956 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:56:56.915967 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:56:56.916121 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:56:56.916217 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:56:56.916302 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:56:56.916314 kernel: PCI host bridge to bus 0000:00
Dec 13 01:56:56.916416 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:56:56.916495 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:56:56.916576 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:56.916655 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:56:56.916759 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:56:56.916839 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:56:56.916916 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:56:56.917015 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:56:56.917120 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:56:56.917211 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:56:56.917337 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:56:56.917445 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:56:56.917536 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:56:56.917682 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:56:56.917796 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:56:56.917894 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:56:56.917987 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:56:56.918088 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:56:56.918181 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:56:56.918274 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:56:56.918376 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:56:56.918478 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:56:56.918577 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:56:56.918669 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:56:56.918781 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:56:56.918877 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:56:56.918977 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:56:56.919068 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:56:56.919185 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:56:56.919280 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:56:56.919377 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:56:56.919477 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:56:56.919568 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:56:56.919581 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:56:56.919592 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:56:56.919602 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:56:56.919615 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:56:56.919625 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:56:56.919635 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:56:56.919645 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:56:56.919655 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:56:56.919665 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:56:56.919675 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:56:56.919685 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:56:56.919695 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:56:56.919718 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:56:56.919728 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:56:56.919738 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:56:56.919748 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:56:56.919758 kernel: iommu: Default domain type: Translated
Dec 13 01:56:56.919769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:56:56.919862 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:56:56.919952 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:56:56.920042 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:56:56.920055 kernel: vgaarb: loaded
Dec 13 01:56:56.920066 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:56:56.920076 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:56:56.920086 kernel: PTP clock support registered
Dec 13 01:56:56.920096 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:56:56.920106 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:56:56.920115 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:56:56.920125 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:56:56.920135 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:56:56.920145 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:56:56.920154 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:56:56.920178 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:56:56.920191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:56:56.920201 kernel: pnp: PnP ACPI init
Dec 13 01:56:56.920300 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:56:56.920314 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:56:56.920326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:56:56.920336 kernel: NET: Registered PF_INET protocol family
Dec 13 01:56:56.920346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:56:56.920364 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:56:56.920373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:56:56.920382 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:56:56.920392 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 01:56:56.920401 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:56:56.920415 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:56:56.920426 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:56:56.920436 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:56:56.920445 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:56:56.920532 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:56:56.920611 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:56:56.920691 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:56:56.920785 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:56:56.920867 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:56:56.920951 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:56:56.920967 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:56:56.920977 kernel: Initialise system trusted keyrings
Dec 13 01:56:56.920987 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:56:56.920997 kernel: Key type asymmetric registered
Dec 13 01:56:56.921006 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:56:56.921016 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:56:56.921026 kernel: io scheduler mq-deadline registered
Dec 13 01:56:56.921035 kernel: io scheduler kyber registered
Dec 13 01:56:56.921045 kernel: io scheduler bfq registered
Dec 13 01:56:56.921056 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:56:56.921066 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:56:56.921076 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:56:56.921086 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:56:56.921096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:56:56.921105 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:56:56.921115 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:56:56.921125 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:56:56.921134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:56:56.921228 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:56:56.921242 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:56:56.921320 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:56:56.921413 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:56:56 UTC (1734055016)
Dec 13 01:56:56.921498 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:56:56.921511 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:56:56.921521 kernel: Segment Routing with IPv6
Dec 13 01:56:56.921531 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:56:56.921543 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:56:56.921553 kernel: Key type dns_resolver registered
Dec 13 01:56:56.921562 kernel: IPI shorthand broadcast: enabled
Dec 13 01:56:56.921572 kernel: sched_clock: Marking stable (595673968, 123358227)->(877491219, -158459024)
Dec 13 01:56:56.921582 kernel: registered taskstats version 1
Dec 13 01:56:56.921591 kernel: Loading compiled-in X.509 certificates
Dec 13 01:56:56.921601 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:56:56.921611 kernel: Key type .fscrypt registered
Dec 13 01:56:56.921621 kernel: Key type fscrypt-provisioning registered
Dec 13 01:56:56.921632 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:56:56.921642 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:56:56.921651 kernel: ima: No architecture policies found
Dec 13 01:56:56.921661 kernel: clk: Disabling unused clocks
Dec 13 01:56:56.921670 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:56:56.921680 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:56:56.921690 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:56:56.921699 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:56:56.921723 kernel: Run /init as init process
Dec 13 01:56:56.921732 kernel: with arguments:
Dec 13 01:56:56.921742 kernel: /init
Dec 13 01:56:56.921752 kernel: with environment:
Dec 13 01:56:56.921761 kernel: HOME=/
Dec 13 01:56:56.921770 kernel: TERM=linux
Dec 13 01:56:56.921780 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:56:56.921792 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:56:56.921807 systemd[1]: Detected virtualization kvm.
Dec 13 01:56:56.921818 systemd[1]: Detected architecture x86-64.
Dec 13 01:56:56.921828 systemd[1]: Running in initrd.
Dec 13 01:56:56.921838 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:56:56.921848 systemd[1]: Hostname set to .
Dec 13 01:56:56.921859 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:56:56.921869 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:56:56.921880 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:56:56.921891 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:56:56.921902 systemd[1]: Reached target paths.target.
Dec 13 01:56:56.921920 systemd[1]: Reached target slices.target.
Dec 13 01:56:56.921931 systemd[1]: Reached target swap.target.
Dec 13 01:56:56.921942 systemd[1]: Reached target timers.target.
Dec 13 01:56:56.921953 systemd[1]: Listening on iscsid.socket.
Dec 13 01:56:56.921965 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:56:56.921977 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:56:56.921987 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:56:56.921998 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:56:56.922009 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:56:56.922019 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:56:56.922030 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:56:56.922041 systemd[1]: Reached target sockets.target.
Dec 13 01:56:56.922051 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:56:56.922063 systemd[1]: Finished network-cleanup.service.
Dec 13 01:56:56.922074 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:56:56.922085 systemd[1]: Starting systemd-journald.service...
Dec 13 01:56:56.922095 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:56:56.922106 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:56:56.922116 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:56:56.922127 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:56:56.922138 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:56:56.922148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:56:56.922161 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:56:56.922174 systemd-journald[198]: Journal started
Dec 13 01:56:56.922223 systemd-journald[198]: Runtime Journal (/run/log/journal/d8d0228f2db94da3a7b3ab0393c7206b) is 6.0M, max 48.5M, 42.5M free.
Dec 13 01:56:56.914276 systemd-modules-load[199]: Inserted module 'overlay'
Dec 13 01:56:56.956654 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:56:56.956675 kernel: audit: type=1130 audit(1734055016.949:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.956684 systemd[1]: Started systemd-journald.service.
Dec 13 01:56:56.956696 kernel: Bridge firewalling registered
Dec 13 01:56:56.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.935034 systemd-resolved[200]: Positive Trust Anchors:
Dec 13 01:56:56.961115 kernel: audit: type=1130 audit(1734055016.956:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.935042 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:56:56.966217 kernel: audit: type=1130 audit(1734055016.961:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.935068 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:56:56.937309 systemd-resolved[200]: Defaulting to hostname 'linux'.
Dec 13 01:56:56.956606 systemd-modules-load[199]: Inserted module 'br_netfilter'
Dec 13 01:56:56.957543 systemd[1]: Started systemd-resolved.service.
Dec 13 01:56:56.962004 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:56:56.979886 kernel: audit: type=1130 audit(1734055016.975:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:56:56.974568 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:56:56.980052 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:56:56.982844 kernel: SCSI subsystem initialized
Dec 13 01:56:56.994194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:56:56.994235 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:56:56.994249 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 01:56:56.995661 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:56:57.001261 kernel: audit: type=1130 audit(1734055016.995:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:56.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:56.997395 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:56:57.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.000444 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 01:56:57.008762 kernel: audit: type=1130 audit(1734055017.002:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.008778 dracut-cmdline[216]: dracut-dracut-053 Dec 13 01:56:57.008778 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 01:56:57.008778 dracut-cmdline[216]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:56:57.019441 kernel: audit: type=1130 audit(1734055017.013:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:57.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.001385 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:56:57.003691 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:56:57.013053 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:56:57.053735 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:56:57.069744 kernel: iscsi: registered transport (tcp) Dec 13 01:56:57.090048 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:56:57.090079 kernel: QLogic iSCSI HBA Driver Dec 13 01:56:57.109669 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:56:57.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.112131 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:56:57.115763 kernel: audit: type=1130 audit(1734055017.110:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:57.156734 kernel: raid6: avx2x4 gen() 30796 MB/s Dec 13 01:56:57.173731 kernel: raid6: avx2x4 xor() 8376 MB/s Dec 13 01:56:57.190731 kernel: raid6: avx2x2 gen() 32390 MB/s Dec 13 01:56:57.207731 kernel: raid6: avx2x2 xor() 19191 MB/s Dec 13 01:56:57.224732 kernel: raid6: avx2x1 gen() 26440 MB/s Dec 13 01:56:57.241731 kernel: raid6: avx2x1 xor() 15303 MB/s Dec 13 01:56:57.258730 kernel: raid6: sse2x4 gen() 14748 MB/s Dec 13 01:56:57.275745 kernel: raid6: sse2x4 xor() 7561 MB/s Dec 13 01:56:57.292763 kernel: raid6: sse2x2 gen() 15928 MB/s Dec 13 01:56:57.309730 kernel: raid6: sse2x2 xor() 9820 MB/s Dec 13 01:56:57.326730 kernel: raid6: sse2x1 gen() 12511 MB/s Dec 13 01:56:57.344126 kernel: raid6: sse2x1 xor() 7820 MB/s Dec 13 01:56:57.344148 kernel: raid6: using algorithm avx2x2 gen() 32390 MB/s Dec 13 01:56:57.344158 kernel: raid6: .... xor() 19191 MB/s, rmw enabled Dec 13 01:56:57.344852 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:56:57.356733 kernel: xor: automatically using best checksumming function avx Dec 13 01:56:57.444736 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:56:57.453002 systemd[1]: Finished dracut-pre-udev.service. Dec 13 01:56:57.457556 kernel: audit: type=1130 audit(1734055017.452:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.456000 audit: BPF prog-id=7 op=LOAD Dec 13 01:56:57.456000 audit: BPF prog-id=8 op=LOAD Dec 13 01:56:57.457940 systemd[1]: Starting systemd-udevd.service... Dec 13 01:56:57.470656 systemd-udevd[401]: Using default interface naming scheme 'v252'. 
Dec 13 01:56:57.475323 systemd[1]: Started systemd-udevd.service. Dec 13 01:56:57.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.476118 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:56:57.485675 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 01:56:57.516639 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:56:57.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.518481 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:56:57.553796 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:56:57.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:57.590737 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:56:57.604767 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:56:57.604847 kernel: AES CTR mode by8 optimization enabled Dec 13 01:56:57.651131 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:56:57.656051 kernel: libata version 3.00 loaded. Dec 13 01:56:57.656069 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:56:57.656087 kernel: GPT:9289727 != 19775487 Dec 13 01:56:57.656098 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:56:57.656110 kernel: GPT:9289727 != 19775487 Dec 13 01:56:57.656121 kernel: GPT: Use GNU Parted to correct GPT errors. 
Dec 13 01:56:57.656132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:57.668895 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:56:57.676870 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:56:57.676890 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:56:57.677018 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:56:57.677133 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456) Dec 13 01:56:57.677147 kernel: scsi host0: ahci Dec 13 01:56:57.677302 kernel: scsi host1: ahci Dec 13 01:56:57.677440 kernel: scsi host2: ahci Dec 13 01:56:57.677564 kernel: scsi host3: ahci Dec 13 01:56:57.677695 kernel: scsi host4: ahci Dec 13 01:56:57.677841 kernel: scsi host5: ahci Dec 13 01:56:57.677982 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:56:57.677997 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:56:57.678009 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:56:57.678025 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:56:57.678038 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:56:57.678050 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:56:57.672798 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:56:57.709861 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:56:57.710941 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:56:57.720478 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:56:57.726776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:56:57.728684 systemd[1]: Starting disk-uuid.service... Dec 13 01:56:57.739208 disk-uuid[531]: Primary Header is updated. 
Dec 13 01:56:57.739208 disk-uuid[531]: Secondary Entries is updated. Dec 13 01:56:57.739208 disk-uuid[531]: Secondary Header is updated. Dec 13 01:56:57.743777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:57.746745 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:57.982745 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:57.982799 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:57.990740 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:57.990813 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:57.991738 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:56:57.992735 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:56:57.993960 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:56:57.993970 kernel: ata3.00: applying bridge limits Dec 13 01:56:57.995238 kernel: ata3.00: configured for UDMA/100 Dec 13 01:56:57.995732 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:56:58.028728 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:56:58.046280 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:56:58.046295 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:56:58.757748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:56:58.757961 disk-uuid[532]: The operation has completed successfully. Dec 13 01:56:58.783885 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:56:58.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:58.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:58.784010 systemd[1]: Finished disk-uuid.service. 
Dec 13 01:56:58.799889 systemd[1]: Starting verity-setup.service... Dec 13 01:56:58.813732 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:56:58.833523 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:56:58.836642 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:56:58.838841 systemd[1]: Finished verity-setup.service. Dec 13 01:56:58.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:58.894730 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:56:58.894975 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:56:58.896532 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:56:58.898469 systemd[1]: Starting ignition-setup.service... Dec 13 01:56:58.900698 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:56:58.907186 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:56:58.907223 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:56:58.907236 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:56:58.915488 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:56:58.977944 systemd[1]: Finished ignition-setup.service. Dec 13 01:56:58.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:58.984260 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:56:59.003084 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:56:59.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:56:59.007000 audit: BPF prog-id=9 op=LOAD Dec 13 01:56:59.014286 systemd[1]: Starting systemd-networkd.service... Dec 13 01:56:59.059186 systemd-networkd[717]: lo: Link UP Dec 13 01:56:59.059201 systemd-networkd[717]: lo: Gained carrier Dec 13 01:56:59.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.059720 systemd-networkd[717]: Enumeration completed Dec 13 01:56:59.059985 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:56:59.060272 systemd[1]: Started systemd-networkd.service. Dec 13 01:56:59.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.061721 systemd[1]: Reached target network.target. Dec 13 01:56:59.076984 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:56:59.076984 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 01:56:59.076984 iscsid[727]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 01:56:59.076984 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:56:59.076984 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 13 01:56:59.076984 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:56:59.076984 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:56:59.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.062038 systemd-networkd[717]: eth0: Link UP Dec 13 01:56:59.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.098087 ignition[713]: Ignition 2.14.0 Dec 13 01:56:59.062041 systemd-networkd[717]: eth0: Gained carrier Dec 13 01:56:59.098096 ignition[713]: Stage: fetch-offline Dec 13 01:56:59.066048 systemd[1]: Starting iscsiuio.service... Dec 13 01:56:59.098141 ignition[713]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:59.071142 systemd[1]: Started iscsiuio.service. Dec 13 01:56:59.098151 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:59.073861 systemd[1]: Starting iscsid.service... Dec 13 01:56:59.098325 ignition[713]: parsed url from cmdline: "" Dec 13 01:56:59.077929 systemd[1]: Started iscsid.service. Dec 13 01:56:59.098329 ignition[713]: no config URL provided Dec 13 01:56:59.079852 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:56:59.098335 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:56:59.081493 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:56:59.098344 ignition[713]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:56:59.094045 systemd[1]: Finished dracut-initqueue.service. 
Dec 13 01:56:59.098365 ignition[713]: op(1): [started] loading QEMU firmware config module Dec 13 01:56:59.097197 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:56:59.098374 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:56:59.099436 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:56:59.102993 ignition[713]: op(1): [finished] loading QEMU firmware config module Dec 13 01:56:59.101496 systemd[1]: Reached target remote-fs.target. Dec 13 01:56:59.115220 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:56:59.123414 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:56:59.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.135162 ignition[713]: parsing config with SHA512: 0360aa8c51acef153e7981a0e0eca036d060bbfe577eb3acf24c1071d66f0039e0ccbcef9dee83601715c548db85fd46351a7074e0786d11c5f4cf7a52e62539 Dec 13 01:56:59.141358 unknown[713]: fetched base config from "system" Dec 13 01:56:59.141370 unknown[713]: fetched user config from "qemu" Dec 13 01:56:59.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.141833 ignition[713]: fetch-offline: fetch-offline passed Dec 13 01:56:59.142796 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:56:59.141878 ignition[713]: Ignition finished successfully Dec 13 01:56:59.144038 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:56:59.144952 systemd[1]: Starting ignition-kargs.service... 
Dec 13 01:56:59.154165 ignition[743]: Ignition 2.14.0 Dec 13 01:56:59.154178 ignition[743]: Stage: kargs Dec 13 01:56:59.154277 ignition[743]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:59.154286 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:59.157114 systemd[1]: Finished ignition-kargs.service. Dec 13 01:56:59.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.155238 ignition[743]: kargs: kargs passed Dec 13 01:56:59.159676 systemd[1]: Starting ignition-disks.service... Dec 13 01:56:59.155274 ignition[743]: Ignition finished successfully Dec 13 01:56:59.165577 ignition[749]: Ignition 2.14.0 Dec 13 01:56:59.165588 ignition[749]: Stage: disks Dec 13 01:56:59.165682 ignition[749]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:59.165691 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:59.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.167440 systemd[1]: Finished ignition-disks.service. Dec 13 01:56:59.166605 ignition[749]: disks: disks passed Dec 13 01:56:59.168813 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:56:59.166637 ignition[749]: Ignition finished successfully Dec 13 01:56:59.170691 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:56:59.171545 systemd[1]: Reached target local-fs.target. Dec 13 01:56:59.173085 systemd[1]: Reached target sysinit.target. Dec 13 01:56:59.173137 systemd[1]: Reached target basic.target. Dec 13 01:56:59.173964 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 01:56:59.185409 systemd-fsck[758]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:56:59.191308 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:56:59.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.193134 systemd[1]: Mounting sysroot.mount... Dec 13 01:56:59.198732 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:56:59.198774 systemd[1]: Mounted sysroot.mount. Dec 13 01:56:59.199531 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:56:59.201767 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:56:59.202827 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:56:59.202856 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:56:59.202874 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:56:59.204811 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:56:59.206835 systemd[1]: Starting initrd-setup-root.service... Dec 13 01:56:59.211444 initrd-setup-root[768]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:56:59.213816 initrd-setup-root[776]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:56:59.217460 initrd-setup-root[784]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:56:59.221097 initrd-setup-root[792]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:56:59.245391 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:56:59.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:56:59.246240 systemd[1]: Starting ignition-mount.service... Dec 13 01:56:59.249011 systemd[1]: Starting sysroot-boot.service... Dec 13 01:56:59.250847 bash[809]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 01:56:59.257858 ignition[810]: INFO : Ignition 2.14.0 Dec 13 01:56:59.257858 ignition[810]: INFO : Stage: mount Dec 13 01:56:59.259486 ignition[810]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:59.259486 ignition[810]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:59.259486 ignition[810]: INFO : mount: mount passed Dec 13 01:56:59.259486 ignition[810]: INFO : Ignition finished successfully Dec 13 01:56:59.264705 systemd[1]: Finished ignition-mount.service. Dec 13 01:56:59.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.270473 systemd[1]: Finished sysroot-boot.service. Dec 13 01:56:59.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:56:59.845502 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:56:59.851734 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (819) Dec 13 01:56:59.853899 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:56:59.853919 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:56:59.853929 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:56:59.857996 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:56:59.858853 systemd[1]: Starting ignition-files.service... 
Dec 13 01:56:59.872575 ignition[839]: INFO : Ignition 2.14.0 Dec 13 01:56:59.872575 ignition[839]: INFO : Stage: files Dec 13 01:56:59.874622 ignition[839]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:56:59.874622 ignition[839]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:56:59.874622 ignition[839]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:56:59.878949 ignition[839]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:56:59.878949 ignition[839]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:56:59.878949 ignition[839]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:56:59.878949 ignition[839]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:56:59.878949 ignition[839]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:56:59.878949 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:56:59.878949 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:56:59.878949 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:56:59.878949 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:56:59.877505 unknown[839]: wrote ssh authorized keys file for user: core Dec 13 01:56:59.920045 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:57:00.011838 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 
01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:00.014166 
ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:00.014166 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:57:00.466644 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:57:00.845515 ignition[839]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:57:00.845515 ignition[839]: INFO : files: op(c): [started] processing unit "containerd.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(c): [finished] processing unit "containerd.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:57:00.849886 ignition[839]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:57:00.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.884337 ignition[839]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:57:00.884337 ignition[839]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:57:00.884337 ignition[839]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:00.884337 ignition[839]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:57:00.884337 ignition[839]: INFO : files: files passed
Dec 13 01:57:00.884337 ignition[839]: INFO : Ignition finished successfully
Dec 13 01:57:00.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.855792 systemd-networkd[717]: eth0: Gained IPv6LL
Dec 13 01:57:00.877574 systemd[1]: Finished ignition-files.service.
Dec 13 01:57:00.899082 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 01:57:00.879651 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 01:57:00.901670 initrd-setup-root-after-ignition[866]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:57:00.881573 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 01:57:00.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.882266 systemd[1]: Starting ignition-quench.service...
Dec 13 01:57:00.884474 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:57:00.884553 systemd[1]: Finished ignition-quench.service.
Dec 13 01:57:00.887280 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 01:57:00.889553 systemd[1]: Reached target ignition-complete.target.
Dec 13 01:57:00.892763 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 01:57:00.904231 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:57:00.904311 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 01:57:00.906392 systemd[1]: Reached target initrd-fs.target.
Dec 13 01:57:00.907212 systemd[1]: Reached target initrd.target.
Dec 13 01:57:00.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.908009 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 01:57:00.908592 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 01:57:00.918695 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 01:57:00.920754 systemd[1]: Starting initrd-cleanup.service...
Dec 13 01:57:00.928687 systemd[1]: Stopped target nss-lookup.target.
Dec 13 01:57:00.929618 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 01:57:00.931280 systemd[1]: Stopped target timers.target.
Dec 13 01:57:00.932924 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:57:00.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.933009 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 01:57:00.934562 systemd[1]: Stopped target initrd.target.
Dec 13 01:57:00.936236 systemd[1]: Stopped target basic.target.
Dec 13 01:57:00.937810 systemd[1]: Stopped target ignition-complete.target.
Dec 13 01:57:00.939415 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 01:57:00.941017 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 01:57:00.942814 systemd[1]: Stopped target remote-fs.target.
Dec 13 01:57:00.944494 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 01:57:00.946207 systemd[1]: Stopped target sysinit.target.
Dec 13 01:57:00.947777 systemd[1]: Stopped target local-fs.target.
Dec 13 01:57:00.949357 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 01:57:00.950950 systemd[1]: Stopped target swap.target.
Dec 13 01:57:00.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.952415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:57:00.952498 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 01:57:00.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.954131 systemd[1]: Stopped target cryptsetup.target.
Dec 13 01:57:00.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.955592 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:57:00.955671 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 01:57:00.957454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:57:00.957534 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 01:57:00.959136 systemd[1]: Stopped target paths.target.
Dec 13 01:57:00.960622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:57:00.964784 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 01:57:00.966231 systemd[1]: Stopped target slices.target.
Dec 13 01:57:00.968198 systemd[1]: Stopped target sockets.target.
Dec 13 01:57:00.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.970058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:57:00.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.970200 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 01:57:00.977734 iscsid[727]: iscsid shutting down.
Dec 13 01:57:00.972034 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:57:00.982333 ignition[879]: INFO : Ignition 2.14.0
Dec 13 01:57:00.982333 ignition[879]: INFO : Stage: umount
Dec 13 01:57:00.982333 ignition[879]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:57:00.982333 ignition[879]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:57:00.982333 ignition[879]: INFO : umount: umount passed
Dec 13 01:57:00.982333 ignition[879]: INFO : Ignition finished successfully
Dec 13 01:57:00.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.972149 systemd[1]: Stopped ignition-files.service.
Dec 13 01:57:00.974547 systemd[1]: Stopping ignition-mount.service...
Dec 13 01:57:00.976032 systemd[1]: Stopping iscsid.service...
Dec 13 01:57:00.978494 systemd[1]: Stopping sysroot-boot.service...
Dec 13 01:57:00.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.980106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:57:00.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.980263 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 01:57:00.982434 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:57:01.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.982522 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 01:57:00.985846 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 01:57:00.985917 systemd[1]: Stopped iscsid.service.
Dec 13 01:57:00.987641 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:57:00.987717 systemd[1]: Stopped ignition-mount.service.
Dec 13 01:57:00.990408 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:57:01.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.990471 systemd[1]: Finished initrd-cleanup.service.
Dec 13 01:57:00.993236 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:57:00.994062 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:57:01.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.994085 systemd[1]: Closed iscsid.socket.
Dec 13 01:57:01.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.994902 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:57:00.994933 systemd[1]: Stopped ignition-disks.service.
Dec 13 01:57:00.996500 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:57:01.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.996528 systemd[1]: Stopped ignition-kargs.service.
Dec 13 01:57:00.998085 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:57:00.998114 systemd[1]: Stopped ignition-setup.service.
Dec 13 01:57:01.030000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 01:57:01.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:00.998230 systemd[1]: Stopping iscsiuio.service...
Dec 13 01:57:01.000909 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 01:57:01.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.000974 systemd[1]: Stopped iscsiuio.service.
Dec 13 01:57:01.039770 kernel: kauditd_printk_skb: 53 callbacks suppressed
Dec 13 01:57:01.040610 kernel: audit: type=1131 audit(1734055021.035:64): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.002246 systemd[1]: Stopped target network.target.
Dec 13 01:57:01.003986 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:57:01.004013 systemd[1]: Closed iscsiuio.socket.
Dec 13 01:57:01.048454 kernel: audit: type=1131 audit(1734055021.043:65): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.048473 kernel: audit: type=1131 audit(1734055021.047:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.004134 systemd[1]: Stopping systemd-networkd.service...
Dec 13 01:57:01.056839 kernel: audit: type=1131 audit(1734055021.051:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.004303 systemd[1]: Stopping systemd-resolved.service...
Dec 13 01:57:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.008741 systemd-networkd[717]: eth0: DHCPv6 lease lost
Dec 13 01:57:01.067628 kernel: audit: type=1131 audit(1734055021.058:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.067643 kernel: audit: type=1334 audit(1734055021.058:69): prog-id=9 op=UNLOAD
Dec 13 01:57:01.067652 kernel: audit: type=1131 audit(1734055021.063:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.058000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 01:57:01.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.009918 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:57:01.072483 kernel: audit: type=1131 audit(1734055021.068:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.010025 systemd[1]: Stopped systemd-networkd.service.
Dec 13 01:57:01.080254 kernel: audit: type=1130 audit(1734055021.072:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.080267 kernel: audit: type=1131 audit(1734055021.072:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.013123 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:57:01.013158 systemd[1]: Closed systemd-networkd.socket.
Dec 13 01:57:01.015545 systemd[1]: Stopping network-cleanup.service...
Dec 13 01:57:01.017895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:57:01.017944 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 01:57:01.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.019049 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:57:01.019090 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 01:57:01.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:01.020936 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:57:01.020976 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 01:57:01.022001 systemd[1]: Stopping systemd-udevd.service...
Dec 13 01:57:01.025381 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 01:57:01.025776 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:57:01.025853 systemd[1]: Stopped systemd-resolved.service.
Dec 13 01:57:01.031215 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:57:01.031325 systemd[1]: Stopped systemd-udevd.service.
Dec 13 01:57:01.033665 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:57:01.097000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 01:57:01.097000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 01:57:01.033753 systemd[1]: Stopped network-cleanup.service.
Dec 13 01:57:01.097000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 01:57:01.097000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 01:57:01.097000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 01:57:01.035461 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:57:01.035489 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 01:57:01.040655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:57:01.040679 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 01:57:01.042261 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:57:01.042292 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 01:57:01.043979 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:57:01.044006 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 01:57:01.048460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:57:01.048489 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 01:57:01.055488 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 01:57:01.056848 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:57:01.056904 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 01:57:01.112202 systemd-journald[198]: Received SIGTERM from PID 1 (n/a).
Dec 13 01:57:01.061874 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:57:01.061908 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 01:57:01.063844 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:57:01.063875 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 01:57:01.069120 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 01:57:01.069461 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:57:01.069526 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 01:57:01.083973 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:57:01.084067 systemd[1]: Stopped sysroot-boot.service.
Dec 13 01:57:01.084399 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 01:57:01.086723 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:57:01.086768 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 01:57:01.088381 systemd[1]: Starting initrd-switch-root.service...
Dec 13 01:57:01.095589 systemd[1]: Switching root.
Dec 13 01:57:01.118805 systemd-journald[198]: Journal stopped
Dec 13 01:57:03.625188 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 01:57:03.625244 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 01:57:03.625259 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 01:57:03.625273 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:57:03.625287 kernel: SELinux: policy capability open_perms=1
Dec 13 01:57:03.625300 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:57:03.625316 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:57:03.625329 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:57:03.625342 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:57:03.625357 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:57:03.625370 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:57:03.625388 systemd[1]: Successfully loaded SELinux policy in 36.764ms.
Dec 13 01:57:03.625407 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.377ms.
Dec 13 01:57:03.625423 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:57:03.625438 systemd[1]: Detected virtualization kvm.
Dec 13 01:57:03.625452 systemd[1]: Detected architecture x86-64.
Dec 13 01:57:03.625466 systemd[1]: Detected first boot.
Dec 13 01:57:03.625483 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:57:03.625497 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 01:57:03.625512 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:57:03.625527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:57:03.625545 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:57:03.625561 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:03.625577 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:57:03.625591 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 01:57:03.625608 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 01:57:03.625622 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 01:57:03.625636 systemd[1]: Created slice system-getty.slice.
Dec 13 01:57:03.625649 systemd[1]: Created slice system-modprobe.slice.
Dec 13 01:57:03.625697 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 01:57:03.625726 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 01:57:03.625741 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 01:57:03.625754 systemd[1]: Created slice user.slice.
Dec 13 01:57:03.625768 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:57:03.625786 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 01:57:03.625804 systemd[1]: Set up automount boot.automount.
Dec 13 01:57:03.625819 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 01:57:03.625833 systemd[1]: Reached target integritysetup.target.
Dec 13 01:57:03.625849 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:57:03.625862 systemd[1]: Reached target remote-fs.target.
Dec 13 01:57:03.625876 systemd[1]: Reached target slices.target.
Dec 13 01:57:03.625891 systemd[1]: Reached target swap.target.
Dec 13 01:57:03.625907 systemd[1]: Reached target torcx.target.
Dec 13 01:57:03.625922 systemd[1]: Reached target veritysetup.target.
Dec 13 01:57:03.625936 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 01:57:03.625950 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 01:57:03.625965 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:57:03.625979 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:57:03.625994 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:57:03.626008 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:57:03.626023 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:57:03.626038 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:57:03.626056 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 01:57:03.626071 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 01:57:03.626086 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 01:57:03.626101 systemd[1]: Mounting media.mount...
Dec 13 01:57:03.626117 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:03.626132 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 01:57:03.626147 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 01:57:03.626161 systemd[1]: Mounting tmp.mount...
Dec 13 01:57:03.626186 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 01:57:03.626204 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 01:57:03.626220 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:57:03.626234 systemd[1]: Starting modprobe@configfs.service...
Dec 13 01:57:03.626251 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 01:57:03.626266 systemd[1]: Starting modprobe@drm.service...
Dec 13 01:57:03.626282 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 01:57:03.626299 systemd[1]: Starting modprobe@fuse.service...
Dec 13 01:57:03.626313 systemd[1]: Starting modprobe@loop.service...
Dec 13 01:57:03.626328 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:57:03.626346 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:57:03.626360 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:57:03.626375 systemd[1]: Starting systemd-journald.service...
Dec 13 01:57:03.626390 kernel: fuse: init (API version 7.34)
Dec 13 01:57:03.626404 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:57:03.626419 kernel: loop: module loaded
Dec 13 01:57:03.626434 systemd[1]: Starting systemd-network-generator.service...
Dec 13 01:57:03.626449 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 01:57:03.626463 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:57:03.626481 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:57:03.626495 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 01:57:03.626510 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 01:57:03.626524 systemd[1]: Mounted media.mount.
Dec 13 01:57:03.626539 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 01:57:03.626554 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 01:57:03.626570 systemd[1]: Mounted tmp.mount.
Dec 13 01:57:03.626585 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:57:03.626600 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 01:57:03.626619 systemd-journald[1022]: Journal started
Dec 13 01:57:03.626670 systemd-journald[1022]: Runtime Journal (/run/log/journal/d8d0228f2db94da3a7b3ab0393c7206b) is 6.0M, max 48.5M, 42.5M free.
Dec 13 01:57:03.537000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 01:57:03.537000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 01:57:03.623000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 01:57:03.623000 audit[1022]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc3a537020 a2=4000 a3=7ffc3a5370bc items=0 ppid=1 pid=1022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:57:03.623000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 01:57:03.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:03.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:03.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:03.629753 systemd[1]: Started systemd-journald.service.
Dec 13 01:57:03.630686 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:57:03.630837 systemd[1]: Finished modprobe@configfs.service.
Dec 13 01:57:03.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.631947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:03.632074 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:03.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.633139 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:03.633280 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:03.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.634285 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:03.634437 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 01:57:03.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.635579 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:57:03.635731 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:57:03.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.636746 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:03.636894 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:03.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.638129 systemd[1]: Finished systemd-modules-load.service. 
Dec 13 01:57:03.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.639505 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:57:03.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.640780 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:57:03.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.641943 systemd[1]: Reached target network-pre.target. Dec 13 01:57:03.643662 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:57:03.645335 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:57:03.646225 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:57:03.647322 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:57:03.649235 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:57:03.650383 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:03.651232 systemd[1]: Starting systemd-random-seed.service... Dec 13 01:57:03.652456 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:03.653320 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:03.655156 systemd[1]: Starting systemd-sysusers.service... 
Dec 13 01:57:03.657332 systemd-journald[1022]: Time spent on flushing to /var/log/journal/d8d0228f2db94da3a7b3ab0393c7206b is 21.741ms for 1043 entries. Dec 13 01:57:03.657332 systemd-journald[1022]: System Journal (/var/log/journal/d8d0228f2db94da3a7b3ab0393c7206b) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:57:03.687817 systemd-journald[1022]: Received client request to flush runtime journal. Dec 13 01:57:03.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.658016 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:57:03.688244 udevadm[1063]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:57:03.659874 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:57:03.661341 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:57:03.663500 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:57:03.665028 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:57:03.666527 systemd[1]: Reached target first-boot-complete.target. 
Dec 13 01:57:03.670299 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:57:03.672353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:57:03.674813 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:03.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:03.688855 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:57:03.696636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:57:03.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:04.334631 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:57:04.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:04.337553 systemd[1]: Starting systemd-udevd.service... Dec 13 01:57:04.369734 systemd-udevd[1074]: Using default interface naming scheme 'v252'. Dec 13 01:57:04.403782 systemd[1]: Started systemd-udevd.service. Dec 13 01:57:04.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:04.408599 systemd[1]: Starting systemd-networkd.service... Dec 13 01:57:04.417527 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:57:04.443630 systemd[1]: Found device dev-ttyS0.device. Dec 13 01:57:04.499087 systemd[1]: Started systemd-userdbd.service. 
Dec 13 01:57:04.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:04.510433 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:57:04.528748 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:57:04.557743 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:57:04.587000 audit[1102]: AVC avc: denied { confidentiality } for pid=1102 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 01:57:04.610315 systemd-networkd[1083]: lo: Link UP
Dec 13 01:57:04.610330 systemd-networkd[1083]: lo: Gained carrier
Dec 13 01:57:04.610767 systemd-networkd[1083]: Enumeration completed
Dec 13 01:57:04.610872 systemd-networkd[1083]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:57:04.611945 systemd-networkd[1083]: eth0: Link UP
Dec 13 01:57:04.611957 systemd-networkd[1083]: eth0: Gained carrier
Dec 13 01:57:04.612367 systemd[1]: Started systemd-networkd.service.
Dec 13 01:57:04.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:04.641977 systemd-networkd[1083]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:57:04.587000 audit[1102]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ec13a84630 a1=337fc a2=7f5a6fc64bc5 a3=5 items=110 ppid=1074 pid=1102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 01:57:04.587000 audit: CWD cwd="/"
Dec 13 01:57:04.587000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=1 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=2 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=3 name=(null) inode=14251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=4 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=5 name=(null) inode=14252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=6 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=7 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=8 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=9 name=(null) inode=14254 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=10 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=11 name=(null) inode=14255 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=12 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=13 name=(null) inode=14256 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=14 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=15 name=(null) inode=14257 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=16 name=(null) inode=14253 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=17 name=(null) inode=14258 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=18 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=19 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=20 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=21 name=(null) inode=14260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=22 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=23 name=(null) inode=14261 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=24 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=25 name=(null) inode=14262 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=26 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=27 name=(null) inode=14263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=28 name=(null) inode=14259 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=29 name=(null) inode=14264 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=30 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=31 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=32 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=33 name=(null) inode=14266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=34 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=35 name=(null) inode=14267 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=36 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=37 name=(null) inode=14268 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=38 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=39 name=(null) inode=14269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=40 name=(null) inode=14265 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=41 name=(null) inode=14270 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=42 name=(null) inode=14250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=43 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=44 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=45 name=(null) inode=14272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=46 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=47 name=(null) inode=14273 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=48 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=49 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=50 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=51 name=(null) inode=14275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=52 name=(null) inode=14271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=53 name=(null) inode=14276 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=55 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=56 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=57 name=(null) inode=14278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=58 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=59 name=(null) inode=14279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=60 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=61 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=62 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=63 name=(null) inode=14281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=64 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=65 name=(null) inode=14282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=66 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=67 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=68 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=69 name=(null) inode=14284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=70 name=(null) inode=14280 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=71 name=(null) inode=14285 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=72 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=73 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=74 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=75 name=(null) inode=14287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=76 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=77 name=(null) inode=14288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=78 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=79 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=80 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=81 name=(null) inode=14290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=82 name=(null) inode=14286 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=83 name=(null) inode=14291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=84 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=85 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=86 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=87 name=(null) inode=14293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=88 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=89 name=(null) inode=14294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=90 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=91 name=(null) inode=14295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=92 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=93 name=(null) inode=14296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=94 name=(null) inode=14292 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=95 name=(null) inode=14297 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=96 name=(null) inode=14277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=97 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=98 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=99 name=(null) inode=14299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=100 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=101 name=(null) inode=14300 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=102 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=103 name=(null) inode=14301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=104 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=105 name=(null) inode=14302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=106 name=(null) inode=14298 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=107 name=(null) inode=14303 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PATH item=109 name=(null) inode=15025 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 01:57:04.587000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 01:57:04.688746 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:57:04.691734 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:57:04.692059 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:57:04.692313 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:57:04.693735 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:57:04.774937 kernel: kvm: Nested Virtualization enabled
Dec 13 01:57:04.775083 kernel: SVM: kvm: Nested Paging enabled
Dec 13 01:57:04.775111 kernel: SVM: Virtual VMLOAD VMSAVE supported
Dec 13 01:57:04.775808 kernel: SVM: Virtual GIF supported
Dec 13 01:57:04.851749 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:57:04.894814 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 01:57:04.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:57:04.897878 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 01:57:04.911594 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:57:04.945170 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 01:57:04.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:04.946550 systemd[1]: Reached target cryptsetup.target. Dec 13 01:57:04.949171 systemd[1]: Starting lvm2-activation.service... Dec 13 01:57:04.961828 lvm[1113]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:04.996257 systemd[1]: Finished lvm2-activation.service. Dec 13 01:57:04.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:04.997975 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:57:04.999077 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:57:04.999103 systemd[1]: Reached target local-fs.target. Dec 13 01:57:05.000679 systemd[1]: Reached target machines.target. Dec 13 01:57:05.003534 systemd[1]: Starting ldconfig.service... Dec 13 01:57:05.007443 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.007513 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:05.009100 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:57:05.012511 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:57:05.016496 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:57:05.019464 systemd[1]: Starting systemd-sysext.service... 
Dec 13 01:57:05.021047 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1116 (bootctl) Dec 13 01:57:05.023594 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:57:05.040962 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:57:05.045103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:57:05.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.049836 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:57:05.050153 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:57:05.068271 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:57:05.131176 systemd-fsck[1126]: fsck.fat 4.2 (2021-01-31) Dec 13 01:57:05.131176 systemd-fsck[1126]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 01:57:05.128965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:57:05.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.135024 systemd[1]: Mounting boot.mount... Dec 13 01:57:05.346689 systemd[1]: Mounted boot.mount. Dec 13 01:57:05.359515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:57:05.360240 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:57:05.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:05.368748 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:57:05.369896 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:57:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.384735 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:57:05.388105 (sd-sysext)[1137]: Using extensions 'kubernetes'. Dec 13 01:57:05.388413 (sd-sysext)[1137]: Merged extensions into '/usr'. Dec 13 01:57:05.403506 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:05.404873 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:57:05.406100 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.407871 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:05.409858 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:05.411775 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:05.412727 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.412833 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:05.412925 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:05.415392 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:57:05.416572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:05.416716 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 01:57:05.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.417962 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:05.418069 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:05.419592 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:05.419755 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:05.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.421299 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:57:05.421421 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.422676 systemd[1]: Finished systemd-sysext.service. Dec 13 01:57:05.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.425143 systemd[1]: Starting ensure-sysext.service... Dec 13 01:57:05.426984 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:57:05.430156 systemd[1]: Reloading. Dec 13 01:57:05.439941 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:57:05.441093 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:57:05.442405 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:57:05.453555 ldconfig[1115]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:57:05.475736 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2024-12-13T01:57:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:05.476126 /usr/lib/systemd/system-generators/torcx-generator[1171]: time="2024-12-13T01:57:05Z" level=info msg="torcx already run" Dec 13 01:57:05.552960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:05.552978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 01:57:05.571557 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:05.620489 systemd[1]: Finished ldconfig.service. Dec 13 01:57:05.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.621670 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:57:05.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.625238 systemd[1]: Starting audit-rules.service... Dec 13 01:57:05.627044 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:57:05.629039 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:57:05.631433 systemd[1]: Starting systemd-resolved.service... Dec 13 01:57:05.633726 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:57:05.635522 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:57:05.638222 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:57:05.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.639000 audit[1230]: SYSTEM_BOOT pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:05.643401 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:05.647908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.649491 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:05.651521 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:05.654274 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:05.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.655366 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.655503 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:05.655635 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:05.656960 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:57:05.658932 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:57:05.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.660529 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:05.660746 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 01:57:05.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.662417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:05.662586 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:05.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.664426 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:05.664668 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:05.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.667130 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:57:05.667241 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.669003 systemd[1]: Starting systemd-update-done.service... Dec 13 01:57:05.672148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.673598 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:05.675689 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:05.677903 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:05.679041 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.679205 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:05.679348 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:05.680511 systemd[1]: Finished systemd-update-done.service. Dec 13 01:57:05.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.682350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:05.682516 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:05.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:05.684293 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:05.684472 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:05.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.686044 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:05.686273 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:05.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.687897 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:05.688022 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.691767 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.693467 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:05.695562 systemd[1]: Starting modprobe@drm.service... Dec 13 01:57:05.697650 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:05.699926 systemd[1]: Starting modprobe@loop.service... 
Dec 13 01:57:05.700992 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:05.701135 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:05.702533 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:57:05.703889 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:05.705102 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:05.705283 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:05.712133 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:05.712270 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:05.713444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:05.713568 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:05.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:57:05.717000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:57:05.717000 audit[1268]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcecd8a330 a2=420 a3=0 items=0 ppid=1220 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:05.717000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:57:05.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:05.718046 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:57:05.718623 augenrules[1268]: No rules Dec 13 01:57:06.329106 systemd-timesyncd[1227]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:57:06.329149 systemd-timesyncd[1227]: Initial clock synchronization to Fri 2024-12-13 01:57:06.329037 UTC. Dec 13 01:57:06.329579 systemd[1]: Finished audit-rules.service. Dec 13 01:57:06.330812 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:06.330969 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:06.332413 systemd[1]: Reached target time-set.target. Dec 13 01:57:06.334117 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:57:06.334154 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:06.334463 systemd[1]: Finished ensure-sysext.service. Dec 13 01:57:06.347313 systemd-resolved[1225]: Positive Trust Anchors: Dec 13 01:57:06.347326 systemd-resolved[1225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:57:06.347364 systemd-resolved[1225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:57:06.354679 systemd-resolved[1225]: Defaulting to hostname 'linux'. Dec 13 01:57:06.356255 systemd[1]: Started systemd-resolved.service. Dec 13 01:57:06.357261 systemd[1]: Reached target network.target. Dec 13 01:57:06.358126 systemd[1]: Reached target nss-lookup.target. Dec 13 01:57:06.359020 systemd[1]: Reached target sysinit.target. Dec 13 01:57:06.359908 systemd[1]: Started motdgen.path. Dec 13 01:57:06.360686 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:57:06.361965 systemd[1]: Started logrotate.timer. Dec 13 01:57:06.362807 systemd[1]: Started mdadm.timer. Dec 13 01:57:06.363538 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:57:06.364459 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:57:06.364483 systemd[1]: Reached target paths.target. Dec 13 01:57:06.365291 systemd[1]: Reached target timers.target. Dec 13 01:57:06.366492 systemd[1]: Listening on dbus.socket. Dec 13 01:57:06.368334 systemd[1]: Starting docker.socket... 
Dec 13 01:57:06.369853 systemd[1]: Listening on sshd.socket. Dec 13 01:57:06.370761 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:06.371013 systemd[1]: Listening on docker.socket. Dec 13 01:57:06.371841 systemd[1]: Reached target sockets.target. Dec 13 01:57:06.372669 systemd[1]: Reached target basic.target. Dec 13 01:57:06.373580 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:57:06.373616 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:57:06.373637 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:57:06.374404 systemd[1]: Starting containerd.service... Dec 13 01:57:06.375906 systemd[1]: Starting dbus.service... Dec 13 01:57:06.377505 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:57:06.379194 systemd[1]: Starting extend-filesystems.service... Dec 13 01:57:06.380243 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:57:06.381107 systemd[1]: Starting motdgen.service... Dec 13 01:57:06.382192 jq[1283]: false Dec 13 01:57:06.383093 systemd[1]: Starting prepare-helm.service... Dec 13 01:57:06.385268 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:57:06.387000 systemd[1]: Starting sshd-keygen.service... Dec 13 01:57:06.389398 systemd[1]: Starting systemd-logind.service... Dec 13 01:57:06.390212 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:06.390256 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Dec 13 01:57:06.393070 systemd[1]: Starting update-engine.service... Dec 13 01:57:06.393889 extend-filesystems[1284]: Found loop1 Dec 13 01:57:06.393889 extend-filesystems[1284]: Found sr0 Dec 13 01:57:06.393889 extend-filesystems[1284]: Found vda Dec 13 01:57:06.393889 extend-filesystems[1284]: Found vda1 Dec 13 01:57:06.393889 extend-filesystems[1284]: Found vda2 Dec 13 01:57:06.393889 extend-filesystems[1284]: Found vda3 Dec 13 01:57:06.393889 extend-filesystems[1284]: Found usr Dec 13 01:57:06.398280 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:57:06.401390 extend-filesystems[1284]: Found vda4 Dec 13 01:57:06.401390 extend-filesystems[1284]: Found vda6 Dec 13 01:57:06.401390 extend-filesystems[1284]: Found vda7 Dec 13 01:57:06.401390 extend-filesystems[1284]: Found vda9 Dec 13 01:57:06.401390 extend-filesystems[1284]: Checking size of /dev/vda9 Dec 13 01:57:06.403794 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:57:06.405641 dbus-daemon[1282]: [system] SELinux support is enabled Dec 13 01:57:06.404006 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:57:06.419926 jq[1301]: true Dec 13 01:57:06.404666 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:57:06.404855 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:57:06.408483 systemd[1]: Started dbus.service. Dec 13 01:57:06.420381 jq[1312]: true Dec 13 01:57:06.413778 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:57:06.413800 systemd[1]: Reached target system-config.target. Dec 13 01:57:06.420097 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:57:06.420116 systemd[1]: Reached target user-config.target. 
Dec 13 01:57:06.426488 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:57:06.426720 systemd[1]: Finished motdgen.service. Dec 13 01:57:06.428227 tar[1311]: linux-amd64/helm Dec 13 01:57:06.436189 extend-filesystems[1284]: Resized partition /dev/vda9 Dec 13 01:57:06.442037 extend-filesystems[1334]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:57:06.445561 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:57:06.452389 env[1313]: time="2024-12-13T01:57:06.452335320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:57:06.457225 update_engine[1298]: I1213 01:57:06.457072 1298 main.cc:92] Flatcar Update Engine starting Dec 13 01:57:06.468433 update_engine[1298]: I1213 01:57:06.464197 1298 update_check_scheduler.cc:74] Next update check in 7m40s Dec 13 01:57:06.459722 systemd[1]: Started update-engine.service. Dec 13 01:57:06.463045 systemd[1]: Started locksmithd.service. Dec 13 01:57:06.473840 env[1313]: time="2024-12-13T01:57:06.473803941Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:57:06.473960 env[1313]: time="2024-12-13T01:57:06.473931360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.474561 systemd-logind[1294]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:57:06.474585 systemd-logind[1294]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:57:06.474790 systemd-logind[1294]: New seat seat0. Dec 13 01:57:06.481572 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:57:06.485170 systemd[1]: Started systemd-logind.service. Dec 13 01:57:06.504123 env[1313]: time="2024-12-13T01:57:06.481979684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504123 env[1313]: time="2024-12-13T01:57:06.482017955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504200 env[1313]: time="2024-12-13T01:57:06.504171902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504200 env[1313]: time="2024-12-13T01:57:06.504196198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504268 env[1313]: time="2024-12-13T01:57:06.504209563Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:57:06.504268 env[1313]: time="2024-12-13T01:57:06.504218860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504309 env[1313]: time="2024-12-13T01:57:06.504278903Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504476 env[1313]: time="2024-12-13T01:57:06.504453410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:57:06.504666 extend-filesystems[1334]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:57:06.504666 extend-filesystems[1334]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:57:06.504666 extend-filesystems[1334]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Dec 13 01:57:06.509949 extend-filesystems[1284]: Resized filesystem in /dev/vda9 Dec 13 01:57:06.505315 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:57:06.511377 env[1313]: time="2024-12-13T01:57:06.504819917Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:57:06.511377 env[1313]: time="2024-12-13T01:57:06.504836538Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:57:06.511377 env[1313]: time="2024-12-13T01:57:06.504981330Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:57:06.511377 env[1313]: time="2024-12-13T01:57:06.504994054Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:57:06.505527 systemd[1]: Finished extend-filesystems.service. Dec 13 01:57:06.512501 bash[1340]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:57:06.513106 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514611951Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514634082Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514644862Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514682182Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514695197Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514707260Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514717960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514729441Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514740131Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514752484Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514763295Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514773123Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514838946Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:57:06.518988 env[1313]: time="2024-12-13T01:57:06.514897747Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:57:06.516450 systemd[1]: Started containerd.service. 
Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515168174Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515189594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515202448Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515240369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515250839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515261890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515273531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515284081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515294892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515304369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515313737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515324768Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515416750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515429594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519282 env[1313]: time="2024-12-13T01:57:06.515439332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:57:06.519590 env[1313]: time="2024-12-13T01:57:06.515449111Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:57:06.519590 env[1313]: time="2024-12-13T01:57:06.515460322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:57:06.519590 env[1313]: time="2024-12-13T01:57:06.515469279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:57:06.519590 env[1313]: time="2024-12-13T01:57:06.515486030Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:57:06.519590 env[1313]: time="2024-12-13T01:57:06.515517469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.515699190Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.515742922Z" level=info msg="Connect containerd service" Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.515771535Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.516186273Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.516337397Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.516364407Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 01:57:06.519698 env[1313]: time="2024-12-13T01:57:06.516398431Z" level=info msg="containerd successfully booted in 0.066353s" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.519755136Z" level=info msg="Start subscribing containerd event" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.519817583Z" level=info msg="Start recovering state" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.519873398Z" level=info msg="Start event monitor" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.519885521Z" level=info msg="Start snapshots syncer" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.519897814Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:57:06.520337 env[1313]: time="2024-12-13T01:57:06.520016596Z" level=info msg="Start streaming server" Dec 13 01:57:06.526060 locksmithd[1344]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:57:06.655254 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:06.655311 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:06.819651 tar[1311]: linux-amd64/LICENSE Dec 13 01:57:06.819651 tar[1311]: linux-amd64/README.md Dec 13 01:57:06.823984 systemd[1]: Finished prepare-helm.service. Dec 13 01:57:07.033748 systemd-networkd[1083]: eth0: Gained IPv6LL Dec 13 01:57:07.035676 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:57:07.037020 systemd[1]: Reached target network-online.target. Dec 13 01:57:07.039286 systemd[1]: Starting kubelet.service... Dec 13 01:57:07.151209 sshd_keygen[1300]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:57:07.168575 systemd[1]: Finished sshd-keygen.service. Dec 13 01:57:07.170954 systemd[1]: Starting issuegen.service... 
Dec 13 01:57:07.176034 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:57:07.176278 systemd[1]: Finished issuegen.service. Dec 13 01:57:07.178430 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:57:07.185104 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:57:07.187437 systemd[1]: Started getty@tty1.service. Dec 13 01:57:07.189277 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:57:07.190408 systemd[1]: Reached target getty.target. Dec 13 01:57:07.591608 systemd[1]: Started kubelet.service. Dec 13 01:57:07.593293 systemd[1]: Reached target multi-user.target. Dec 13 01:57:07.595760 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:57:07.602571 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:57:07.602761 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:57:07.607328 systemd[1]: Startup finished in 5.230s (kernel) + 5.849s (userspace) = 11.079s. Dec 13 01:57:08.071612 kubelet[1383]: E1213 01:57:08.071465 1383 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:08.073635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:08.073775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:15.780698 systemd[1]: Created slice system-sshd.slice. Dec 13 01:57:15.781654 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:46864.service. 
Dec 13 01:57:15.822095 sshd[1395]: Accepted publickey for core from 10.0.0.1 port 46864 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:57:15.823409 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:15.829722 systemd[1]: Created slice user-500.slice. Dec 13 01:57:15.830492 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:57:15.831938 systemd-logind[1294]: New session 1 of user core. Dec 13 01:57:15.838534 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:57:15.839771 systemd[1]: Starting user@500.service... Dec 13 01:57:15.842315 (systemd)[1400]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:15.902851 systemd[1400]: Queued start job for default target default.target. Dec 13 01:57:15.903059 systemd[1400]: Reached target paths.target. Dec 13 01:57:15.903079 systemd[1400]: Reached target sockets.target. Dec 13 01:57:15.903094 systemd[1400]: Reached target timers.target. Dec 13 01:57:15.903108 systemd[1400]: Reached target basic.target. Dec 13 01:57:15.903148 systemd[1400]: Reached target default.target. Dec 13 01:57:15.903173 systemd[1400]: Startup finished in 56ms. Dec 13 01:57:15.903232 systemd[1]: Started user@500.service. Dec 13 01:57:15.904028 systemd[1]: Started session-1.scope. Dec 13 01:57:15.952706 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:46868.service. Dec 13 01:57:15.993180 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:57:15.994177 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:15.997278 systemd-logind[1294]: New session 2 of user core. Dec 13 01:57:15.997984 systemd[1]: Started session-2.scope. Dec 13 01:57:16.048362 sshd[1409]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:16.050321 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:34098.service. 
Dec 13 01:57:16.050696 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:46868.service: Deactivated successfully. Dec 13 01:57:16.051394 systemd-logind[1294]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:57:16.051471 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:57:16.052240 systemd-logind[1294]: Removed session 2. Dec 13 01:57:16.087652 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 34098 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:57:16.088567 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:16.091284 systemd-logind[1294]: New session 3 of user core. Dec 13 01:57:16.091909 systemd[1]: Started session-3.scope. Dec 13 01:57:16.139258 sshd[1414]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:16.141805 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:34112.service. Dec 13 01:57:16.142356 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:34098.service: Deactivated successfully. Dec 13 01:57:16.143249 systemd-logind[1294]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:57:16.143256 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:57:16.144145 systemd-logind[1294]: Removed session 3. Dec 13 01:57:16.179212 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 34112 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:57:16.180028 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:16.182694 systemd-logind[1294]: New session 4 of user core. Dec 13 01:57:16.183409 systemd[1]: Started session-4.scope. Dec 13 01:57:16.234833 sshd[1422]: pam_unix(sshd:session): session closed for user core Dec 13 01:57:16.237406 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:34118.service. Dec 13 01:57:16.237972 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:34112.service: Deactivated successfully. 
Dec 13 01:57:16.239251 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:57:16.239286 systemd-logind[1294]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:57:16.240319 systemd-logind[1294]: Removed session 4. Dec 13 01:57:16.275119 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 34118 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:57:16.275987 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:57:16.278734 systemd-logind[1294]: New session 5 of user core. Dec 13 01:57:16.279508 systemd[1]: Started session-5.scope. Dec 13 01:57:16.332539 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:57:16.332734 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:57:16.349112 systemd[1]: Starting docker.service... Dec 13 01:57:16.375724 env[1447]: time="2024-12-13T01:57:16.375657772Z" level=info msg="Starting up" Dec 13 01:57:16.376656 env[1447]: time="2024-12-13T01:57:16.376638441Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:57:16.376656 env[1447]: time="2024-12-13T01:57:16.376653529Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:57:16.376735 env[1447]: time="2024-12-13T01:57:16.376681421Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:57:16.376735 env[1447]: time="2024-12-13T01:57:16.376693143Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:57:16.378105 env[1447]: time="2024-12-13T01:57:16.378089632Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:57:16.378105 env[1447]: time="2024-12-13T01:57:16.378103107Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:57:16.378165 env[1447]: 
time="2024-12-13T01:57:16.378113357Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:57:16.378165 env[1447]: time="2024-12-13T01:57:16.378119999Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:57:17.042382 env[1447]: time="2024-12-13T01:57:17.042327044Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 01:57:17.042382 env[1447]: time="2024-12-13T01:57:17.042353514Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 01:57:17.042627 env[1447]: time="2024-12-13T01:57:17.042500560Z" level=info msg="Loading containers: start." Dec 13 01:57:17.147567 kernel: Initializing XFRM netlink socket Dec 13 01:57:17.173621 env[1447]: time="2024-12-13T01:57:17.173592925Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 01:57:17.215706 systemd-networkd[1083]: docker0: Link UP Dec 13 01:57:17.235576 env[1447]: time="2024-12-13T01:57:17.235531652Z" level=info msg="Loading containers: done." Dec 13 01:57:17.248243 env[1447]: time="2024-12-13T01:57:17.248200400Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:57:17.248375 env[1447]: time="2024-12-13T01:57:17.248357886Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 01:57:17.248443 env[1447]: time="2024-12-13T01:57:17.248425132Z" level=info msg="Daemon has completed initialization" Dec 13 01:57:17.263891 systemd[1]: Started docker.service. 
Dec 13 01:57:17.269632 env[1447]: time="2024-12-13T01:57:17.269585074Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:57:17.913757 env[1313]: time="2024-12-13T01:57:17.913717923Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:57:18.324569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:57:18.324751 systemd[1]: Stopped kubelet.service. Dec 13 01:57:18.325973 systemd[1]: Starting kubelet.service... Dec 13 01:57:18.394256 systemd[1]: Started kubelet.service. Dec 13 01:57:18.733652 kubelet[1592]: E1213 01:57:18.733505 1592 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:18.736784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:18.736958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:18.822743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158741895.mount: Deactivated successfully. 
Dec 13 01:57:20.484869 env[1313]: time="2024-12-13T01:57:20.484816771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:20.487197 env[1313]: time="2024-12-13T01:57:20.487150558Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:20.489398 env[1313]: time="2024-12-13T01:57:20.489364971Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:20.491171 env[1313]: time="2024-12-13T01:57:20.491144959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:20.491828 env[1313]: time="2024-12-13T01:57:20.491796200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:57:20.500505 env[1313]: time="2024-12-13T01:57:20.500466621Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:57:22.658133 env[1313]: time="2024-12-13T01:57:22.658078132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:22.660213 env[1313]: time="2024-12-13T01:57:22.660182368Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 01:57:22.661914 env[1313]: time="2024-12-13T01:57:22.661885773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:22.663797 env[1313]: time="2024-12-13T01:57:22.663764336Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:22.664406 env[1313]: time="2024-12-13T01:57:22.664365874Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:57:22.673087 env[1313]: time="2024-12-13T01:57:22.673042255Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:57:24.259038 env[1313]: time="2024-12-13T01:57:24.258975586Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:24.261138 env[1313]: time="2024-12-13T01:57:24.261111993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:24.263045 env[1313]: time="2024-12-13T01:57:24.262984785Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:24.265848 env[1313]: time="2024-12-13T01:57:24.265801878Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:24.266241 env[1313]: time="2024-12-13T01:57:24.266200436Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:57:24.277774 env[1313]: time="2024-12-13T01:57:24.277736871Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:57:25.516619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932190185.mount: Deactivated successfully. Dec 13 01:57:26.043729 env[1313]: time="2024-12-13T01:57:26.043677890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.045718 env[1313]: time="2024-12-13T01:57:26.045689803Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.047185 env[1313]: time="2024-12-13T01:57:26.047135765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.049226 env[1313]: time="2024-12-13T01:57:26.049200147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:26.049589 env[1313]: time="2024-12-13T01:57:26.049563698Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference 
\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:57:26.057599 env[1313]: time="2024-12-13T01:57:26.057565314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:57:26.703420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838940866.mount: Deactivated successfully. Dec 13 01:57:27.944166 env[1313]: time="2024-12-13T01:57:27.944102244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:27.946021 env[1313]: time="2024-12-13T01:57:27.945989283Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:27.947681 env[1313]: time="2024-12-13T01:57:27.947655127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:27.949355 env[1313]: time="2024-12-13T01:57:27.949312105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:27.950001 env[1313]: time="2024-12-13T01:57:27.949972824Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:57:27.957428 env[1313]: time="2024-12-13T01:57:27.957401455Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:57:28.429787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount101239747.mount: Deactivated successfully. 
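Each successful pull in the entries above ends with a fixed-format containerd message: `PullImage \"<tag>\" returns image reference \"sha256:...\"`. A minimal sketch (not part of the log; plain Python, assuming the journal was captured with the literal `\"` escaping shown above) of extracting the tag-to-image-ID mapping from such lines:

```python
import re

# containerd logs a fixed-format message when a pull completes; the tag and
# the resolved image ID (a sha256: reference) both appear in escaped quotes.
PULL_RETURN = re.compile(
    r'PullImage \\"(?P<image>[^\\"]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
)

def pulled_images(journal_lines):
    """Map each pulled image tag to the image reference containerd returned."""
    refs = {}
    for line in journal_lines:
        m = PULL_RETURN.search(line)
        if m:
            refs[m.group("image")] = m.group("ref")
    return refs
```

Note that the `sha256:` value in the return message is the local image ID, which is distinct from the `@sha256:` repo digest seen in the accompanying `ImageCreate` events.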
Dec 13 01:57:28.435638 env[1313]: time="2024-12-13T01:57:28.435601771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:28.437728 env[1313]: time="2024-12-13T01:57:28.437687813Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:28.439167 env[1313]: time="2024-12-13T01:57:28.439124919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:28.440508 env[1313]: time="2024-12-13T01:57:28.440482244Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:28.440838 env[1313]: time="2024-12-13T01:57:28.440817082Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:57:28.449511 env[1313]: time="2024-12-13T01:57:28.449475800Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:57:28.930634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:57:28.930775 systemd[1]: Stopped kubelet.service. Dec 13 01:57:28.931948 systemd[1]: Starting kubelet.service... Dec 13 01:57:28.937879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113142596.mount: Deactivated successfully. Dec 13 01:57:29.002053 systemd[1]: Started kubelet.service. 
Dec 13 01:57:29.141492 kubelet[1648]: E1213 01:57:29.141430 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:57:29.143619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:57:29.143760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:57:32.528578 env[1313]: time="2024-12-13T01:57:32.528498103Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:32.530768 env[1313]: time="2024-12-13T01:57:32.530738575Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:32.532927 env[1313]: time="2024-12-13T01:57:32.532901701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:32.534642 env[1313]: time="2024-12-13T01:57:32.534584728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:57:32.535382 env[1313]: time="2024-12-13T01:57:32.535348841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:57:35.061185 systemd[1]: Stopped kubelet.service. Dec 13 01:57:35.063164 systemd[1]: Starting kubelet.service... 
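The kubelet failure above is the expected first-boot pattern: the unit exits with status 1 because `/var/lib/kubelet/config.yaml` does not exist yet (it is normally written by `kubeadm init`/`join`), and systemd's restart policy keeps rescheduling the service ("restart counter is at 2"). A minimal sketch, in plain Python, of pulling the missing config path out of that fatal `run.go` error line when triaging such a loop:

```python
import re

# kubelet's fatal "command failed" error names the config file it could not
# load; extracting the path makes it easy to confirm the file is absent.
CONFIG_ERR = re.compile(r"failed to load kubelet config file, path: (?P<path>\S+),")

def missing_kubelet_config(line):
    """Return the config path from a kubelet 'command failed' line, or None."""
    m = CONFIG_ERR.search(line)
    return m.group("path") if m else None
```

Once the file exists the loop resolves on its own, which matches the later entries where kubelet (pid 1818) starts and proceeds past config loading.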
Dec 13 01:57:35.077869 systemd[1]: Reloading. Dec 13 01:57:35.136049 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2024-12-13T01:57:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:35.136083 /usr/lib/systemd/system-generators/torcx-generator[1757]: time="2024-12-13T01:57:35Z" level=info msg="torcx already run" Dec 13 01:57:35.350495 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:35.350511 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:57:35.371669 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:35.441820 systemd[1]: Started kubelet.service. Dec 13 01:57:35.443308 systemd[1]: Stopping kubelet.service... Dec 13 01:57:35.443605 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:57:35.443827 systemd[1]: Stopped kubelet.service. Dec 13 01:57:35.445175 systemd[1]: Starting kubelet.service... Dec 13 01:57:35.517293 systemd[1]: Started kubelet.service. Dec 13 01:57:35.552224 kubelet[1818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:35.552224 kubelet[1818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 01:57:35.552224 kubelet[1818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:57:35.552613 kubelet[1818]: I1213 01:57:35.552276 1818 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:57:35.774268 kubelet[1818]: I1213 01:57:35.774174 1818 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:57:35.774268 kubelet[1818]: I1213 01:57:35.774208 1818 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:57:35.774455 kubelet[1818]: I1213 01:57:35.774424 1818 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:57:35.790827 kubelet[1818]: I1213 01:57:35.790790 1818 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:57:35.791086 kubelet[1818]: E1213 01:57:35.791067 1818 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.799576 kubelet[1818]: I1213 01:57:35.799540 1818 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:57:35.800708 kubelet[1818]: I1213 01:57:35.800690 1818 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:57:35.800875 kubelet[1818]: I1213 01:57:35.800861 1818 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:57:35.801222 kubelet[1818]: I1213 01:57:35.801207 1818 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:57:35.801250 kubelet[1818]: I1213 01:57:35.801223 1818 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:57:35.801330 kubelet[1818]: 
I1213 01:57:35.801317 1818 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:57:35.801398 kubelet[1818]: I1213 01:57:35.801387 1818 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:57:35.801423 kubelet[1818]: I1213 01:57:35.801401 1818 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:57:35.801423 kubelet[1818]: I1213 01:57:35.801420 1818 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:57:35.801463 kubelet[1818]: I1213 01:57:35.801431 1818 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:57:35.802247 kubelet[1818]: W1213 01:57:35.802156 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.802247 kubelet[1818]: W1213 01:57:35.802158 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.802247 kubelet[1818]: E1213 01:57:35.802196 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.802247 kubelet[1818]: E1213 01:57:35.802203 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.802381 kubelet[1818]: I1213 01:57:35.802282 1818 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:57:35.807776 kubelet[1818]: I1213 01:57:35.807754 1818 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:57:35.808578 kubelet[1818]: W1213 01:57:35.808560 1818 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:57:35.809005 kubelet[1818]: I1213 01:57:35.808984 1818 server.go:1256] "Started kubelet" Dec 13 01:57:35.809091 kubelet[1818]: I1213 01:57:35.809025 1818 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:57:35.809143 kubelet[1818]: I1213 01:57:35.809113 1818 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:57:35.809367 kubelet[1818]: I1213 01:57:35.809347 1818 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:57:35.815200 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
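The podresources line above ("Setting rate limiting for endpoint" with qps=100 burstTokens=10) describes a token-bucket limiter: up to 10 requests are admitted back-to-back, after which admission is sustained at 100 requests per second. A minimal token-bucket sketch in Python, as an illustration of the scheme only (kubelet's actual implementation is Go's rate limiter, not this code):

```python
class TokenBucket:
    """Token bucket holding up to `burst` tokens, refilled at `qps` tokens/second."""

    def __init__(self, qps, burst, now=0.0):
        self.qps = float(qps)
        self.burst = float(burst)
        self.tokens = float(burst)  # bucket starts full, so bursts pass immediately
        self.last = now

    def allow(self, now):
        """Admit one request at time `now` if a token is available."""
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With qps=100, a drained bucket earns one token back every 10ms, matching the steady-state rate the log line declares.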
Dec 13 01:57:35.815398 kubelet[1818]: I1213 01:57:35.815316 1818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:57:35.815898 kubelet[1818]: I1213 01:57:35.815879 1818 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:57:35.817152 kubelet[1818]: I1213 01:57:35.817126 1818 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:57:35.817401 kubelet[1818]: E1213 01:57:35.817389 1818 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099e2c6769df2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:57:35.808962034 +0000 UTC m=+0.288101905,LastTimestamp:2024-12-13 01:57:35.808962034 +0000 UTC m=+0.288101905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:57:35.817776 kubelet[1818]: I1213 01:57:35.817752 1818 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:57:35.817834 kubelet[1818]: I1213 01:57:35.817813 1818 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:57:35.818344 kubelet[1818]: W1213 01:57:35.818224 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.818344 kubelet[1818]: E1213 01:57:35.818262 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.818344 kubelet[1818]: E1213 01:57:35.818311 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Dec 13 01:57:35.818451 kubelet[1818]: E1213 01:57:35.818419 1818 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:57:35.818619 kubelet[1818]: I1213 01:57:35.818600 1818 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:57:35.818681 kubelet[1818]: I1213 01:57:35.818669 1818 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:57:35.819874 kubelet[1818]: I1213 01:57:35.819852 1818 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:57:35.828604 kubelet[1818]: I1213 01:57:35.828590 1818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:57:35.829438 kubelet[1818]: I1213 01:57:35.829400 1818 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:57:35.829438 kubelet[1818]: I1213 01:57:35.829426 1818 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:57:35.829438 kubelet[1818]: I1213 01:57:35.829443 1818 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:57:35.829603 kubelet[1818]: E1213 01:57:35.829480 1818 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:57:35.834801 kubelet[1818]: W1213 01:57:35.834760 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.834801 kubelet[1818]: E1213 01:57:35.834803 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:35.835192 kubelet[1818]: I1213 01:57:35.835174 1818 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:57:35.835192 kubelet[1818]: I1213 01:57:35.835187 1818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:57:35.835281 kubelet[1818]: I1213 01:57:35.835199 1818 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:57:35.918591 kubelet[1818]: I1213 01:57:35.918566 1818 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:57:35.918929 kubelet[1818]: E1213 01:57:35.918900 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:57:35.930065 kubelet[1818]: E1213 01:57:35.930043 1818 kubelet.go:2353] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:57:36.019597 kubelet[1818]: E1213 01:57:36.019576 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Dec 13 01:57:36.120839 kubelet[1818]: I1213 01:57:36.120755 1818 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:57:36.121038 kubelet[1818]: E1213 01:57:36.121026 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:57:36.131152 kubelet[1818]: E1213 01:57:36.131126 1818 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:57:36.150509 kubelet[1818]: I1213 01:57:36.150480 1818 policy_none.go:49] "None policy: Start" Dec 13 01:57:36.151035 kubelet[1818]: I1213 01:57:36.151020 1818 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:57:36.151102 kubelet[1818]: I1213 01:57:36.151054 1818 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:57:36.156821 kubelet[1818]: I1213 01:57:36.156801 1818 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:57:36.157000 kubelet[1818]: I1213 01:57:36.156982 1818 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:57:36.158202 kubelet[1818]: E1213 01:57:36.158187 1818 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:57:36.420702 kubelet[1818]: E1213 01:57:36.420609 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Dec 13 01:57:36.522822 kubelet[1818]: I1213 01:57:36.522783 1818 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:57:36.523066 kubelet[1818]: E1213 01:57:36.523051 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Dec 13 01:57:36.532193 kubelet[1818]: I1213 01:57:36.532170 1818 topology_manager.go:215] "Topology Admit Handler" podUID="0d38e2a127342d5641584882efbb35d2" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:57:36.532795 kubelet[1818]: I1213 01:57:36.532779 1818 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:57:36.533458 kubelet[1818]: I1213 01:57:36.533445 1818 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:57:36.562192 kubelet[1818]: E1213 01:57:36.562154 1818 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099e2c6769df2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:57:35.808962034 +0000 UTC m=+0.288101905,LastTimestamp:2024-12-13 01:57:35.808962034 +0000 UTC 
m=+0.288101905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:57:36.621483 kubelet[1818]: I1213 01:57:36.621431 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:57:36.621483 kubelet[1818]: I1213 01:57:36.621476 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:57:36.621605 kubelet[1818]: I1213 01:57:36.621513 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:57:36.621605 kubelet[1818]: I1213 01:57:36.621562 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:57:36.621605 kubelet[1818]: I1213 01:57:36.621597 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:57:36.621712 kubelet[1818]: I1213 01:57:36.621624 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:57:36.621712 kubelet[1818]: I1213 01:57:36.621649 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:57:36.621712 kubelet[1818]: I1213 01:57:36.621677 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:57:36.621712 kubelet[1818]: I1213 01:57:36.621703 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:57:36.703889 kubelet[1818]: W1213 01:57:36.703798 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:36.703889 kubelet[1818]: E1213 01:57:36.703838 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused Dec 13 01:57:36.835411 kubelet[1818]: E1213 01:57:36.835387 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:36.835850 env[1313]: time="2024-12-13T01:57:36.835805031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d38e2a127342d5641584882efbb35d2,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:36.836204 kubelet[1818]: E1213 01:57:36.836162 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:36.836415 env[1313]: time="2024-12-13T01:57:36.836394016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:36.837579 kubelet[1818]: E1213 01:57:36.837560 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:36.837818 env[1313]: time="2024-12-13T01:57:36.837789082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:37.082256 kubelet[1818]: W1213 01:57:37.082132 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list 
*v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.082256 kubelet[1818]: E1213 01:57:37.082193 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.221310 kubelet[1818]: E1213 01:57:37.221274 1818 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="1.6s"
Dec 13 01:57:37.324874 kubelet[1818]: I1213 01:57:37.324833 1818 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:37.325125 kubelet[1818]: E1213 01:57:37.325099 1818 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost"
Dec 13 01:57:37.356927 kubelet[1818]: W1213 01:57:37.356794 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.356927 kubelet[1818]: E1213 01:57:37.356832 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.405369 kubelet[1818]: W1213 01:57:37.405324 1818 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.405369 kubelet[1818]: E1213 01:57:37.405361 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.34:6443: connect: connection refused
Dec 13 01:57:37.411488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3874073442.mount: Deactivated successfully.
Dec 13 01:57:37.420720 env[1313]: time="2024-12-13T01:57:37.420682735Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.423854 env[1313]: time="2024-12-13T01:57:37.423821392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.424774 env[1313]: time="2024-12-13T01:57:37.424739263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.426552 env[1313]: time="2024-12-13T01:57:37.426502480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.428507 env[1313]: time="2024-12-13T01:57:37.428472755Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.430081 env[1313]: time="2024-12-13T01:57:37.430052227Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.431434 env[1313]: time="2024-12-13T01:57:37.431406216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.432766 env[1313]: time="2024-12-13T01:57:37.432743614Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.436118 env[1313]: time="2024-12-13T01:57:37.436054263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.437378 env[1313]: time="2024-12-13T01:57:37.437345143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.439032 env[1313]: time="2024-12-13T01:57:37.439001329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.441185 env[1313]: time="2024-12-13T01:57:37.441144889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:37.457443 env[1313]: time="2024-12-13T01:57:37.457375798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:37.457443 env[1313]: time="2024-12-13T01:57:37.457418648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:37.457443 env[1313]: time="2024-12-13T01:57:37.457431372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:37.457762 env[1313]: time="2024-12-13T01:57:37.457688625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea1774262c9b76ff1008a577346d6243242e0aeea03b8f6644ada046d40261a3 pid=1858 runtime=io.containerd.runc.v2
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.482524922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.482605713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.482640328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.482905876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99f22c44918d07de955e4070c00a497b78ba292c0158caae5cd86bcf4d85a4f0 pid=1890 runtime=io.containerd.runc.v2
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.485703282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.485756342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:57:37.485826 env[1313]: time="2024-12-13T01:57:37.485770168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:57:37.486990 env[1313]: time="2024-12-13T01:57:37.485933845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/647566b7868496280c01b5da415f6a4ea137fa0d0a3232cfe167f5417ab68df2 pid=1909 runtime=io.containerd.runc.v2
Dec 13 01:57:37.526138 env[1313]: time="2024-12-13T01:57:37.526069934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1774262c9b76ff1008a577346d6243242e0aeea03b8f6644ada046d40261a3\""
Dec 13 01:57:37.527689 kubelet[1818]: E1213 01:57:37.527657 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:37.532487 env[1313]: time="2024-12-13T01:57:37.532454347Z" level=info msg="CreateContainer within sandbox \"ea1774262c9b76ff1008a577346d6243242e0aeea03b8f6644ada046d40261a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:57:37.533937 env[1313]: time="2024-12-13T01:57:37.533908965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0d38e2a127342d5641584882efbb35d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"99f22c44918d07de955e4070c00a497b78ba292c0158caae5cd86bcf4d85a4f0\""
Dec 13 01:57:37.534693 kubelet[1818]: E1213 01:57:37.534673 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:37.538001 env[1313]: time="2024-12-13T01:57:37.537960714Z" level=info msg="CreateContainer within sandbox \"99f22c44918d07de955e4070c00a497b78ba292c0158caae5cd86bcf4d85a4f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:57:37.544032 env[1313]: time="2024-12-13T01:57:37.543972989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"647566b7868496280c01b5da415f6a4ea137fa0d0a3232cfe167f5417ab68df2\""
Dec 13 01:57:37.544533 kubelet[1818]: E1213 01:57:37.544509 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:37.546227 env[1313]: time="2024-12-13T01:57:37.546197661Z" level=info msg="CreateContainer within sandbox \"647566b7868496280c01b5da415f6a4ea137fa0d0a3232cfe167f5417ab68df2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:57:37.560239 env[1313]: time="2024-12-13T01:57:37.560193758Z" level=info msg="CreateContainer within sandbox \"ea1774262c9b76ff1008a577346d6243242e0aeea03b8f6644ada046d40261a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0a3f02f0e0e3d76d908fe4410bc4f20078a56d7a03aad1edc8fe7acca9597509\""
Dec 13 01:57:37.560712 env[1313]: time="2024-12-13T01:57:37.560677636Z" level=info msg="StartContainer for \"0a3f02f0e0e3d76d908fe4410bc4f20078a56d7a03aad1edc8fe7acca9597509\""
Dec 13 01:57:37.566058 env[1313]: time="2024-12-13T01:57:37.566013783Z" level=info msg="CreateContainer within sandbox \"99f22c44918d07de955e4070c00a497b78ba292c0158caae5cd86bcf4d85a4f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9dcad5a6ddc5685b62a3b4f306a6553d8729eac639c18f18bc294e4a3b8dc2f3\""
Dec 13 01:57:37.566514 env[1313]: time="2024-12-13T01:57:37.566477964Z" level=info msg="StartContainer for \"9dcad5a6ddc5685b62a3b4f306a6553d8729eac639c18f18bc294e4a3b8dc2f3\""
Dec 13 01:57:37.571175 env[1313]: time="2024-12-13T01:57:37.571128956Z" level=info msg="CreateContainer within sandbox \"647566b7868496280c01b5da415f6a4ea137fa0d0a3232cfe167f5417ab68df2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d674d5ce2f6f71e762b4661c0deb7dfd6c10bc55f100466ea513b5a2ace14012\""
Dec 13 01:57:37.571913 env[1313]: time="2024-12-13T01:57:37.571879012Z" level=info msg="StartContainer for \"d674d5ce2f6f71e762b4661c0deb7dfd6c10bc55f100466ea513b5a2ace14012\""
Dec 13 01:57:37.628574 env[1313]: time="2024-12-13T01:57:37.628443371Z" level=info msg="StartContainer for \"0a3f02f0e0e3d76d908fe4410bc4f20078a56d7a03aad1edc8fe7acca9597509\" returns successfully"
Dec 13 01:57:37.645973 env[1313]: time="2024-12-13T01:57:37.645690736Z" level=info msg="StartContainer for \"9dcad5a6ddc5685b62a3b4f306a6553d8729eac639c18f18bc294e4a3b8dc2f3\" returns successfully"
Dec 13 01:57:37.661797 env[1313]: time="2024-12-13T01:57:37.661754561Z" level=info msg="StartContainer for \"d674d5ce2f6f71e762b4661c0deb7dfd6c10bc55f100466ea513b5a2ace14012\" returns successfully"
Dec 13 01:57:37.839634 kubelet[1818]: E1213 01:57:37.839603 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:37.841558 kubelet[1818]: E1213 01:57:37.841534 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:37.843245 kubelet[1818]: E1213 01:57:37.843224 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:38.824815 kubelet[1818]: E1213 01:57:38.824780 1818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:57:38.843494 kubelet[1818]: E1213 01:57:38.843466 1818 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:57:38.844606 kubelet[1818]: E1213 01:57:38.844589 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:38.844665 kubelet[1818]: E1213 01:57:38.844593 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:38.926734 kubelet[1818]: I1213 01:57:38.926709 1818 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:38.933880 kubelet[1818]: I1213 01:57:38.933844 1818 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:57:38.948055 kubelet[1818]: E1213 01:57:38.948006 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:39.049174 kubelet[1818]: E1213 01:57:39.049135 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:39.149722 kubelet[1818]: E1213 01:57:39.149640 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:39.250379 kubelet[1818]: E1213 01:57:39.250354 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:39.351230 kubelet[1818]: E1213 01:57:39.351193 1818 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:57:39.580954 kubelet[1818]: E1213 01:57:39.580845 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:39.803934 kubelet[1818]: I1213 01:57:39.803894 1818 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:39.818829 kubelet[1818]: I1213 01:57:39.818798 1818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:57:39.844628 kubelet[1818]: E1213 01:57:39.844525 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:39.850013 kubelet[1818]: E1213 01:57:39.849987 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:40.846227 kubelet[1818]: E1213 01:57:40.846198 1818 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:41.104209 systemd[1]: Reloading.
Dec 13 01:57:41.162425 /usr/lib/systemd/system-generators/torcx-generator[2117]: time="2024-12-13T01:57:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 01:57:41.162448 /usr/lib/systemd/system-generators/torcx-generator[2117]: time="2024-12-13T01:57:41Z" level=info msg="torcx already run"
Dec 13 01:57:41.226142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 01:57:41.226157 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 01:57:41.244769 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:57:41.315449 systemd[1]: Stopping kubelet.service...
Dec 13 01:57:41.331854 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:57:41.332094 systemd[1]: Stopped kubelet.service.
Dec 13 01:57:41.333500 systemd[1]: Starting kubelet.service...
Dec 13 01:57:41.412662 systemd[1]: Started kubelet.service.
Dec 13 01:57:41.464557 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:41.464557 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:57:41.464557 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:57:41.464970 kubelet[2173]: I1213 01:57:41.464600 2173 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:57:41.469401 kubelet[2173]: I1213 01:57:41.469375 2173 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:57:41.469401 kubelet[2173]: I1213 01:57:41.469401 2173 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:57:41.469636 kubelet[2173]: I1213 01:57:41.469626 2173 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:57:41.470880 kubelet[2173]: I1213 01:57:41.470865 2173 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:57:41.472588 kubelet[2173]: I1213 01:57:41.472533 2173 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:57:41.484095 kubelet[2173]: I1213 01:57:41.484068 2173 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:57:41.485302 kubelet[2173]: I1213 01:57:41.485275 2173 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:57:41.485679 kubelet[2173]: I1213 01:57:41.485658 2173 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:57:41.485807 kubelet[2173]: I1213 01:57:41.485694 2173 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:57:41.485807 kubelet[2173]: I1213 01:57:41.485711 2173 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:57:41.485807 kubelet[2173]: I1213 01:57:41.485750 2173 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:41.485910 kubelet[2173]: I1213 01:57:41.485842 2173 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:57:41.485910 kubelet[2173]: I1213 01:57:41.485870 2173 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:57:41.485910 kubelet[2173]: I1213 01:57:41.485905 2173 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:57:41.485992 kubelet[2173]: I1213 01:57:41.485919 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:57:41.487698 kubelet[2173]: I1213 01:57:41.487680 2173 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 01:57:41.488048 kubelet[2173]: I1213 01:57:41.488034 2173 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:57:41.488875 kubelet[2173]: I1213 01:57:41.488857 2173 server.go:1256] "Started kubelet"
Dec 13 01:57:41.489346 kubelet[2173]: I1213 01:57:41.489331 2173 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:57:41.489672 kubelet[2173]: I1213 01:57:41.489657 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:57:41.490048 kubelet[2173]: I1213 01:57:41.490032 2173 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:57:41.492324 kubelet[2173]: I1213 01:57:41.492305 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:57:41.494256 kubelet[2173]: I1213 01:57:41.494241 2173 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:57:41.494381 kubelet[2173]: I1213 01:57:41.494347 2173 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:57:41.494933 kubelet[2173]: I1213 01:57:41.494573 2173 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:57:41.495771 kubelet[2173]: I1213 01:57:41.495748 2173 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:57:41.496161 kubelet[2173]: I1213 01:57:41.496146 2173 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:57:41.504807 kubelet[2173]: I1213 01:57:41.500817 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:57:41.504807 kubelet[2173]: I1213 01:57:41.503340 2173 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:57:41.507391 kubelet[2173]: E1213 01:57:41.507367 2173 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:57:41.519096 kubelet[2173]: I1213 01:57:41.519054 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:57:41.520286 kubelet[2173]: I1213 01:57:41.520239 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:57:41.520370 kubelet[2173]: I1213 01:57:41.520304 2173 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:57:41.520370 kubelet[2173]: I1213 01:57:41.520330 2173 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:57:41.520487 kubelet[2173]: E1213 01:57:41.520398 2173 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:57:41.551611 kubelet[2173]: I1213 01:57:41.551348 2173 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:57:41.551611 kubelet[2173]: I1213 01:57:41.551378 2173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:57:41.551611 kubelet[2173]: I1213 01:57:41.551396 2173 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:57:41.551611 kubelet[2173]: I1213 01:57:41.551635 2173 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:57:41.551854 kubelet[2173]: I1213 01:57:41.551663 2173 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:57:41.551854 kubelet[2173]: I1213 01:57:41.551672 2173 policy_none.go:49] "None policy: Start"
Dec 13 01:57:41.552236 kubelet[2173]: I1213 01:57:41.552217 2173 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:57:41.552311 kubelet[2173]: I1213 01:57:41.552242 2173 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:57:41.552427 kubelet[2173]: I1213 01:57:41.552411 2173 state_mem.go:75] "Updated machine memory state"
Dec 13 01:57:41.553567 kubelet[2173]: I1213 01:57:41.553527 2173 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:57:41.553883 kubelet[2173]: I1213 01:57:41.553845 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:57:41.597257 kubelet[2173]: I1213 01:57:41.597213 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:57:41.620675 kubelet[2173]: I1213 01:57:41.620640 2173 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:57:41.620774 kubelet[2173]: I1213 01:57:41.620720 2173 topology_manager.go:215] "Topology Admit Handler" podUID="0d38e2a127342d5641584882efbb35d2" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:57:41.620774 kubelet[2173]: I1213 01:57:41.620754 2173 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:57:41.666058 kubelet[2173]: E1213 01:57:41.665948 2173 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 01:57:41.666284 kubelet[2173]: E1213 01:57:41.666262 2173 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:41.667591 kubelet[2173]: I1213 01:57:41.667074 2173 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:57:41.667591 kubelet[2173]: I1213 01:57:41.667181 2173 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:57:41.695394 kubelet[2173]: I1213 01:57:41.695358 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:41.695394 kubelet[2173]: I1213 01:57:41.695390 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:41.695576 kubelet[2173]: I1213 01:57:41.695407 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:41.695576 kubelet[2173]: I1213 01:57:41.695424 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:41.695576 kubelet[2173]: I1213 01:57:41.695440 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:41.695576 kubelet[2173]: I1213 01:57:41.695457 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:57:41.695576 kubelet[2173]: I1213 01:57:41.695475 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:41.695688 kubelet[2173]: I1213 01:57:41.695493 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:57:41.695688 kubelet[2173]: I1213 01:57:41.695509 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d38e2a127342d5641584882efbb35d2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0d38e2a127342d5641584882efbb35d2\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:57:41.965303 kubelet[2173]: E1213 01:57:41.965199 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:41.966678 kubelet[2173]: E1213 01:57:41.966656 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:41.966881 kubelet[2173]: E1213 01:57:41.966862 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:42.309579 sudo[1434]: pam_unix(sudo:session): session closed for user root
Dec 13 01:57:42.310888 sshd[1429]: pam_unix(sshd:session): session closed for user core
Dec 13 01:57:42.312862 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:34118.service: Deactivated successfully.
Dec 13 01:57:42.313689 systemd-logind[1294]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:57:42.313736 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:57:42.314525 systemd-logind[1294]: Removed session 5.
Dec 13 01:57:42.487457 kubelet[2173]: I1213 01:57:42.487413 2173 apiserver.go:52] "Watching apiserver"
Dec 13 01:57:42.494897 kubelet[2173]: I1213 01:57:42.494852 2173 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:57:42.533486 kubelet[2173]: E1213 01:57:42.533444 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:42.533753 kubelet[2173]: E1213 01:57:42.533734 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:42.539762 kubelet[2173]: E1213 01:57:42.539713 2173 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 13 01:57:42.540013 kubelet[2173]: E1213 01:57:42.539991 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:42.577705 kubelet[2173]: I1213 01:57:42.576634 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.5765656249999997 podStartE2EDuration="3.576565625s" podCreationTimestamp="2024-12-13 01:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:42.552300439 +0000 UTC m=+1.135248535" watchObservedRunningTime="2024-12-13 01:57:42.576565625 +0000 UTC m=+1.159513721"
Dec 13 01:57:42.583479 kubelet[2173]: I1213 01:57:42.583452 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.583403839 podStartE2EDuration="1.583403839s" podCreationTimestamp="2024-12-13 01:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:42.576524567 +0000 UTC m=+1.159472663" watchObservedRunningTime="2024-12-13 01:57:42.583403839 +0000 UTC m=+1.166351925"
Dec 13 01:57:42.591429 kubelet[2173]: I1213 01:57:42.591396 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.591357556 podStartE2EDuration="3.591357556s" podCreationTimestamp="2024-12-13 01:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:42.583557424 +0000 UTC m=+1.166505520" watchObservedRunningTime="2024-12-13 01:57:42.591357556 +0000 UTC m=+1.174305652"
Dec 13 01:57:43.534128 kubelet[2173]: E1213 01:57:43.534099 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:43.534583 kubelet[2173]: E1213 01:57:43.534234 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:45.056527 kubelet[2173]: E1213 01:57:45.056497 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:45.627284 kubelet[2173]: E1213 01:57:45.627233 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:46.949275 kubelet[2173]: E1213 01:57:46.949233 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:51.532792 update_engine[1298]: I1213 01:57:51.532742 1298 update_attempter.cc:509] Updating boot flags...
Dec 13 01:57:53.794442 kubelet[2173]: I1213 01:57:53.794414 2173 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:57:53.794834 env[1313]: time="2024-12-13T01:57:53.794704774Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:57:53.795000 kubelet[2173]: I1213 01:57:53.794858 2173 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:57:54.768520 kubelet[2173]: I1213 01:57:54.768486 2173 topology_manager.go:215] "Topology Admit Handler" podUID="fdaeb81a-7aa0-4684-91d1-84f1fb32c673" podNamespace="kube-system" podName="kube-proxy-8knbd"
Dec 13 01:57:54.771598 kubelet[2173]: I1213 01:57:54.771579 2173 topology_manager.go:215] "Topology Admit Handler" podUID="31a1728b-624c-496c-9b88-e9e137c927ba" podNamespace="kube-flannel" podName="kube-flannel-ds-x22g2"
Dec 13 01:57:54.785610 kubelet[2173]: I1213 01:57:54.785535 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp6r5\" (UniqueName: \"kubernetes.io/projected/fdaeb81a-7aa0-4684-91d1-84f1fb32c673-kube-api-access-rp6r5\") pod \"kube-proxy-8knbd\" (UID: \"fdaeb81a-7aa0-4684-91d1-84f1fb32c673\") " pod="kube-system/kube-proxy-8knbd"
Dec 13 01:57:54.785610 kubelet[2173]: I1213 01:57:54.785613 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a1728b-624c-496c-9b88-e9e137c927ba-xtables-lock\") pod \"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2"
Dec 13 01:57:54.785805 kubelet[2173]: I1213 01:57:54.785649 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdaeb81a-7aa0-4684-91d1-84f1fb32c673-lib-modules\") pod \"kube-proxy-8knbd\" (UID: \"fdaeb81a-7aa0-4684-91d1-84f1fb32c673\") " pod="kube-system/kube-proxy-8knbd"
Dec 13 01:57:54.785805 kubelet[2173]: I1213 01:57:54.785677 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/31a1728b-624c-496c-9b88-e9e137c927ba-cni\") pod \"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2"
Dec 13 01:57:54.785805 kubelet[2173]: I1213 01:57:54.785707 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9np\" (UniqueName: \"kubernetes.io/projected/31a1728b-624c-496c-9b88-e9e137c927ba-kube-api-access-fd9np\") pod \"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2"
Dec 13 01:57:54.785805 kubelet[2173]: I1213 01:57:54.785732 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdaeb81a-7aa0-4684-91d1-84f1fb32c673-kube-proxy\") pod \"kube-proxy-8knbd\" (UID: \"fdaeb81a-7aa0-4684-91d1-84f1fb32c673\") " pod="kube-system/kube-proxy-8knbd"
Dec 13 01:57:54.785805 kubelet[2173]: I1213 01:57:54.785757 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/31a1728b-624c-496c-9b88-e9e137c927ba-run\") pod
\"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2" Dec 13 01:57:54.785916 kubelet[2173]: I1213 01:57:54.785824 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/31a1728b-624c-496c-9b88-e9e137c927ba-cni-plugin\") pod \"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2" Dec 13 01:57:54.785916 kubelet[2173]: I1213 01:57:54.785858 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdaeb81a-7aa0-4684-91d1-84f1fb32c673-xtables-lock\") pod \"kube-proxy-8knbd\" (UID: \"fdaeb81a-7aa0-4684-91d1-84f1fb32c673\") " pod="kube-system/kube-proxy-8knbd" Dec 13 01:57:54.785916 kubelet[2173]: I1213 01:57:54.785877 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/31a1728b-624c-496c-9b88-e9e137c927ba-flannel-cfg\") pod \"kube-flannel-ds-x22g2\" (UID: \"31a1728b-624c-496c-9b88-e9e137c927ba\") " pod="kube-flannel/kube-flannel-ds-x22g2" Dec 13 01:57:55.060461 kubelet[2173]: E1213 01:57:55.060371 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.072118 kubelet[2173]: E1213 01:57:55.072068 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.072796 env[1313]: time="2024-12-13T01:57:55.072747731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8knbd,Uid:fdaeb81a-7aa0-4684-91d1-84f1fb32c673,Namespace:kube-system,Attempt:0,}" Dec 13 01:57:55.077207 
kubelet[2173]: E1213 01:57:55.077144 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.077660 env[1313]: time="2024-12-13T01:57:55.077618831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-x22g2,Uid:31a1728b-624c-496c-9b88-e9e137c927ba,Namespace:kube-flannel,Attempt:0,}" Dec 13 01:57:55.095009 env[1313]: time="2024-12-13T01:57:55.094771244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:55.095009 env[1313]: time="2024-12-13T01:57:55.094825727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:55.095009 env[1313]: time="2024-12-13T01:57:55.094838562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:55.095234 env[1313]: time="2024-12-13T01:57:55.095051987Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92a10b11d78041f38ec309e418ee226467ee1f1d556ddbab98204a757e4eac62 pid=2264 runtime=io.containerd.runc.v2 Dec 13 01:57:55.106581 env[1313]: time="2024-12-13T01:57:55.106467729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:57:55.106581 env[1313]: time="2024-12-13T01:57:55.106560063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:57:55.106738 env[1313]: time="2024-12-13T01:57:55.106597364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:57:55.107012 env[1313]: time="2024-12-13T01:57:55.106974309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f pid=2289 runtime=io.containerd.runc.v2 Dec 13 01:57:55.139349 env[1313]: time="2024-12-13T01:57:55.139296337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8knbd,Uid:fdaeb81a-7aa0-4684-91d1-84f1fb32c673,Namespace:kube-system,Attempt:0,} returns sandbox id \"92a10b11d78041f38ec309e418ee226467ee1f1d556ddbab98204a757e4eac62\"" Dec 13 01:57:55.139899 kubelet[2173]: E1213 01:57:55.139877 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.141939 env[1313]: time="2024-12-13T01:57:55.141896113Z" level=info msg="CreateContainer within sandbox \"92a10b11d78041f38ec309e418ee226467ee1f1d556ddbab98204a757e4eac62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:57:55.155985 env[1313]: time="2024-12-13T01:57:55.155162092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-x22g2,Uid:31a1728b-624c-496c-9b88-e9e137c927ba,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\"" Dec 13 01:57:55.156133 kubelet[2173]: E1213 01:57:55.155864 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.156949 env[1313]: time="2024-12-13T01:57:55.156920183Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 13 01:57:55.164776 env[1313]: time="2024-12-13T01:57:55.164730895Z" level=info msg="CreateContainer within sandbox 
\"92a10b11d78041f38ec309e418ee226467ee1f1d556ddbab98204a757e4eac62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a93813ebd27f5be9b16986ada7f9b45e60afc0fcdb88efd6133bd8c62b1f1ae\"" Dec 13 01:57:55.165260 env[1313]: time="2024-12-13T01:57:55.165221384Z" level=info msg="StartContainer for \"5a93813ebd27f5be9b16986ada7f9b45e60afc0fcdb88efd6133bd8c62b1f1ae\"" Dec 13 01:57:55.206980 env[1313]: time="2024-12-13T01:57:55.206926123Z" level=info msg="StartContainer for \"5a93813ebd27f5be9b16986ada7f9b45e60afc0fcdb88efd6133bd8c62b1f1ae\" returns successfully" Dec 13 01:57:55.551879 kubelet[2173]: E1213 01:57:55.551831 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.632255 kubelet[2173]: E1213 01:57:55.632226 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:55.641265 kubelet[2173]: I1213 01:57:55.640310 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8knbd" podStartSLOduration=1.640266021 podStartE2EDuration="1.640266021s" podCreationTimestamp="2024-12-13 01:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:57:55.560105149 +0000 UTC m=+14.143053265" watchObservedRunningTime="2024-12-13 01:57:55.640266021 +0000 UTC m=+14.223214117" Dec 13 01:57:56.956351 kubelet[2173]: E1213 01:57:56.954392 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:57:57.005348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836839716.mount: Deactivated successfully. 
Dec 13 01:57:57.062564 env[1313]: time="2024-12-13T01:57:57.062495501Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:57.064639 env[1313]: time="2024-12-13T01:57:57.064591858Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:57.066648 env[1313]: time="2024-12-13T01:57:57.066608645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:57.068067 env[1313]: time="2024-12-13T01:57:57.068029885Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:57:57.068590 env[1313]: time="2024-12-13T01:57:57.068535462Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Dec 13 01:57:57.070140 env[1313]: time="2024-12-13T01:57:57.070105733Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Dec 13 01:57:57.083259 env[1313]: time="2024-12-13T01:57:57.083219152Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345\""
Dec 13 01:57:57.083707 env[1313]: time="2024-12-13T01:57:57.083676958Z" level=info msg="StartContainer for \"5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345\""
Dec 13 01:57:57.282906 env[1313]: time="2024-12-13T01:57:57.282757705Z" level=info msg="StartContainer for \"5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345\" returns successfully"
Dec 13 01:57:57.555722 kubelet[2173]: E1213 01:57:57.555516 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:57.555997 kubelet[2173]: E1213 01:57:57.555764 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:57.769285 env[1313]: time="2024-12-13T01:57:57.769239339Z" level=info msg="shim disconnected" id=5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345
Dec 13 01:57:57.769285 env[1313]: time="2024-12-13T01:57:57.769281900Z" level=warning msg="cleaning up after shim disconnected" id=5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345 namespace=k8s.io
Dec 13 01:57:57.769285 env[1313]: time="2024-12-13T01:57:57.769291918Z" level=info msg="cleaning up dead shim"
Dec 13 01:57:57.775059 env[1313]: time="2024-12-13T01:57:57.775006353Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:57:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2541 runtime=io.containerd.runc.v2\n"
Dec 13 01:57:57.917795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5648d03e47227133b627e3f14eba5952de23e62bd789daf65b0cabdb7a080345-rootfs.mount: Deactivated successfully.
Dec 13 01:57:58.557716 kubelet[2173]: E1213 01:57:58.557690 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:57:58.559046 env[1313]: time="2024-12-13T01:57:58.558956049Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Dec 13 01:58:00.375895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421987290.mount: Deactivated successfully.
Dec 13 01:58:02.668140 env[1313]: time="2024-12-13T01:58:02.668065987Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:02.703365 env[1313]: time="2024-12-13T01:58:02.703303509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:02.736432 env[1313]: time="2024-12-13T01:58:02.736368741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:02.754490 env[1313]: time="2024-12-13T01:58:02.754437592Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 01:58:02.755141 env[1313]: time="2024-12-13T01:58:02.755059917Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Dec 13 01:58:02.756783 env[1313]: time="2024-12-13T01:58:02.756752391Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:58:02.879034 env[1313]: time="2024-12-13T01:58:02.878931255Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81\""
Dec 13 01:58:02.879581 env[1313]: time="2024-12-13T01:58:02.879531108Z" level=info msg="StartContainer for \"06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81\""
Dec 13 01:58:02.988456 env[1313]: time="2024-12-13T01:58:02.988304212Z" level=info msg="StartContainer for \"06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81\" returns successfully"
Dec 13 01:58:03.003459 kubelet[2173]: I1213 01:58:03.003216 2173 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:58:03.567132 kubelet[2173]: E1213 01:58:03.567056 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:03.586643 kubelet[2173]: I1213 01:58:03.586588 2173 topology_manager.go:215] "Topology Admit Handler" podUID="467bda58-d956-4a7d-a7cc-44507a2dbca2" podNamespace="kube-system" podName="coredns-76f75df574-5846h"
Dec 13 01:58:03.647806 kubelet[2173]: I1213 01:58:03.647747 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/467bda58-d956-4a7d-a7cc-44507a2dbca2-config-volume\") pod \"coredns-76f75df574-5846h\" (UID: \"467bda58-d956-4a7d-a7cc-44507a2dbca2\") " pod="kube-system/coredns-76f75df574-5846h"
Dec 13 01:58:03.648000 kubelet[2173]: I1213 01:58:03.647837 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p5cw\" (UniqueName: \"kubernetes.io/projected/467bda58-d956-4a7d-a7cc-44507a2dbca2-kube-api-access-2p5cw\") pod \"coredns-76f75df574-5846h\" (UID: \"467bda58-d956-4a7d-a7cc-44507a2dbca2\") " pod="kube-system/coredns-76f75df574-5846h"
Dec 13 01:58:03.763242 kubelet[2173]: I1213 01:58:03.763045 2173 topology_manager.go:215] "Topology Admit Handler" podUID="4d7097ad-9eb2-43dc-a737-0fa5798f74fe" podNamespace="kube-system" podName="coredns-76f75df574-ckthv"
Dec 13 01:58:03.769458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81-rootfs.mount: Deactivated successfully.
Dec 13 01:58:03.966105 env[1313]: time="2024-12-13T01:58:03.966047302Z" level=info msg="shim disconnected" id=06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81
Dec 13 01:58:03.966105 env[1313]: time="2024-12-13T01:58:03.966102286Z" level=warning msg="cleaning up after shim disconnected" id=06909b347ced5b6f08c11cc48633b70c0cc1d062ca4ac88f56bbbf77c837af81 namespace=k8s.io
Dec 13 01:58:03.966105 env[1313]: time="2024-12-13T01:58:03.966110702Z" level=info msg="cleaning up dead shim"
Dec 13 01:58:03.972803 env[1313]: time="2024-12-13T01:58:03.972763326Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:58:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2596 runtime=io.containerd.runc.v2\n"
Dec 13 01:58:04.038020 kubelet[2173]: I1213 01:58:04.037951 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grwwz\" (UniqueName: \"kubernetes.io/projected/4d7097ad-9eb2-43dc-a737-0fa5798f74fe-kube-api-access-grwwz\") pod \"coredns-76f75df574-ckthv\" (UID: \"4d7097ad-9eb2-43dc-a737-0fa5798f74fe\") " pod="kube-system/coredns-76f75df574-ckthv"
Dec 13 01:58:04.038020 kubelet[2173]: I1213 01:58:04.038025 2173 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d7097ad-9eb2-43dc-a737-0fa5798f74fe-config-volume\") pod \"coredns-76f75df574-ckthv\" (UID: \"4d7097ad-9eb2-43dc-a737-0fa5798f74fe\") " pod="kube-system/coredns-76f75df574-ckthv"
Dec 13 01:58:04.241600 kubelet[2173]: E1213 01:58:04.241437 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:04.242087 env[1313]: time="2024-12-13T01:58:04.242010202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ckthv,Uid:4d7097ad-9eb2-43dc-a737-0fa5798f74fe,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:04.245604 kubelet[2173]: E1213 01:58:04.245574 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:04.245997 env[1313]: time="2024-12-13T01:58:04.245961535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5846h,Uid:467bda58-d956-4a7d-a7cc-44507a2dbca2,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:04.276655 env[1313]: time="2024-12-13T01:58:04.276584310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ckthv,Uid:4d7097ad-9eb2-43dc-a737-0fa5798f74fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:58:04.276901 kubelet[2173]: E1213 01:58:04.276871 2173 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:58:04.277009 kubelet[2173]: E1213 01:58:04.276941 2173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-ckthv"
Dec 13 01:58:04.277009 kubelet[2173]: E1213 01:58:04.276959 2173 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-ckthv"
Dec 13 01:58:04.277073 kubelet[2173]: E1213 01:58:04.277021 2173 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ckthv_kube-system(4d7097ad-9eb2-43dc-a737-0fa5798f74fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ckthv_kube-system(4d7097ad-9eb2-43dc-a737-0fa5798f74fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-ckthv" podUID="4d7097ad-9eb2-43dc-a737-0fa5798f74fe"
Dec 13 01:58:04.279870 env[1313]: time="2024-12-13T01:58:04.279789356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5846h,Uid:467bda58-d956-4a7d-a7cc-44507a2dbca2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2dd454aee2e920881ca76d2bf831c1e18a67dcf5dc35c0c4ca12bb3090c1c450\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:58:04.280038 kubelet[2173]: E1213 01:58:04.280019 2173 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd454aee2e920881ca76d2bf831c1e18a67dcf5dc35c0c4ca12bb3090c1c450\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:58:04.280103 kubelet[2173]: E1213 01:58:04.280065 2173 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd454aee2e920881ca76d2bf831c1e18a67dcf5dc35c0c4ca12bb3090c1c450\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-5846h"
Dec 13 01:58:04.280103 kubelet[2173]: E1213 01:58:04.280088 2173 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd454aee2e920881ca76d2bf831c1e18a67dcf5dc35c0c4ca12bb3090c1c450\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-5846h"
Dec 13 01:58:04.280170 kubelet[2173]: E1213 01:58:04.280127 2173 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5846h_kube-system(467bda58-d956-4a7d-a7cc-44507a2dbca2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5846h_kube-system(467bda58-d956-4a7d-a7cc-44507a2dbca2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dd454aee2e920881ca76d2bf831c1e18a67dcf5dc35c0c4ca12bb3090c1c450\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-5846h" podUID="467bda58-d956-4a7d-a7cc-44507a2dbca2"
Dec 13 01:58:04.570939 kubelet[2173]: E1213 01:58:04.570310 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:04.573473 env[1313]: time="2024-12-13T01:58:04.573432160Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 01:58:04.666199 env[1313]: time="2024-12-13T01:58:04.666141108Z" level=info msg="CreateContainer within sandbox \"3701ebbc2011d16e11571b8ff31aae949ade1f897d6a2e40248563ef843cc01f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"b14e1e23548be628200cf85d823185a5a72d35d3e021ae01381c2a0ef3e81335\""
Dec 13 01:58:04.666712 env[1313]: time="2024-12-13T01:58:04.666667792Z" level=info msg="StartContainer for \"b14e1e23548be628200cf85d823185a5a72d35d3e021ae01381c2a0ef3e81335\""
Dec 13 01:58:04.711123 env[1313]: time="2024-12-13T01:58:04.711066632Z" level=info msg="StartContainer for \"b14e1e23548be628200cf85d823185a5a72d35d3e021ae01381c2a0ef3e81335\" returns successfully"
Dec 13 01:58:04.770133 systemd[1]: run-netns-cni\x2d5b64b450\x2d1b40\x2dc1fa\x2da856\x2ddcc71aa6b98e.mount: Deactivated successfully.
Dec 13 01:58:04.770262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-845ba23349754a13c0cbd52c3c22600bb382cb1fd64ea70ebf08968db0ce4f96-shm.mount: Deactivated successfully.
Dec 13 01:58:05.574032 kubelet[2173]: E1213 01:58:05.573986 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:05.815520 systemd-networkd[1083]: flannel.1: Link UP
Dec 13 01:58:05.815529 systemd-networkd[1083]: flannel.1: Gained carrier
Dec 13 01:58:06.402453 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:34542.service.
Dec 13 01:58:06.443851 sshd[2783]: Accepted publickey for core from 10.0.0.1 port 34542 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:06.445028 sshd[2783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:06.448601 systemd-logind[1294]: New session 6 of user core.
Dec 13 01:58:06.449615 systemd[1]: Started session-6.scope.
Dec 13 01:58:06.553832 sshd[2783]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:06.555924 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:34542.service: Deactivated successfully.
Dec 13 01:58:06.556789 systemd-logind[1294]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:58:06.556817 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:58:06.557680 systemd-logind[1294]: Removed session 6.
Dec 13 01:58:06.575372 kubelet[2173]: E1213 01:58:06.575347 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:07.449779 systemd-networkd[1083]: flannel.1: Gained IPv6LL
Dec 13 01:58:11.556910 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:34554.service.
Dec 13 01:58:11.597669 sshd[2819]: Accepted publickey for core from 10.0.0.1 port 34554 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:11.598980 sshd[2819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:11.602877 systemd-logind[1294]: New session 7 of user core.
Dec 13 01:58:11.603908 systemd[1]: Started session-7.scope.
Dec 13 01:58:11.711151 sshd[2819]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:11.713498 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:34554.service: Deactivated successfully.
Dec 13 01:58:11.714218 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:58:11.715008 systemd-logind[1294]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:58:11.715740 systemd-logind[1294]: Removed session 7.
Dec 13 01:58:15.521788 kubelet[2173]: E1213 01:58:15.521730 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:15.522327 env[1313]: time="2024-12-13T01:58:15.522281289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ckthv,Uid:4d7097ad-9eb2-43dc-a737-0fa5798f74fe,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:15.827254 systemd-networkd[1083]: cni0: Link UP
Dec 13 01:58:15.827263 systemd-networkd[1083]: cni0: Gained carrier
Dec 13 01:58:15.829923 systemd-networkd[1083]: cni0: Lost carrier
Dec 13 01:58:15.834778 systemd-networkd[1083]: vethe594925a: Link UP
Dec 13 01:58:15.838932 kernel: cni0: port 1(vethe594925a) entered blocking state
Dec 13 01:58:15.839010 kernel: cni0: port 1(vethe594925a) entered disabled state
Dec 13 01:58:15.840111 kernel: device vethe594925a entered promiscuous mode
Dec 13 01:58:15.841397 kernel: cni0: port 1(vethe594925a) entered blocking state
Dec 13 01:58:15.841435 kernel: cni0: port 1(vethe594925a) entered forwarding state
Dec 13 01:58:15.843601 kernel: cni0: port 1(vethe594925a) entered disabled state
Dec 13 01:58:15.848899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe594925a: link becomes ready
Dec 13 01:58:15.848963 kernel: cni0: port 1(vethe594925a) entered blocking state
Dec 13 01:58:15.848979 kernel: cni0: port 1(vethe594925a) entered forwarding state
Dec 13 01:58:15.849917 systemd-networkd[1083]: vethe594925a: Gained carrier
Dec 13 01:58:15.850113 systemd-networkd[1083]: cni0: Gained carrier
Dec 13 01:58:15.851658 env[1313]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a928), "name":"cbr0", "type":"bridge"}
Dec 13 01:58:15.851658 env[1313]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:58:15.860615 env[1313]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:58:15.860538626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:15.860785 env[1313]: time="2024-12-13T01:58:15.860599290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:15.860900 env[1313]: time="2024-12-13T01:58:15.860863527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:15.861219 env[1313]: time="2024-12-13T01:58:15.861159143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5664f47df161821ca7a4ced90a01421044fe77e140d03707266c6d68bbd04974 pid=2881 runtime=io.containerd.runc.v2
Dec 13 01:58:15.881331 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:58:15.907811 env[1313]: time="2024-12-13T01:58:15.907757974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ckthv,Uid:4d7097ad-9eb2-43dc-a737-0fa5798f74fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"5664f47df161821ca7a4ced90a01421044fe77e140d03707266c6d68bbd04974\""
Dec 13 01:58:15.908387 kubelet[2173]: E1213 01:58:15.908364 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:15.910589 env[1313]: time="2024-12-13T01:58:15.910362111Z" level=info msg="CreateContainer within sandbox \"5664f47df161821ca7a4ced90a01421044fe77e140d03707266c6d68bbd04974\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:58:15.927385 env[1313]: time="2024-12-13T01:58:15.927325005Z" level=info msg="CreateContainer within sandbox \"5664f47df161821ca7a4ced90a01421044fe77e140d03707266c6d68bbd04974\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"47479bbac580b34ad191fde87b48031cbb6d51557454413f48b54ae52b3fa3ab\""
Dec 13 01:58:15.928148 env[1313]: time="2024-12-13T01:58:15.928065588Z" level=info msg="StartContainer for \"47479bbac580b34ad191fde87b48031cbb6d51557454413f48b54ae52b3fa3ab\""
Dec 13 01:58:15.967118 env[1313]: time="2024-12-13T01:58:15.967065135Z" level=info msg="StartContainer for \"47479bbac580b34ad191fde87b48031cbb6d51557454413f48b54ae52b3fa3ab\" returns successfully"
Dec 13 01:58:16.593598 kubelet[2173]: E1213 01:58:16.592881 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:16.642513 kubelet[2173]: I1213 01:58:16.642451 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-x22g2" podStartSLOduration=15.043533688 podStartE2EDuration="22.642407419s" podCreationTimestamp="2024-12-13 01:57:54 +0000 UTC" firstStartedPulling="2024-12-13 01:57:55.15649057 +0000 UTC m=+13.739438666" lastFinishedPulling="2024-12-13 01:58:02.755364311 +0000 UTC m=+21.338312397" observedRunningTime="2024-12-13 01:58:05.811015062 +0000 UTC m=+24.393963168" watchObservedRunningTime="2024-12-13 01:58:16.642407419 +0000 UTC m=+35.225355515"
Dec 13 01:58:16.642721 kubelet[2173]: I1213 01:58:16.642650 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ckthv" podStartSLOduration=22.642632672 podStartE2EDuration="22.642632672s" podCreationTimestamp="2024-12-13 01:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:16.64226492 +0000 UTC m=+35.225213007" watchObservedRunningTime="2024-12-13 01:58:16.642632672 +0000 UTC m=+35.225580758"
Dec 13 01:58:16.714232 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:54630.service.
Dec 13 01:58:16.753436 sshd[2978]: Accepted publickey for core from 10.0.0.1 port 54630 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:16.754537 sshd[2978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:16.758481 systemd-logind[1294]: New session 8 of user core.
Dec 13 01:58:16.759682 systemd[1]: Started session-8.scope.
Dec 13 01:58:16.866478 sshd[2978]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:16.868876 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:54630.service: Deactivated successfully.
Dec 13 01:58:16.869765 systemd-logind[1294]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:58:16.869872 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:58:16.870637 systemd-logind[1294]: Removed session 8.
Dec 13 01:58:17.497714 systemd-networkd[1083]: vethe594925a: Gained IPv6LL
Dec 13 01:58:17.521210 kubelet[2173]: E1213 01:58:17.521170 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:17.521601 env[1313]: time="2024-12-13T01:58:17.521564280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5846h,Uid:467bda58-d956-4a7d-a7cc-44507a2dbca2,Namespace:kube-system,Attempt:0,}"
Dec 13 01:58:17.543533 systemd-networkd[1083]: vethc6c8b542: Link UP
Dec 13 01:58:17.546005 kernel: cni0: port 2(vethc6c8b542) entered blocking state
Dec 13 01:58:17.546077 kernel: cni0: port 2(vethc6c8b542) entered disabled state
Dec 13 01:58:17.548008 kernel: device vethc6c8b542 entered promiscuous mode
Dec 13 01:58:17.554511 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 01:58:17.554638 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc6c8b542: link becomes ready
Dec 13 01:58:17.554658 kernel: cni0: port 2(vethc6c8b542) entered blocking state
Dec 13 01:58:17.554675 kernel: cni0: port 2(vethc6c8b542) entered forwarding state
Dec 13 01:58:17.554690 systemd-networkd[1083]: vethc6c8b542: Gained carrier
Dec 13 01:58:17.557176 env[1313]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:58:17.557176 env[1313]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:58:17.568540 env[1313]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:58:17.568465357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:58:17.568540 env[1313]: time="2024-12-13T01:58:17.568510883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:58:17.568540 env[1313]: time="2024-12-13T01:58:17.568522165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:58:17.568770 env[1313]: time="2024-12-13T01:58:17.568712933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7368413b1dba7354938f391c85b1c8258e5b1daf14be90130a3cfeabc971283d pid=3032 runtime=io.containerd.runc.v2
Dec 13 01:58:17.591064 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:58:17.595147 kubelet[2173]: E1213 01:58:17.595071 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:17.615568 env[1313]: time="2024-12-13T01:58:17.615486622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5846h,Uid:467bda58-d956-4a7d-a7cc-44507a2dbca2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7368413b1dba7354938f391c85b1c8258e5b1daf14be90130a3cfeabc971283d\""
Dec 13 01:58:17.616262 kubelet[2173]: E1213 01:58:17.616239 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:17.618088 env[1313]: time="2024-12-13T01:58:17.618050042Z" level=info msg="CreateContainer within sandbox \"7368413b1dba7354938f391c85b1c8258e5b1daf14be90130a3cfeabc971283d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:58:17.625747 systemd-networkd[1083]: cni0: Gained IPv6LL
Dec 13 01:58:18.312408 env[1313]: time="2024-12-13T01:58:18.312327828Z" level=info msg="CreateContainer within sandbox \"7368413b1dba7354938f391c85b1c8258e5b1daf14be90130a3cfeabc971283d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dbf904d28e187b1da4a3e036944f9d9fc142371ca7f2740d393cc68cb50386fc\""
Dec 13 01:58:18.312917 env[1313]: time="2024-12-13T01:58:18.312889142Z" level=info msg="StartContainer for \"dbf904d28e187b1da4a3e036944f9d9fc142371ca7f2740d393cc68cb50386fc\""
Dec 13 01:58:18.446919 env[1313]: time="2024-12-13T01:58:18.446853880Z" level=info msg="StartContainer for \"dbf904d28e187b1da4a3e036944f9d9fc142371ca7f2740d393cc68cb50386fc\" returns successfully"
Dec 13 01:58:18.598180 kubelet[2173]: E1213 01:58:18.597706 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:18.600341 kubelet[2173]: E1213 01:58:18.599740 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:19.161704 systemd-networkd[1083]: vethc6c8b542: Gained IPv6LL
Dec 13 01:58:21.869480 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:54642.service.
Dec 13 01:58:21.929214 sshd[3126]: Accepted publickey for core from 10.0.0.1 port 54642 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:21.930442 sshd[3126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:21.933851 systemd-logind[1294]: New session 9 of user core.
Dec 13 01:58:21.934728 systemd[1]: Started session-9.scope.
Dec 13 01:58:22.076094 sshd[3126]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:22.078915 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:54656.service.
Dec 13 01:58:22.079522 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:54642.service: Deactivated successfully.
Dec 13 01:58:22.080592 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:58:22.081059 systemd-logind[1294]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:58:22.081990 systemd-logind[1294]: Removed session 9.
Dec 13 01:58:22.117909 sshd[3141]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:22.119039 sshd[3141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:22.122801 systemd-logind[1294]: New session 10 of user core.
Dec 13 01:58:22.124920 systemd[1]: Started session-10.scope.
Dec 13 01:58:22.263455 sshd[3141]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:22.266699 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:54666.service.
Dec 13 01:58:22.269995 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:54656.service: Deactivated successfully.
Dec 13 01:58:22.271029 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:58:22.275883 systemd-logind[1294]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:58:22.276892 systemd-logind[1294]: Removed session 10.
Dec 13 01:58:22.307888 sshd[3152]: Accepted publickey for core from 10.0.0.1 port 54666 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:22.308909 sshd[3152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:22.312477 systemd-logind[1294]: New session 11 of user core.
Dec 13 01:58:22.313475 systemd[1]: Started session-11.scope.
Dec 13 01:58:22.414774 sshd[3152]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:22.417611 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:54666.service: Deactivated successfully.
Dec 13 01:58:22.418725 systemd-logind[1294]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:58:22.418736 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:58:22.419781 systemd-logind[1294]: Removed session 11.
Dec 13 01:58:24.246599 kubelet[2173]: E1213 01:58:24.246557 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:24.255599 kubelet[2173]: I1213 01:58:24.255540 2173 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5846h" podStartSLOduration=30.255499406 podStartE2EDuration="30.255499406s" podCreationTimestamp="2024-12-13 01:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:18.769923766 +0000 UTC m=+37.352871862" watchObservedRunningTime="2024-12-13 01:58:24.255499406 +0000 UTC m=+42.838447502"
Dec 13 01:58:24.606717 kubelet[2173]: E1213 01:58:24.606676 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:58:27.419378 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:58722.service.
Dec 13 01:58:27.457528 sshd[3195]: Accepted publickey for core from 10.0.0.1 port 58722 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:27.458587 sshd[3195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:27.462339 systemd-logind[1294]: New session 12 of user core.
Dec 13 01:58:27.463013 systemd[1]: Started session-12.scope.
Dec 13 01:58:27.572666 sshd[3195]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:27.574896 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:58722.service: Deactivated successfully.
Dec 13 01:58:27.575613 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:58:27.576280 systemd-logind[1294]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:58:27.577049 systemd-logind[1294]: Removed session 12.
Dec 13 01:58:32.579899 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:58736.service.
Dec 13 01:58:32.637153 sshd[3231]: Accepted publickey for core from 10.0.0.1 port 58736 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:32.640682 sshd[3231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:32.648260 systemd[1]: Started session-13.scope.
Dec 13 01:58:32.648912 systemd-logind[1294]: New session 13 of user core.
Dec 13 01:58:32.780776 sshd[3231]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:32.783348 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:58736.service: Deactivated successfully.
Dec 13 01:58:32.784523 systemd-logind[1294]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:58:32.784590 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:58:32.785392 systemd-logind[1294]: Removed session 13.
Dec 13 01:58:37.783111 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:33152.service.
Dec 13 01:58:37.825994 sshd[3266]: Accepted publickey for core from 10.0.0.1 port 33152 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:37.827196 sshd[3266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:37.830803 systemd-logind[1294]: New session 14 of user core.
Dec 13 01:58:37.831724 systemd[1]: Started session-14.scope.
Dec 13 01:58:37.938078 sshd[3266]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:37.940039 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:33152.service: Deactivated successfully.
Dec 13 01:58:37.941162 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:58:37.941169 systemd-logind[1294]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:58:37.942264 systemd-logind[1294]: Removed session 14.
Dec 13 01:58:42.940883 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:33156.service.
Dec 13 01:58:42.981822 sshd[3304]: Accepted publickey for core from 10.0.0.1 port 33156 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:42.982917 sshd[3304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:42.986415 systemd-logind[1294]: New session 15 of user core.
Dec 13 01:58:42.987294 systemd[1]: Started session-15.scope.
Dec 13 01:58:43.085822 sshd[3304]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:43.088190 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:33172.service.
Dec 13 01:58:43.088589 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:33156.service: Deactivated successfully.
Dec 13 01:58:43.089640 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:58:43.089846 systemd-logind[1294]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:58:43.091024 systemd-logind[1294]: Removed session 15.
Dec 13 01:58:43.126730 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 33172 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:43.127781 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:43.131183 systemd-logind[1294]: New session 16 of user core.
Dec 13 01:58:43.132070 systemd[1]: Started session-16.scope.
Dec 13 01:58:43.277717 sshd[3317]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:43.280133 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:33178.service.
Dec 13 01:58:43.280527 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:33172.service: Deactivated successfully.
Dec 13 01:58:43.281399 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:58:43.281489 systemd-logind[1294]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:58:43.282506 systemd-logind[1294]: Removed session 16.
Dec 13 01:58:43.320806 sshd[3329]: Accepted publickey for core from 10.0.0.1 port 33178 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:43.321947 sshd[3329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:43.325296 systemd-logind[1294]: New session 17 of user core.
Dec 13 01:58:43.326272 systemd[1]: Started session-17.scope.
Dec 13 01:58:44.696655 sshd[3329]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:44.698136 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:33180.service.
Dec 13 01:58:44.703610 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:33178.service: Deactivated successfully.
Dec 13 01:58:44.704753 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:58:44.705269 systemd-logind[1294]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:58:44.706108 systemd-logind[1294]: Removed session 17.
Dec 13 01:58:44.743320 sshd[3349]: Accepted publickey for core from 10.0.0.1 port 33180 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:44.744696 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:44.748601 systemd-logind[1294]: New session 18 of user core.
Dec 13 01:58:44.749588 systemd[1]: Started session-18.scope.
Dec 13 01:58:44.960585 sshd[3349]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:44.963182 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:33184.service.
Dec 13 01:58:44.964990 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:33180.service: Deactivated successfully.
Dec 13 01:58:44.966316 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:58:44.966739 systemd-logind[1294]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:58:44.967527 systemd-logind[1294]: Removed session 18.
Dec 13 01:58:45.006191 sshd[3363]: Accepted publickey for core from 10.0.0.1 port 33184 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:45.007328 sshd[3363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:45.010860 systemd-logind[1294]: New session 19 of user core.
Dec 13 01:58:45.011838 systemd[1]: Started session-19.scope.
Dec 13 01:58:45.105015 sshd[3363]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:45.107137 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:33184.service: Deactivated successfully.
Dec 13 01:58:45.108158 systemd-logind[1294]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:58:45.108226 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:58:45.108992 systemd-logind[1294]: Removed session 19.
Dec 13 01:58:50.108969 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:50284.service.
Dec 13 01:58:50.146350 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:50.147657 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:50.150993 systemd-logind[1294]: New session 20 of user core.
Dec 13 01:58:50.151719 systemd[1]: Started session-20.scope.
Dec 13 01:58:50.249212 sshd[3400]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:50.251559 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:50284.service: Deactivated successfully.
Dec 13 01:58:50.252430 systemd-logind[1294]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:58:50.252460 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:58:50.253332 systemd-logind[1294]: Removed session 20.
Dec 13 01:58:55.252975 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:50298.service.
Dec 13 01:58:55.292329 sshd[3438]: Accepted publickey for core from 10.0.0.1 port 50298 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:58:55.293438 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:58:55.296579 systemd-logind[1294]: New session 21 of user core.
Dec 13 01:58:55.297243 systemd[1]: Started session-21.scope.
Dec 13 01:58:55.399370 sshd[3438]: pam_unix(sshd:session): session closed for user core
Dec 13 01:58:55.401668 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:50298.service: Deactivated successfully.
Dec 13 01:58:55.402337 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:58:55.403042 systemd-logind[1294]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:58:55.403786 systemd-logind[1294]: Removed session 21.
Dec 13 01:59:00.402207 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:39030.service.
Dec 13 01:59:00.439990 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 39030 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:59:00.441083 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:59:00.445011 systemd-logind[1294]: New session 22 of user core.
Dec 13 01:59:00.446224 systemd[1]: Started session-22.scope.
Dec 13 01:59:00.545786 sshd[3475]: pam_unix(sshd:session): session closed for user core
Dec 13 01:59:00.547713 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:39030.service: Deactivated successfully.
Dec 13 01:59:00.548866 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:59:00.549074 systemd-logind[1294]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:59:00.549811 systemd-logind[1294]: Removed session 22.
Dec 13 01:59:05.549629 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:39032.service.
Dec 13 01:59:05.588877 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 39032 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 01:59:05.590011 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:59:05.593366 systemd-logind[1294]: New session 23 of user core.
Dec 13 01:59:05.594169 systemd[1]: Started session-23.scope.
Dec 13 01:59:05.694215 sshd[3510]: pam_unix(sshd:session): session closed for user core
Dec 13 01:59:05.696985 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:39032.service: Deactivated successfully.
Dec 13 01:59:05.698099 systemd-logind[1294]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:59:05.698160 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:59:05.698989 systemd-logind[1294]: Removed session 23.