Dec 13 02:07:47.030246 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:07:47.030271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:47.030279 kernel: BIOS-provided physical RAM map: Dec 13 02:07:47.030285 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 02:07:47.030290 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 02:07:47.030296 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 02:07:47.030303 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 02:07:47.030309 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 02:07:47.030316 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 02:07:47.030321 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 02:07:47.030327 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 02:07:47.030333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 02:07:47.030338 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 02:07:47.030344 kernel: NX (Execute Disable) protection: active Dec 13 02:07:47.030353 kernel: SMBIOS 2.8 present. Dec 13 02:07:47.030359 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 02:07:47.030365 kernel: Hypervisor detected: KVM Dec 13 02:07:47.030371 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:07:47.030377 kernel: kvm-clock: cpu 0, msr 3b19b001, primary cpu clock Dec 13 02:07:47.030383 kernel: kvm-clock: using sched offset of 3375518954 cycles Dec 13 02:07:47.030390 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:07:47.030396 kernel: tsc: Detected 2794.748 MHz processor Dec 13 02:07:47.030403 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:07:47.030411 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:07:47.030417 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 02:07:47.030423 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:07:47.030429 kernel: Using GB pages for direct mapping Dec 13 02:07:47.030436 kernel: ACPI: Early table checksum verification disabled Dec 13 02:07:47.030442 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 02:07:47.030448 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030455 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030461 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030468 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 02:07:47.030474 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030481 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030487 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030493 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:47.030499 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 02:07:47.030505 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 02:07:47.030512 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 02:07:47.030522 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 02:07:47.030528 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 02:07:47.030535 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 02:07:47.030542 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 02:07:47.030548 kernel: No NUMA configuration found Dec 13 02:07:47.030555 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 02:07:47.030563 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 02:07:47.030569 kernel: Zone ranges: Dec 13 02:07:47.030576 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:07:47.030583 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 02:07:47.030589 kernel: Normal empty Dec 13 02:07:47.030596 kernel: Movable zone start for each node Dec 13 02:07:47.030602 kernel: Early memory node ranges Dec 13 02:07:47.030609 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 02:07:47.030623 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 02:07:47.030631 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 02:07:47.030659 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:07:47.030680 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 02:07:47.030692 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 02:07:47.030699 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 02:07:47.030705 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:07:47.030712 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:07:47.030718 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 02:07:47.030725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:07:47.030732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:07:47.030741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:07:47.030748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:07:47.030754 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:07:47.030761 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 02:07:47.030767 kernel: TSC deadline timer available Dec 13 02:07:47.030774 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 02:07:47.030780 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 02:07:47.030787 kernel: kvm-guest: setup PV sched yield Dec 13 02:07:47.030794 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 02:07:47.030802 kernel: Booting paravirtualized kernel on KVM Dec 13 02:07:47.030809 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:07:47.030816 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 02:07:47.030822 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 02:07:47.030829 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 02:07:47.030835 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 02:07:47.030842 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 02:07:47.030848 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 02:07:47.030855 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:07:47.030871 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:07:47.030879 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 02:07:47.030894 kernel: Policy zone: DMA32 Dec 13 02:07:47.030902 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:47.030910 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:07:47.030916 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:07:47.030923 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:07:47.030930 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:07:47.030939 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved) Dec 13 02:07:47.030946 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 02:07:47.030952 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:07:47.030959 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:07:47.030965 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:07:47.030992 kernel: rcu: RCU event tracing is enabled. Dec 13 02:07:47.030999 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 02:07:47.031006 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:07:47.031012 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:07:47.031021 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:07:47.031028 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 02:07:47.031034 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 02:07:47.031042 kernel: random: crng init done Dec 13 02:07:47.031051 kernel: Console: colour VGA+ 80x25 Dec 13 02:07:47.031059 kernel: printk: console [ttyS0] enabled Dec 13 02:07:47.031066 kernel: ACPI: Core revision 20210730 Dec 13 02:07:47.031074 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 02:07:47.031082 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:07:47.031093 kernel: x2apic enabled Dec 13 02:07:47.031101 kernel: Switched APIC routing to physical x2apic. Dec 13 02:07:47.031110 kernel: kvm-guest: setup PV IPIs Dec 13 02:07:47.031118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 02:07:47.031127 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 02:07:47.031139 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 02:07:47.031148 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 02:07:47.031156 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 02:07:47.031165 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 02:07:47.031179 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:07:47.031186 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 02:07:47.031193 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:07:47.031202 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:07:47.031209 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 02:07:47.031216 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 02:07:47.031223 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:07:47.031230 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:07:47.031237 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:07:47.031246 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:07:47.031253 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:07:47.031260 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:07:47.031267 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:07:47.031274 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:07:47.031281 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:07:47.031288 kernel: LSM: Security Framework initializing Dec 13 02:07:47.031296 kernel: SELinux: Initializing. Dec 13 02:07:47.031303 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:47.031310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:47.031317 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 02:07:47.031324 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 02:07:47.031331 kernel: ... version: 0 Dec 13 02:07:47.031338 kernel: ... bit width: 48 Dec 13 02:07:47.031345 kernel: ... generic registers: 6 Dec 13 02:07:47.031352 kernel: ... value mask: 0000ffffffffffff Dec 13 02:07:47.031360 kernel: ... max period: 00007fffffffffff Dec 13 02:07:47.031367 kernel: ... fixed-purpose events: 0 Dec 13 02:07:47.031374 kernel: ... event mask: 000000000000003f Dec 13 02:07:47.031381 kernel: signal: max sigframe size: 1776 Dec 13 02:07:47.031388 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:07:47.031395 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:07:47.031402 kernel: x86: Booting SMP configuration: Dec 13 02:07:47.031408 kernel: .... 
node #0, CPUs: #1 Dec 13 02:07:47.031415 kernel: kvm-clock: cpu 1, msr 3b19b041, secondary cpu clock Dec 13 02:07:47.031422 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 02:07:47.031434 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 02:07:47.031441 kernel: #2 Dec 13 02:07:47.031448 kernel: kvm-clock: cpu 2, msr 3b19b081, secondary cpu clock Dec 13 02:07:47.031455 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 02:07:47.031462 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 02:07:47.031470 kernel: #3 Dec 13 02:07:47.031485 kernel: kvm-clock: cpu 3, msr 3b19b0c1, secondary cpu clock Dec 13 02:07:47.031493 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 02:07:47.031502 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 02:07:47.031514 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 02:07:47.031522 kernel: smpboot: Max logical packages: 1 Dec 13 02:07:47.031531 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 02:07:47.031539 kernel: devtmpfs: initialized Dec 13 02:07:47.031548 kernel: x86/mm: Memory block size: 128MB Dec 13 02:07:47.031556 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:07:47.031565 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 02:07:47.031573 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:07:47.031582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:07:47.031593 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:07:47.031601 kernel: audit: type=2000 audit(1734055666.519:1): state=initialized audit_enabled=0 res=1 Dec 13 02:07:47.031610 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:07:47.031629 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:07:47.031638 kernel: cpuidle: using governor menu Dec 13 02:07:47.031647 kernel: ACPI: bus type PCI registered Dec 13 02:07:47.031656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:07:47.031665 kernel: dca service started, version 1.12.1 Dec 13 02:07:47.031674 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 02:07:47.031686 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 02:07:47.031693 kernel: PCI: Using configuration type 1 for base access Dec 13 02:07:47.031700 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
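
The figures in the lines above are easy to cross-check. A minimal Python sketch (HZ=1000 is an assumption; the kernel's jiffy rate is not printed in this log):

    # Usable RAM from the BIOS-e820 map above (inclusive ranges).
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x000000009cfdbfff),
    ]
    kib = sum(end - start + 1 for start, end in usable) // 1024
    print(kib)  # 2571759 KiB (~2.45 GiB), within a few KiB of the
                # "Memory: 2436696K/2571752K available" total; the kernel
                # trims a couple of reserved buffer ranges from its figure.

    # BogoMIPS from lpj (loops per jiffy): BogoMIPS = lpj * HZ / 500000.
    lpj, HZ, cpus = 2794748, 1000, 4
    print(lpj * HZ / 500000)                    # 5589.496, printed truncated as 5589.49
    print(round(lpj * HZ / 500000 * cpus, 2))   # 22357.98, the SMP summary figure
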
Dec 13 02:07:47.031707 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:07:47.031714 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:07:47.031721 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:07:47.031728 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:07:47.031735 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:07:47.031742 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:07:47.031750 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:07:47.031757 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:07:47.031764 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:07:47.031771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 02:07:47.031778 kernel: ACPI: Interpreter enabled Dec 13 02:07:47.031785 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:07:47.031793 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:07:47.031802 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:07:47.031811 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 02:07:47.031822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:07:47.032056 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:07:47.032171 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 02:07:47.032262 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 02:07:47.032272 kernel: PCI host bridge to bus 0000:00 Dec 13 02:07:47.032380 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:07:47.032483 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:07:47.032585 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:07:47.032797 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 02:07:47.032922 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 02:07:47.033032 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 02:07:47.033122 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:07:47.033254 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 02:07:47.033380 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 02:07:47.033479 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 02:07:47.033572 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 02:07:47.033681 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 02:07:47.033831 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:07:47.033964 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:07:47.034101 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 02:07:47.034213 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 02:07:47.034312 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 02:07:47.034434 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 02:07:47.034536 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 02:07:47.034647 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 02:07:47.034753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 02:07:47.034868 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:07:47.034994 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 02:07:47.035101 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 02:07:47.035235 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 02:07:47.035357 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 02:07:47.035526 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 02:07:47.035682 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 02:07:47.035833 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 02:07:47.035956 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 02:07:47.036115 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 02:07:47.036257 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 02:07:47.036379 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 02:07:47.036410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:07:47.036422 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:07:47.036432 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:07:47.036447 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:07:47.036458 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 02:07:47.036483 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 02:07:47.036494 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 02:07:47.036504 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 02:07:47.036529 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 02:07:47.036542 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 02:07:47.036552 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 02:07:47.036563 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 02:07:47.036576 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 02:07:47.036601 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 02:07:47.036621 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 02:07:47.036631 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 02:07:47.036641 kernel: iommu: Default domain type: Translated Dec 13 02:07:47.036650 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:07:47.036838 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 02:07:47.037035 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:07:47.037198 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 02:07:47.037230 kernel: vgaarb: loaded Dec 13 02:07:47.037242 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:07:47.037252 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:07:47.037263 kernel: PTP clock support registered Dec 13 02:07:47.037288 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:07:47.037299 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:07:47.037309 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 02:07:47.037328 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 02:07:47.037349 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 02:07:47.037359 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 02:07:47.037369 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:07:47.037379 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:07:47.037406 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:07:47.037416 kernel: pnp: PnP ACPI init Dec 13 02:07:47.037562 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 02:07:47.037580 kernel: pnp: PnP ACPI: found 6 devices Dec 13 02:07:47.037595 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:07:47.037605 kernel: NET: Registered PF_INET protocol family Dec 13 02:07:47.037625 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:07:47.037635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 02:07:47.037645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:07:47.037655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:07:47.037666 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 02:07:47.037676 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 02:07:47.037688 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:47.037702 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:47.037714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:07:47.037724 kernel: NET: Registered PF_XDP protocol family Dec 13 02:07:47.037814 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:07:47.037905 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:07:47.038053 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:07:47.038146 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 02:07:47.038233 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 02:07:47.038321 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 02:07:47.038339 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:07:47.038350 kernel: Initialise system trusted keyrings Dec 13 02:07:47.038361 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 02:07:47.038372 kernel: Key type asymmetric registered Dec 13 02:07:47.038382 kernel: Asymmetric key parser 'x509' registered Dec 13 02:07:47.038393 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:07:47.038403 kernel: io scheduler mq-deadline registered Dec 13 02:07:47.038413 kernel: io scheduler kyber registered Dec 13 02:07:47.038423 kernel: io scheduler bfq registered Dec 13 02:07:47.038435 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:07:47.038447 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 02:07:47.038457 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
02:07:47.038467 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 02:07:47.038478 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:07:47.038488 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:07:47.038498 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:07:47.038509 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:07:47.038519 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:07:47.038531 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:07:47.038665 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 02:07:47.038758 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 02:07:47.038848 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:07:46 UTC (1734055666) Dec 13 02:07:47.038938 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 02:07:47.038952 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:07:47.038962 kernel: Segment Routing with IPv6 Dec 13 02:07:47.038973 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:07:47.039012 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:07:47.039022 kernel: Key type dns_resolver registered Dec 13 02:07:47.039032 kernel: IPI shorthand broadcast: enabled Dec 13 02:07:47.039043 kernel: sched_clock: Marking stable (460352858, 103313842)->(587116513, -23449813) Dec 13 02:07:47.039053 kernel: registered taskstats version 1 Dec 13 02:07:47.039063 kernel: Loading compiled-in X.509 certificates Dec 13 02:07:47.039073 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:07:47.039083 kernel: Key type .fscrypt registered Dec 13 02:07:47.039093 kernel: Key type fscrypt-provisioning registered Dec 13 02:07:47.039105 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:07:47.039115 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:07:47.039125 kernel: ima: No architecture policies found Dec 13 02:07:47.039136 kernel: clk: Disabling unused clocks Dec 13 02:07:47.039146 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:07:47.039156 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:07:47.039166 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:07:47.039177 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:07:47.039189 kernel: Run /init as init process Dec 13 02:07:47.039199 kernel: with arguments: Dec 13 02:07:47.039209 kernel: /init Dec 13 02:07:47.039219 kernel: with environment: Dec 13 02:07:47.039228 kernel: HOME=/ Dec 13 02:07:47.039238 kernel: TERM=linux Dec 13 02:07:47.039248 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:07:47.039262 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:07:47.039276 systemd[1]: Detected virtualization kvm. Dec 13 02:07:47.039289 systemd[1]: Detected architecture x86-64. Dec 13 02:07:47.039300 systemd[1]: Running in initrd. Dec 13 02:07:47.039311 systemd[1]: No hostname configured, using default hostname. Dec 13 02:07:47.039321 systemd[1]: Hostname set to . 
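
The audit records throughout this log are stamped audit(epoch.millis:serial); the epoch values line up with the journal timestamps and with the rtc_cmos line above, which is quick to confirm:

    from datetime import datetime, timezone
    # 1734055666 from "rtc_cmos 00:04: setting system clock to
    # 2024-12-13T02:07:46 UTC (1734055666)"
    print(datetime.fromtimestamp(1734055666, tz=timezone.utc))
    # -> 2024-12-13 02:07:46+00:00
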
Dec 13 02:07:47.039333 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:07:47.039344 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:07:47.039355 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:07:47.039366 systemd[1]: Reached target cryptsetup.target. Dec 13 02:07:47.039379 systemd[1]: Reached target paths.target. Dec 13 02:07:47.039398 systemd[1]: Reached target slices.target. Dec 13 02:07:47.039410 systemd[1]: Reached target swap.target. Dec 13 02:07:47.039422 systemd[1]: Reached target timers.target. Dec 13 02:07:47.039433 systemd[1]: Listening on iscsid.socket. Dec 13 02:07:47.039446 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:07:47.039458 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:07:47.039469 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:07:47.039480 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:07:47.039491 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:07:47.039503 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:07:47.039514 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:07:47.039525 systemd[1]: Reached target sockets.target. Dec 13 02:07:47.039536 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:07:47.039549 systemd[1]: Finished network-cleanup.service. Dec 13 02:07:47.039560 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:07:47.039572 systemd[1]: Starting systemd-journald.service... Dec 13 02:07:47.039583 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:07:47.039595 systemd[1]: Starting systemd-resolved.service... Dec 13 02:07:47.039607 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:07:47.039630 systemd-journald[198]: Journal started Dec 13 02:07:47.039693 systemd-journald[198]: Runtime Journal (/run/log/journal/4a1198873fc44700961c73a648a1f668) is 6.0M, max 48.5M, 42.5M free. Dec 13 02:07:47.030910 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 02:07:47.082523 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:07:47.082562 kernel: audit: type=1130 audit(1734055667.070:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.082577 systemd[1]: Started systemd-journald.service. Dec 13 02:07:47.082590 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:07:47.082602 kernel: audit: type=1130 audit(1734055667.077:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:07:47.046507 systemd-resolved[200]: Positive Trust Anchors: Dec 13 02:07:47.087351 kernel: Bridge firewalling registered Dec 13 02:07:47.087367 kernel: audit: type=1130 audit(1734055667.083:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.046517 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:47.046556 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:07:47.115861 kernel: audit: type=1130 audit(1734055667.087:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.049550 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 02:07:47.081693 systemd[1]: Started systemd-resolved.service. Dec 13 02:07:47.137941 kernel: SCSI subsystem initialized Dec 13 02:07:47.137965 kernel: audit: type=1130 audit(1734055667.132:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.087139 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:07:47.143719 kernel: audit: type=1130 audit(1734055667.137:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.088470 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 02:07:47.148391 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:07:47.148407 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:07:47.148418 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:07:47.088637 systemd[1]: Reached target nss-lookup.target. Dec 13 02:07:47.093327 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
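
The br_netfilter insertion above answers the earlier bridge warning ("filtering via arp/ip/ip6tables is no longer available by default"); here the initrd's systemd-modules-load loads it. On a system where nothing loads it automatically, the usual fix is a modules-load.d drop-in (hypothetical path shown):

    # /etc/modules-load.d/br_netfilter.conf
    br_netfilter
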
Dec 13 02:07:47.132740 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:07:47.133846 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:07:47.139245 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:07:47.157255 kernel: audit: type=1130 audit(1734055667.152:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.151853 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 02:07:47.152633 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:07:47.156326 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:07:47.160227 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:07:47.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.162761 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:07:47.166268 kernel: audit: type=1130 audit(1734055667.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.165292 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:07:47.170665 kernel: audit: type=1130 audit(1734055667.165:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.176960 dracut-cmdline[219]: dracut-dracut-053 Dec 13 02:07:47.179674 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:07:47.246035 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:07:47.262030 kernel: iscsi: registered transport (tcp) Dec 13 02:07:47.285022 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:07:47.285099 kernel: QLogic iSCSI HBA Driver Dec 13 02:07:47.308746 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:07:47.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.310692 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 02:07:47.358024 kernel: raid6: avx2x4 gen() 30101 MB/s Dec 13 02:07:47.375017 kernel: raid6: avx2x4 xor() 7642 MB/s Dec 13 02:07:47.392023 kernel: raid6: avx2x2 gen() 32049 MB/s Dec 13 02:07:47.409024 kernel: raid6: avx2x2 xor() 19159 MB/s Dec 13 02:07:47.426016 kernel: raid6: avx2x1 gen() 25746 MB/s Dec 13 02:07:47.443016 kernel: raid6: avx2x1 xor() 15301 MB/s Dec 13 02:07:47.460017 kernel: raid6: sse2x4 gen() 14783 MB/s Dec 13 02:07:47.477019 kernel: raid6: sse2x4 xor() 7179 MB/s Dec 13 02:07:47.494010 kernel: raid6: sse2x2 gen() 16476 MB/s Dec 13 02:07:47.511025 kernel: raid6: sse2x2 xor() 9837 MB/s Dec 13 02:07:47.528021 kernel: raid6: sse2x1 gen() 11935 MB/s Dec 13 02:07:47.557553 kernel: raid6: sse2x1 xor() 7784 MB/s Dec 13 02:07:47.557638 kernel: raid6: using algorithm avx2x2 gen() 32049 MB/s Dec 13 02:07:47.557648 kernel: raid6: .... xor() 19159 MB/s, rmw enabled Dec 13 02:07:47.557669 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:07:47.569999 kernel: xor: automatically using best checksumming function avx Dec 13 02:07:47.660040 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:07:47.668168 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:07:47.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.669000 audit: BPF prog-id=7 op=LOAD Dec 13 02:07:47.669000 audit: BPF prog-id=8 op=LOAD Dec 13 02:07:47.670545 systemd[1]: Starting systemd-udevd.service... Dec 13 02:07:47.683554 systemd-udevd[399]: Using default interface naming scheme 'v252'. Dec 13 02:07:47.688533 systemd[1]: Started systemd-udevd.service. Dec 13 02:07:47.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.690312 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:07:47.701071 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 02:07:47.727101 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:07:47.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.728406 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:07:47.766631 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:07:47.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:47.804002 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:07:47.815002 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:07:47.817520 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 02:07:47.827153 kernel: AES CTR mode by8 optimization enabled Dec 13 02:07:47.827172 kernel: libata version 3.00 loaded. Dec 13 02:07:47.827188 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:07:47.827199 kernel: GPT:9289727 != 19775487 Dec 13 02:07:47.827209 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:07:47.827221 kernel: GPT:9289727 != 19775487 Dec 13 02:07:47.827232 kernel: GPT: Use GNU Parted to correct GPT errors. 
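
The GPT complaints just above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 19775487) are the normal signature of a disk image written to a larger virtual disk: the backup GPT header still sits where the image ended. In this log the disk-uuid step a few lines below rewrites the headers ("Primary Header is updated ... Secondary Header is updated"); done by hand, one common fix is sgdisk (not part of this log, shown only as an example):

    sgdisk -e /dev/vda   # move the backup GPT structures to the true end of the disk
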
Dec 13 02:07:47.827243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:47.836619 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 02:07:47.941716 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 02:07:47.941743 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 02:07:47.941855 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 02:07:47.941951 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (451) Dec 13 02:07:47.941961 kernel: scsi host0: ahci Dec 13 02:07:47.942093 kernel: scsi host1: ahci Dec 13 02:07:47.942188 kernel: scsi host2: ahci Dec 13 02:07:47.942280 kernel: scsi host3: ahci Dec 13 02:07:47.942378 kernel: scsi host4: ahci Dec 13 02:07:47.942474 kernel: scsi host5: ahci Dec 13 02:07:47.942577 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 02:07:47.942596 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 02:07:47.942606 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 02:07:47.942614 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 02:07:47.942623 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 02:07:47.942637 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 02:07:47.937000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:07:47.973350 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:07:47.977507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:07:47.983043 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:07:47.986464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:07:47.987714 systemd[1]: Starting disk-uuid.service... Dec 13 02:07:47.996584 disk-uuid[529]: Primary Header is updated. Dec 13 02:07:47.996584 disk-uuid[529]: Secondary Entries is updated. Dec 13 02:07:47.996584 disk-uuid[529]: Secondary Header is updated. Dec 13 02:07:48.000998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:48.004994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:48.007995 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:48.253417 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:48.253507 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:48.253517 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:48.255002 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:48.256016 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 02:07:48.257002 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 02:07:48.258005 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 02:07:48.259565 kernel: ata3.00: applying bridge limits Dec 13 02:07:48.259596 kernel: ata3.00: configured for UDMA/100 Dec 13 02:07:48.260005 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:48.295059 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 02:07:48.312734 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:07:48.312754 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 02:07:49.005016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:07:49.005459 disk-uuid[530]: The operation has completed successfully. 
Dec 13 02:07:49.029865 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:07:49.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.029950 systemd[1]: Finished disk-uuid.service. Dec 13 02:07:49.039158 systemd[1]: Starting verity-setup.service... Dec 13 02:07:49.052014 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 02:07:49.071410 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:07:49.073448 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:07:49.075587 systemd[1]: Finished verity-setup.service. Dec 13 02:07:49.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.135802 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:07:49.137228 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:07:49.136332 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:07:49.136989 systemd[1]: Starting ignition-setup.service... Dec 13 02:07:49.139452 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:07:49.147741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:07:49.147777 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:07:49.147787 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:07:49.156775 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:07:49.164167 systemd[1]: Finished ignition-setup.service. Dec 13 02:07:49.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.165655 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:07:49.212608 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:07:49.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.240000 audit: BPF prog-id=9 op=LOAD Dec 13 02:07:49.243210 systemd[1]: Starting systemd-networkd.service... Dec 13 02:07:49.266397 systemd-networkd[719]: lo: Link UP Dec 13 02:07:49.266407 systemd-networkd[719]: lo: Gained carrier Dec 13 02:07:49.266834 systemd-networkd[719]: Enumeration completed Dec 13 02:07:49.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.267046 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:49.267991 systemd-networkd[719]: eth0: Link UP Dec 13 02:07:49.267993 systemd-networkd[719]: eth0: Gained carrier Dec 13 02:07:49.268990 systemd[1]: Started systemd-networkd.service. Dec 13 02:07:49.271082 systemd[1]: Reached target network.target. Dec 13 02:07:49.272722 systemd[1]: Starting iscsiuio.service... 
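
eth0 above is matched by the catch-all policy referenced in the log, /usr/lib/systemd/network/zz-default.network, which amounts to DHCP on any otherwise-unconfigured link. Its contents are not shown here, but a minimal sketch of such a policy (contents assumed) is:

    [Match]
    Name=*

    [Network]
    DHCP=yes

The resulting DHCPv4 lease (10.0.0.140/16 from 10.0.0.1) appears a few lines below.
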
Dec 13 02:07:49.314492 ignition[644]: Ignition 2.14.0 Dec 13 02:07:49.314504 ignition[644]: Stage: fetch-offline Dec 13 02:07:49.314604 ignition[644]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:49.314614 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:49.314798 ignition[644]: parsed url from cmdline: "" Dec 13 02:07:49.314802 ignition[644]: no config URL provided Dec 13 02:07:49.314807 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:07:49.314815 ignition[644]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:07:49.314835 ignition[644]: op(1): [started] loading QEMU firmware config module Dec 13 02:07:49.314839 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 02:07:49.320125 ignition[644]: op(1): [finished] loading QEMU firmware config module Dec 13 02:07:49.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.336338 systemd[1]: Started iscsiuio.service. Dec 13 02:07:49.338213 systemd[1]: Starting iscsid.service... Dec 13 02:07:49.343773 iscsid[730]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:07:49.343773 iscsid[730]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:07:49.343773 iscsid[730]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:07:49.343773 iscsid[730]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:07:49.343773 iscsid[730]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:07:49.352793 iscsid[730]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:07:49.352136 systemd[1]: Started iscsid.service. Dec 13 02:07:49.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.358239 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:07:49.370834 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:07:49.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.372909 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:07:49.374805 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:07:49.376768 systemd[1]: Reached target remote-fs.target. Dec 13 02:07:49.379299 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:07:49.380429 ignition[644]: parsing config with SHA512: 7f30e89fff8673ef236cc9cdcf94f1d1cdd4a804684f1be9c1abce01a2d3e1112dde48f6003be77821b6c25cce6f05174467873a3a373d9bff1c9bac1e496951 Dec 13 02:07:49.381800 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:49.388729 systemd[1]: Finished dracut-pre-mount.service.
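
The iscsid warnings above are harmless here (no iSCSI targets are configured); the file it asks for is a single line in the format the message quotes. A hypothetical example:

    # /etc/iscsi/initiatorname.iscsi
    # name follows the iqn.yyyy-mm.<reversed domain name>[:identifier] pattern quoted above
    InitiatorName=iqn.2024-12.org.example:node1
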
Dec 13 02:07:49.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.394710 unknown[644]: fetched base config from "system" Dec 13 02:07:49.394726 unknown[644]: fetched user config from "qemu" Dec 13 02:07:49.395538 ignition[644]: fetch-offline: fetch-offline passed Dec 13 02:07:49.395628 ignition[644]: Ignition finished successfully Dec 13 02:07:49.398903 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:07:49.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.399414 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 02:07:49.400131 systemd[1]: Starting ignition-kargs.service... Dec 13 02:07:49.465422 ignition[745]: Ignition 2.14.0 Dec 13 02:07:49.465433 ignition[745]: Stage: kargs Dec 13 02:07:49.465584 ignition[745]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:49.465598 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:49.466876 ignition[745]: kargs: kargs passed Dec 13 02:07:49.466922 ignition[745]: Ignition finished successfully Dec 13 02:07:49.471189 systemd[1]: Finished ignition-kargs.service. Dec 13 02:07:49.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.472560 systemd[1]: Starting ignition-disks.service... Dec 13 02:07:49.483999 ignition[751]: Ignition 2.14.0 Dec 13 02:07:49.484010 ignition[751]: Stage: disks Dec 13 02:07:49.484120 ignition[751]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:49.484129 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:49.485674 ignition[751]: disks: disks passed Dec 13 02:07:49.485718 ignition[751]: Ignition finished successfully Dec 13 02:07:49.489408 systemd[1]: Finished ignition-disks.service. Dec 13 02:07:49.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.489945 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:07:49.491318 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:07:49.493264 systemd[1]: Reached target local-fs.target. Dec 13 02:07:49.494737 systemd[1]: Reached target sysinit.target. Dec 13 02:07:49.496134 systemd[1]: Reached target basic.target. Dec 13 02:07:49.498381 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:07:49.526184 systemd-fsck[759]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:07:49.564366 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:07:49.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.566710 systemd[1]: Mounting sysroot.mount... Dec 13 02:07:49.573990 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:07:49.573960 systemd[1]: Mounted sysroot.mount. 
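
Ignition found no config URL on the kernel command line and no /usr/lib/ignition/user.ign, then fetched the user config through QEMU's firmware-config device (hence "modprobe" "qemu_fw_cfg" earlier and fetched user config from "qemu" above). The conventional way to supply that config on this platform, per Flatcar's documented fw_cfg key (file name assumed), is:

    qemu-system-x86_64 ... \
      -fw_cfg name=opt/org.flatcar-linux/config,file=./config.ign
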
Dec 13 02:07:49.575675 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:07:49.578851 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:07:49.581062 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:07:49.582867 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:07:49.585087 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:07:49.588335 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:07:49.591067 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:07:49.594746 initrd-setup-root[769]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:07:49.599240 initrd-setup-root[777]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:07:49.601675 initrd-setup-root[785]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:07:49.605422 initrd-setup-root[793]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:07:49.629307 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:07:49.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.632275 systemd[1]: Starting ignition-mount.service... Dec 13 02:07:49.634747 systemd[1]: Starting sysroot-boot.service... Dec 13 02:07:49.636869 bash[810]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 02:07:49.645231 ignition[811]: INFO : Ignition 2.14.0 Dec 13 02:07:49.646305 ignition[811]: INFO : Stage: mount Dec 13 02:07:49.647035 ignition[811]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:49.647035 ignition[811]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:07:49.666718 ignition[811]: INFO : mount: mount passed Dec 13 02:07:49.666718 ignition[811]: INFO : Ignition finished successfully Dec 13 02:07:49.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:49.649134 systemd[1]: Finished ignition-mount.service. Dec 13 02:07:49.672811 systemd[1]: Finished sysroot-boot.service. Dec 13 02:07:49.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:50.082401 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:07:50.089014 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (820) Dec 13 02:07:50.089044 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:07:50.089054 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:07:50.096608 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:07:50.099805 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:07:50.102507 systemd[1]: Starting ignition-files.service... 
Dec 13 02:07:50.115951 ignition[840]: INFO : Ignition 2.14.0
Dec 13 02:07:50.115951 ignition[840]: INFO : Stage: files
Dec 13 02:07:50.117845 ignition[840]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:50.117845 ignition[840]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 02:07:50.117845 ignition[840]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:07:50.122096 ignition[840]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:07:50.122096 ignition[840]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:07:50.126329 ignition[840]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:07:50.127930 ignition[840]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:07:50.129873 unknown[840]: wrote ssh authorized keys file for user: core
Dec 13 02:07:50.131080 ignition[840]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:07:50.133008 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:07:50.135096 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:07:50.182938 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 02:07:50.394567 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:07:50.397119 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:07:50.397119 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 02:07:50.533121 systemd-networkd[719]: eth0: Gained IPv6LL
Dec 13 02:07:50.784462 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 02:07:50.940403 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:07:50.940403 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 02:07:50.944509 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 02:07:50.946313 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:07:50.948043 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 02:07:50.949707 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:07:50.951398 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 02:07:50.953067 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:07:50.954781 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 02:07:50.956492 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:07:50.958204 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 02:07:50.959878 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:07:50.962255 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:07:50.964646 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:07:50.966661 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 02:07:51.361008 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 02:07:52.302273 ignition[840]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:07:52.302273 ignition[840]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 02:07:52.306911 ignition[840]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 02:07:52.335943 kernel: kauditd_printk_skb: 25 callbacks suppressed
Dec 13 02:07:52.335971 kernel: audit: type=1130 audit(1734055672.329:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.336065 ignition[840]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 02:07:52.336065 ignition[840]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 02:07:52.336065 ignition[840]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:07:52.336065 ignition[840]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:07:52.336065 ignition[840]: INFO : files: files passed
Dec 13 02:07:52.336065 ignition[840]: INFO : Ignition finished successfully
Dec 13 02:07:52.357625 kernel: audit: type=1130 audit(1734055672.341:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.357644 kernel: audit: type=1130 audit(1734055672.346:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.357654 kernel: audit: type=1131 audit(1734055672.346:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.327654 systemd[1]: Finished ignition-files.service.
Dec 13 02:07:52.330013 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:07:52.359739 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 02:07:52.335928 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:07:52.363184 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:07:52.336481 systemd[1]: Starting ignition-quench.service...
Dec 13 02:07:52.338934 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:07:52.341346 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:07:52.341407 systemd[1]: Finished ignition-quench.service.
Dec 13 02:07:52.346957 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:07:52.355828 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:07:52.369909 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:07:52.378570 kernel: audit: type=1130 audit(1734055672.370:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.378589 kernel: audit: type=1131 audit(1734055672.370:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.369991 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:07:52.371133 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:07:52.378576 systemd[1]: Reached target initrd.target.
Dec 13 02:07:52.379343 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:07:52.380047 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:07:52.391086 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:07:52.396351 kernel: audit: type=1130 audit(1734055672.390:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.392533 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:07:52.401683 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:07:52.402659 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:07:52.404365 systemd[1]: Stopped target timers.target.
Dec 13 02:07:52.406065 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:07:52.412192 kernel: audit: type=1131 audit(1734055672.406:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.406188 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:07:52.407737 systemd[1]: Stopped target initrd.target.
Dec 13 02:07:52.412269 systemd[1]: Stopped target basic.target.
Dec 13 02:07:52.413889 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:07:52.415533 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:07:52.417169 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:07:52.418966 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:07:52.420644 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:07:52.422444 systemd[1]: Stopped target sysinit.target.
Dec 13 02:07:52.424034 systemd[1]: Stopped target local-fs.target.
Dec 13 02:07:52.425632 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:07:52.427249 systemd[1]: Stopped target swap.target.
Dec 13 02:07:52.434830 kernel: audit: type=1131 audit(1734055672.430:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.428779 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:07:52.428878 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:07:52.441244 kernel: audit: type=1131 audit(1734055672.436:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.430531 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:07:52.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.434872 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:07:52.434959 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:07:52.436818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:07:52.436924 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:07:52.441364 systemd[1]: Stopped target paths.target.
Dec 13 02:07:52.442885 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:07:52.446035 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:07:52.447912 systemd[1]: Stopped target slices.target.
Dec 13 02:07:52.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.449446 systemd[1]: Stopped target sockets.target.
Dec 13 02:07:52.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.451409 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:07:52.451564 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:07:52.459379 iscsid[730]: iscsid shutting down.
Dec 13 02:07:52.453405 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:07:52.453537 systemd[1]: Stopped ignition-files.service.
Dec 13 02:07:52.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.463789 ignition[880]: INFO : Ignition 2.14.0
Dec 13 02:07:52.463789 ignition[880]: INFO : Stage: umount
Dec 13 02:07:52.463789 ignition[880]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 02:07:52.463789 ignition[880]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 02:07:52.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.456004 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:07:52.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.472414 ignition[880]: INFO : umount: umount passed
Dec 13 02:07:52.472414 ignition[880]: INFO : Ignition finished successfully
Dec 13 02:07:52.457899 systemd[1]: Stopping iscsid.service...
Dec 13 02:07:52.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.459405 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:07:52.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.459611 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:07:52.462336 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:07:52.463805 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:07:52.464002 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:07:52.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.465755 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:07:52.465888 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:07:52.469214 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:07:52.469323 systemd[1]: Stopped iscsid.service.
Dec 13 02:07:52.470914 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:07:52.471024 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:07:52.472743 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:07:52.472839 systemd[1]: Closed iscsid.socket.
Dec 13 02:07:52.474077 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:07:52.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.474122 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:07:52.475959 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:07:52.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.476023 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:07:52.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.477539 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:07:52.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.477587 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:07:52.480024 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:07:52.482640 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:07:52.483163 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:07:52.483258 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:07:52.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.484508 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:07:52.484600 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:07:52.487151 systemd[1]: Stopped target network.target.
Dec 13 02:07:52.516000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:07:52.488225 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:07:52.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.488266 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:07:52.489812 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:07:52.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.491523 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:07:52.495031 systemd-networkd[719]: eth0: DHCPv6 lease lost
Dec 13 02:07:52.524000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:07:52.496053 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:07:52.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.496156 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:07:52.498072 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:07:52.498104 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:07:52.500171 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:07:52.501426 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:07:52.501482 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:07:52.503447 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:07:52.503494 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:07:52.505334 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:07:52.505373 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:07:52.507312 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:07:52.511170 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:07:52.512137 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:07:52.512242 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:07:52.518036 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:07:52.518181 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:07:52.521150 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:07:52.521246 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:07:52.522850 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:07:52.522891 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:07:52.524702 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:07:52.524743 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:07:52.526522 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:07:52.526565 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:07:52.528400 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:07:52.528440 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:07:52.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.549479 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:07:52.549520 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:07:52.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.552763 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:07:52.554489 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:07:52.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.554533 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:07:52.557832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:07:52.558901 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:07:52.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.584437 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:07:52.585463 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:07:52.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.587187 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:07:52.588974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:07:52.589925 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:07:52.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:52.592197 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:07:52.609265 systemd[1]: Switching root.
Dec 13 02:07:52.627781 systemd-journald[198]: Journal stopped
Dec 13 02:07:55.488217 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:07:55.488265 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:07:55.488281 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:07:55.488291 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:07:55.488300 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:07:55.488314 kernel: SELinux: policy capability open_perms=1
Dec 13 02:07:55.488324 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:07:55.488336 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:07:55.488345 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:07:55.488356 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:07:55.488366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:07:55.488384 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:07:55.488395 systemd[1]: Successfully loaded SELinux policy in 39.708ms.
Dec 13 02:07:55.488414 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.387ms.
Dec 13 02:07:55.488426 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:07:55.488436 systemd[1]: Detected virtualization kvm.
Dec 13 02:07:55.488447 systemd[1]: Detected architecture x86-64.
Dec 13 02:07:55.488458 systemd[1]: Detected first boot.
Dec 13 02:07:55.488468 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:07:55.488480 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:07:55.488490 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:07:55.488502 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:07:55.488514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:07:55.488526 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:07:55.488536 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:07:55.488547 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:07:55.488557 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:07:55.488568 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:07:55.488579 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:07:55.488590 systemd[1]: Created slice system-getty.slice.
Dec 13 02:07:55.488601 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:07:55.488612 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:07:55.488624 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:07:55.488636 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:07:55.488651 systemd[1]: Created slice user.slice.
Dec 13 02:07:55.488662 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:07:55.488672 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:07:55.488684 systemd[1]: Set up automount boot.automount.
Dec 13 02:07:55.488695 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:07:55.488705 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:07:55.488716 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:07:55.488726 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:07:55.488739 systemd[1]: Reached target integritysetup.target.
Dec 13 02:07:55.488749 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:07:55.488759 systemd[1]: Reached target remote-fs.target.
Dec 13 02:07:55.488771 systemd[1]: Reached target slices.target.
Dec 13 02:07:55.488782 systemd[1]: Reached target swap.target.
Dec 13 02:07:55.488792 systemd[1]: Reached target torcx.target.
Dec 13 02:07:55.488802 systemd[1]: Reached target veritysetup.target.
Dec 13 02:07:55.488812 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:07:55.488823 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:07:55.488834 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:07:55.488845 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:07:55.488856 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:07:55.488866 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:07:55.488878 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:07:55.488888 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:07:55.488899 systemd[1]: Mounting media.mount...
Dec 13 02:07:55.488909 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:07:55.488920 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:07:55.488930 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:07:55.488940 systemd[1]: Mounting tmp.mount...
Dec 13 02:07:55.488951 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:07:55.488962 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:07:55.488986 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:07:55.488996 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:07:55.489006 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:07:55.489017 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:07:55.489028 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:07:55.489038 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:07:55.489048 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:07:55.489059 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:07:55.489072 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:07:55.489083 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:07:55.489093 kernel: loop: module loaded
Dec 13 02:07:55.489103 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:07:55.489113 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:07:55.489124 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:07:55.489134 kernel: fuse: init (API version 7.34)
Dec 13 02:07:55.489144 systemd[1]: Starting systemd-journald.service...
Dec 13 02:07:55.489154 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:07:55.489164 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:07:55.489176 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:07:55.489186 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:07:55.489196 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:07:55.489207 systemd[1]: Stopped verity-setup.service.
Dec 13 02:07:55.489218 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:07:55.489228 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:07:55.489239 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:07:55.489251 systemd-journald[988]: Journal started
Dec 13 02:07:55.489288 systemd-journald[988]: Runtime Journal (/run/log/journal/4a1198873fc44700961c73a648a1f668) is 6.0M, max 48.5M, 42.5M free.
Dec 13 02:07:52.687000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:07:53.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:07:53.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:07:53.184000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:07:53.184000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:07:53.184000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:07:53.184000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:07:53.217000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:07:53.217000 audit[914]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0000240f2 a1=c00002a060 a2=c000028040 a3=32 items=0 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:07:53.217000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:07:53.218000 audit[914]: AVC avc: denied { associate } for pid=914 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:07:53.218000 audit[914]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000241c9 a2=1ed a3=0 items=2 ppid=897 pid=914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:07:53.218000 audit: CWD cwd="/"
Dec 13 02:07:53.218000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:07:53.218000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:07:53.218000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:07:55.349000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:07:55.349000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:07:55.350000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:07:55.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.362000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:07:55.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.464000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:07:55.464000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:07:55.464000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:07:55.464000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:07:55.464000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:07:55.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.486000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:07:55.486000 audit[988]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcc3ad2f30 a2=4000 a3=7ffcc3ad2fcc items=0 ppid=1 pid=988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:07:55.486000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:07:55.348006 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:07:53.216400 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:07:55.348019 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 02:07:53.216650 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:07:55.351948 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:07:53.216668 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:07:53.216699 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:07:53.216708 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:07:53.216739 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:07:53.216753 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:07:53.216947 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:07:53.217001 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:07:55.491254 systemd[1]: Started systemd-journald.service.
Dec 13 02:07:53.217014 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:07:55.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:53.217585 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:07:53.217619 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:07:53.217635 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:07:53.217650 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:07:53.217664 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:07:55.491788 systemd[1]: Mounted media.mount.
Dec 13 02:07:53.217677 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:07:55.071118 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:07:55.071374 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:07:55.071477 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:07:55.071632 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:07:55.071678 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:07:55.071730 /usr/lib/systemd/system-generators/torcx-generator[914]: time="2024-12-13T02:07:55Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 02:07:55.492691 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:07:55.493585 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:07:55.494495 systemd[1]: Mounted tmp.mount.
Dec 13 02:07:55.495534 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:07:55.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.496742 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:07:55.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.497880 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:07:55.498142 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:07:55.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.499366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:07:55.499574 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:07:55.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.500625 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:07:55.500816 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:07:55.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.501819 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:07:55.502033 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:07:55.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.503116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:07:55.503264 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:07:55.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.504277 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:07:55.504444 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:07:55.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.505544 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:07:55.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.506721 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:07:55.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.507894 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:07:55.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.509211 systemd[1]: Reached target network-pre.target.
Dec 13 02:07:55.511307 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:07:55.513169 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:07:55.513950 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:07:55.516336 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:07:55.518387 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:07:55.519259 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:07:55.520244 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:07:55.521150 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:07:55.522143 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:07:55.526125 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:07:55.528872 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:07:55.529842 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:07:55.531468 systemd-journald[988]: Time spent on flushing to /var/log/journal/4a1198873fc44700961c73a648a1f668 is 13.500ms for 1112 entries.
Dec 13 02:07:55.531468 systemd-journald[988]: System Journal (/var/log/journal/4a1198873fc44700961c73a648a1f668) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:07:55.556916 systemd-journald[988]: Received client request to flush runtime journal.
Dec 13 02:07:55.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.531497 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:07:55.534244 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:07:55.557429 udevadm[1018]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:07:55.536232 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:07:55.538411 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:07:55.541084 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:07:55.542521 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:07:55.557829 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:07:55.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.947855 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:07:55.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.948000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:07:55.948000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:07:55.948000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:07:55.948000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:07:55.950177 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:07:55.965614 systemd-udevd[1020]: Using default interface naming scheme 'v252'.
Dec 13 02:07:55.979398 systemd[1]: Started systemd-udevd.service.
Dec 13 02:07:55.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:07:55.980000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:07:55.982042 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:07:55.989645 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:07:55.987000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:07:55.987000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:07:55.988000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:07:56.003092 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 02:07:56.016676 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:07:56.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.035829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:07:56.047030 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:07:56.058019 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:07:56.062054 systemd-networkd[1030]: lo: Link UP Dec 13 02:07:56.062067 systemd-networkd[1030]: lo: Gained carrier Dec 13 02:07:56.062477 systemd-networkd[1030]: Enumeration completed Dec 13 02:07:56.062588 systemd[1]: Started systemd-networkd.service. Dec 13 02:07:56.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.063728 systemd-networkd[1030]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:56.064844 systemd-networkd[1030]: eth0: Link UP Dec 13 02:07:56.064853 systemd-networkd[1030]: eth0: Gained carrier Dec 13 02:07:56.063000 audit[1021]: AVC avc: denied { confidentiality } for pid=1021 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:07:56.078171 systemd-networkd[1030]: eth0: DHCPv4 address 10.0.0.140/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:56.063000 audit[1021]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559736da9900 a1=337fc a2=7f8e3cb4cbc5 a3=5 items=110 ppid=1020 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:56.063000 audit: CWD cwd="/" Dec 13 02:07:56.063000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=1 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=2 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=3 name=(null) inode=13249 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=4 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=5 name=(null) inode=13250 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=6 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=7 
name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=8 name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=9 name=(null) inode=13252 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=10 name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=11 name=(null) inode=13253 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=12 name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=13 name=(null) inode=13254 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=14 name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=15 name=(null) inode=13255 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=16 name=(null) inode=13251 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=17 name=(null) inode=13256 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=18 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=19 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=20 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=21 name=(null) inode=13258 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=22 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=23 name=(null) inode=13259 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=24 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=25 name=(null) inode=13260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=26 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=27 name=(null) inode=13261 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=28 name=(null) inode=13257 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=29 name=(null) inode=13262 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=30 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=31 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=32 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=33 name=(null) inode=13264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=34 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=35 name=(null) inode=13265 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=36 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=37 name=(null) inode=13266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=38 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=39 name=(null) inode=13267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=40 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=41 name=(null) inode=13268 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=42 name=(null) inode=13248 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=43 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=44 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=45 name=(null) inode=13270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=46 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=47 name=(null) inode=13271 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=48 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=49 name=(null) inode=13272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=50 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=51 name=(null) inode=13273 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=52 name=(null) inode=13269 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=53 name=(null) inode=13274 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=55 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=56 
name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=57 name=(null) inode=13276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=58 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=59 name=(null) inode=13277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=60 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=61 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=62 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=63 name=(null) inode=13279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=64 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=65 name=(null) inode=13280 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=66 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=67 name=(null) inode=13281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=68 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=69 name=(null) inode=13282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=70 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=71 name=(null) inode=13283 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=72 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.108022 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 02:07:56.063000 audit: PATH item=73 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=74 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=75 name=(null) inode=13285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=76 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=77 name=(null) inode=13286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=78 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=79 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=80 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=81 name=(null) inode=13288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=82 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=83 name=(null) inode=13289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=84 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=85 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=86 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=87 name=(null) inode=13291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=88 name=(null) inode=13290 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=89 name=(null) inode=13292 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=90 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=91 name=(null) inode=13293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=92 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=93 name=(null) inode=13294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=94 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=95 name=(null) inode=13295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=96 name=(null) inode=13275 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=97 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=98 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=99 name=(null) inode=13297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=100 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=101 name=(null) inode=13298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=102 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=103 name=(null) inode=13299 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=104 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=105 name=(null) inode=13300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=106 name=(null) inode=13296 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=107 name=(null) inode=13301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PATH item=109 name=(null) inode=13304 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:07:56.063000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:07:56.111006 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:07:56.131996 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 02:07:56.140357 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 02:07:56.140618 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 02:07:56.140830 kernel: kvm: Nested Virtualization enabled Dec 13 02:07:56.140862 kernel: SVM: kvm: Nested Paging enabled Dec 13 02:07:56.140900 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 02:07:56.140940 kernel: SVM: Virtual GIF supported Dec 13 02:07:56.156009 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:07:56.183390 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:07:56.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.185573 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:07:56.193541 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:56.221833 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:07:56.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.222847 systemd[1]: Reached target cryptsetup.target. Dec 13 02:07:56.224587 systemd[1]: Starting lvm2-activation.service... Dec 13 02:07:56.228587 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:56.253519 systemd[1]: Finished lvm2-activation.service. Dec 13 02:07:56.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.254464 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:07:56.255341 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
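[editor's note] The mode= fields in the audit PATH items above are octal st_mode values, combining a file-type prefix with the permission bits: 040750 is a directory with mode 750, 0100640 a regular file with mode 640. Python's stat module decodes them directly; a quick sketch over three values taken from the records above:

import stat

for raw in ("040750", "0100640", "0100440"):  # values seen in the PATH items above
    mode = int(raw, 8)
    kind = "dir" if stat.S_ISDIR(mode) else "file" if stat.S_ISREG(mode) else "other"
    print(raw, kind, stat.filemode(mode), oct(stat.S_IMODE(mode)))
# 040750  dir  drwxr-x--- 0o750
# 0100640 file -rw-r----- 0o640
# 0100440 file -r--r----- 0o440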
Dec 13 02:07:56.255374 systemd[1]: Reached target local-fs.target. Dec 13 02:07:56.256188 systemd[1]: Reached target machines.target. Dec 13 02:07:56.257900 systemd[1]: Starting ldconfig.service... Dec 13 02:07:56.258821 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.258876 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:56.259799 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:07:56.261512 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:07:56.263814 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:07:56.265738 systemd[1]: Starting systemd-sysext.service... Dec 13 02:07:56.267021 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl) Dec 13 02:07:56.268130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:07:56.270415 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:07:56.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.278192 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:07:56.281532 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:07:56.281659 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:07:56.294050 kernel: loop0: detected capacity change from 0 to 205544 Dec 13 02:07:56.302517 systemd-fsck[1066]: fsck.fat 4.2 (2021-01-31) Dec 13 02:07:56.302517 systemd-fsck[1066]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 02:07:56.303842 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:07:56.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.308626 systemd[1]: Mounting boot.mount... Dec 13 02:07:56.316366 systemd[1]: Mounted boot.mount. Dec 13 02:07:56.471755 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:07:56.472351 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:07:56.474027 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:07:56.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.474701 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:07:56.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.489997 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 02:07:56.494724 (sd-sysext)[1071]: Using extensions 'kubernetes'. Dec 13 02:07:56.495137 (sd-sysext)[1071]: Merged extensions into '/usr'. Dec 13 02:07:56.511421 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
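[editor's note] The (sd-sysext) records above show a 'kubernetes' system extension being merged into '/usr'. systemd-sysext only merges images whose extension-release metadata is compatible with the host's os-release; the sketch below illustrates the basic ID comparison (paths are hypothetical and the matching rules are simplified — the real logic also honors SYSEXT_LEVEL and the ID=_any wildcard):

def parse_release(path):
    # Parse the simple KEY=value format shared by os-release and extension-release files.
    fields = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, value = line.split("=", 1)
                fields[key] = value.strip('"')
    return fields

host = parse_release("/etc/os-release")
ext = parse_release("/run/extension-release.kubernetes")  # hypothetical path
print("mergeable:", ext.get("ID") == host.get("ID"))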
Dec 13 02:07:56.513046 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:07:56.514089 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.515459 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:56.517765 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:56.519616 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:56.520479 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.520595 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:56.520709 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.523591 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:07:56.524694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:56.524826 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:56.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.526117 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:56.526216 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:56.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.527555 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:56.527656 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:56.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.529212 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:56.529305 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.530178 systemd[1]: Finished systemd-sysext.service. Dec 13 02:07:56.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.532249 systemd[1]: Starting ensure-sysext.service... 
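[editor's note] The modprobe@dm_mod, modprobe@efi_pstore, and modprobe@loop template instances above each log a SERVICE_START immediately followed by a SERVICE_STOP, because each oneshot modprobe run completes at once. Pairing the audit events per unit over a capture of this console log (a log-analysis sketch; the file path is hypothetical) confirms that every start has a matching stop:

import collections
import re

log = open("boot.log").read()  # hypothetical capture of this console log
events = re.findall(r"(SERVICE_START|SERVICE_STOP) .*?unit=([\w@.\\-]+)", log)

balance = collections.Counter()
for kind, unit in events:
    balance[unit] += 1 if kind == "SERVICE_START" else -1

# Oneshot units such as modprobe@loop should net out to zero starts minus stops.
print({unit: n for unit, n in balance.items() if n != 0})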
Dec 13 02:07:56.534177 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:07:56.536041 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:07:56.540453 systemd[1]: Finished ldconfig.service. Dec 13 02:07:56.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.541370 systemd[1]: Reloading. Dec 13 02:07:56.545953 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:07:56.548172 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:07:56.551057 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:07:56.590161 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2024-12-13T02:07:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:07:56.590189 /usr/lib/systemd/system-generators/torcx-generator[1098]: time="2024-12-13T02:07:56Z" level=info msg="torcx already run" Dec 13 02:07:56.656807 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:07:56.656828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:07:56.674678 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:07:56.726000 audit: BPF prog-id=30 op=LOAD Dec 13 02:07:56.726000 audit: BPF prog-id=27 op=UNLOAD Dec 13 02:07:56.726000 audit: BPF prog-id=31 op=LOAD Dec 13 02:07:56.726000 audit: BPF prog-id=32 op=LOAD Dec 13 02:07:56.726000 audit: BPF prog-id=28 op=UNLOAD Dec 13 02:07:56.726000 audit: BPF prog-id=29 op=UNLOAD Dec 13 02:07:56.728000 audit: BPF prog-id=33 op=LOAD Dec 13 02:07:56.728000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:07:56.728000 audit: BPF prog-id=34 op=LOAD Dec 13 02:07:56.728000 audit: BPF prog-id=35 op=LOAD Dec 13 02:07:56.728000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:07:56.728000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:07:56.728000 audit: BPF prog-id=36 op=LOAD Dec 13 02:07:56.728000 audit: BPF prog-id=37 op=LOAD Dec 13 02:07:56.728000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:07:56.728000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:07:56.729000 audit: BPF prog-id=38 op=LOAD Dec 13 02:07:56.729000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:07:56.732655 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:07:56.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.737583 systemd[1]: Starting audit-rules.service... Dec 13 02:07:56.739362 systemd[1]: Starting clean-ca-certificates.service... 
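[editor's note] During the Reloading pass above, systemd warns that locksmithd.service still uses CPUShares= and MemoryLimit=, whose replacements under the unified cgroup hierarchy are CPUWeight= and MemoryMax=. A throwaway sketch for flagging such directives in unit files (the mapping table covers only the two directives this log complains about, and it flags rather than converts, since the numeric scales differ):

import pathlib

DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

for unit in pathlib.Path("/usr/lib/systemd/system").glob("*.service"):
    for lineno, line in enumerate(unit.read_text().splitlines(), 1):
        for old, new in DEPRECATED.items():
            if line.lstrip().startswith(old):
                print(f"{unit}:{lineno}: {old} is deprecated, use {new} instead")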
Dec 13 02:07:56.741470 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:07:56.742000 audit: BPF prog-id=39 op=LOAD Dec 13 02:07:56.743944 systemd[1]: Starting systemd-resolved.service... Dec 13 02:07:56.744000 audit: BPF prog-id=40 op=LOAD Dec 13 02:07:56.746057 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:07:56.747757 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:07:56.749235 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:07:56.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.751000 audit[1147]: SYSTEM_BOOT pid=1147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.752554 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:56.755606 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.755816 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.757419 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:56.759373 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:56.761345 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:56.762225 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.762379 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:56.762518 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:56.762624 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.764140 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:07:56.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.765689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:56.765864 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:56.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.767352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:56.771162 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:56.772670 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 02:07:56.772798 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:56.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.774210 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:56.774369 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.775954 systemd[1]: Starting systemd-update-done.service... Dec 13 02:07:56.777514 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:07:56.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:07:56.779838 augenrules[1164]: No rules Dec 13 02:07:56.778000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:07:56.778000 audit[1164]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffff98ea530 a2=420 a3=0 items=0 ppid=1140 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:07:56.778000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:07:56.780442 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.780615 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.781817 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:56.783576 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:56.785463 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:56.786254 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.786354 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:56.786443 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:56.786510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.787343 systemd[1]: Finished systemd-update-done.service. 
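[editor's note] The audit PROCTITLE value above is the process's argv encoded as hex with NUL separators. Decoding the exact string from this log recovers the auditctl invocation that loaded /etc/audit/audit.rules:

# Hex string copied verbatim from the PROCTITLE record above.
raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(raw).split(b"\x00")
print([arg.decode() for arg in argv])
# -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']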
Dec 13 02:07:56.788705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:56.788806 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:56.790040 systemd[1]: Finished audit-rules.service. Dec 13 02:07:56.791172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:56.791267 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:56.792559 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:56.792654 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:56.793858 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:56.793952 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.796547 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.796830 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.798595 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:07:56.800636 systemd[1]: Starting modprobe@drm.service... Dec 13 02:07:56.802413 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:07:56.804267 systemd[1]: Starting modprobe@loop.service... Dec 13 02:07:56.805400 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:07:56.805500 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:56.806639 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:07:56.807644 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:56.807747 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:07:56.808730 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:56.808844 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:07:56.809964 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:07:57.310796 systemd-timesyncd[1145]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 02:07:57.311052 systemd-timesyncd[1145]: Initial clock synchronization to Fri 2024-12-13 02:07:57.310725 UTC. Dec 13 02:07:57.312017 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:07:57.312125 systemd[1]: Finished modprobe@drm.service. Dec 13 02:07:57.313316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:57.313417 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:07:57.314678 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:57.314793 systemd[1]: Finished modprobe@loop.service. Dec 13 02:07:57.314883 systemd-resolved[1144]: Positive Trust Anchors: Dec 13 02:07:57.314898 systemd-resolved[1144]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:57.314925 systemd-resolved[1144]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:07:57.316197 systemd[1]: Reached target time-set.target. Dec 13 02:07:57.317195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:57.317274 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:07:57.318256 systemd[1]: Finished ensure-sysext.service. Dec 13 02:07:57.322780 systemd-resolved[1144]: Defaulting to hostname 'linux'. Dec 13 02:07:57.324164 systemd[1]: Started systemd-resolved.service. Dec 13 02:07:57.325129 systemd[1]: Reached target network.target. Dec 13 02:07:57.325931 systemd[1]: Reached target nss-lookup.target. Dec 13 02:07:57.326771 systemd[1]: Reached target sysinit.target. Dec 13 02:07:57.327655 systemd[1]: Started motdgen.path. Dec 13 02:07:57.328393 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:07:57.329643 systemd[1]: Started logrotate.timer. Dec 13 02:07:57.330447 systemd[1]: Started mdadm.timer. Dec 13 02:07:57.331139 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:07:57.331994 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:07:57.332019 systemd[1]: Reached target paths.target. Dec 13 02:07:57.332756 systemd[1]: Reached target timers.target. Dec 13 02:07:57.333813 systemd[1]: Listening on dbus.socket. Dec 13 02:07:57.335520 systemd[1]: Starting docker.socket... Dec 13 02:07:57.338371 systemd[1]: Listening on sshd.socket. Dec 13 02:07:57.339244 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:57.339626 systemd[1]: Listening on docker.socket. Dec 13 02:07:57.340453 systemd[1]: Reached target sockets.target. Dec 13 02:07:57.341273 systemd[1]: Reached target basic.target. Dec 13 02:07:57.342072 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:07:57.342097 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:07:57.342984 systemd[1]: Starting containerd.service... Dec 13 02:07:57.344676 systemd[1]: Starting dbus.service... Dec 13 02:07:57.346412 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:07:57.348224 systemd[1]: Starting extend-filesystems.service... Dec 13 02:07:57.353693 jq[1182]: false Dec 13 02:07:57.349200 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:07:57.350282 systemd[1]: Starting motdgen.service... Dec 13 02:07:57.351994 systemd[1]: Starting prepare-helm.service... Dec 13 02:07:57.353673 systemd[1]: Starting ssh-key-proc-cmdline.service... 
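[editor's note] The positive trust anchor systemd-resolved logs above is the DNS root zone's DS record: owner ".", key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the 32-byte digest in hex. Splitting the record as logged (a parsing sketch, not a DNSSEC validator):

record = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _cls, rrtype, key_tag, algorithm, digest_type, digest = record.split()
assert rrtype == "DS" and len(digest) == 64  # SHA-256 digest is 32 bytes = 64 hex chars
print(owner, key_tag, algorithm, digest_type)  # -> . 20326 8 2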
Dec 13 02:07:57.355625 systemd[1]: Starting sshd-keygen.service... Dec 13 02:07:57.358779 systemd[1]: Starting systemd-logind.service... Dec 13 02:07:57.361684 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:07:57.361738 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:07:57.362502 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:07:57.363202 systemd[1]: Starting update-engine.service... Dec 13 02:07:57.365064 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:07:57.367692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:07:57.367839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:07:57.368138 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:07:57.368458 systemd[1]: Finished motdgen.service. Dec 13 02:07:57.370047 jq[1200]: true Dec 13 02:07:57.370266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:07:57.370419 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:07:57.373541 systemd[1]: Started dbus.service. Dec 13 02:07:57.373425 dbus-daemon[1181]: [system] SELinux support is enabled Dec 13 02:07:57.378880 tar[1202]: linux-amd64/helm Dec 13 02:07:57.376250 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:07:57.376270 systemd[1]: Reached target system-config.target. Dec 13 02:07:57.377319 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:07:57.377337 systemd[1]: Reached target user-config.target. Dec 13 02:07:57.382602 jq[1203]: true Dec 13 02:07:57.384019 extend-filesystems[1183]: Found loop1 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found sr0 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda1 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda2 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda3 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found usr Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda4 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda6 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda7 Dec 13 02:07:57.384019 extend-filesystems[1183]: Found vda9 Dec 13 02:07:57.384019 extend-filesystems[1183]: Checking size of /dev/vda9 Dec 13 02:07:57.424527 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 02:07:57.433837 update_engine[1198]: I1213 02:07:57.412683 1198 main.cc:92] Flatcar Update Engine starting Dec 13 02:07:57.433837 update_engine[1198]: I1213 02:07:57.414502 1198 update_check_scheduler.cc:74] Next update check in 10m18s Dec 13 02:07:57.414475 systemd[1]: Started update-engine.service. Dec 13 02:07:57.434166 extend-filesystems[1183]: Resized partition /dev/vda9 Dec 13 02:07:57.417276 systemd[1]: Started locksmithd.service. 
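[editor's note] The EXT4 resize above grows /dev/vda9 from 553472 to 1864699 blocks, and resize2fs confirms 4k blocks, so the root filesystem expands from roughly 2.1 GiB to 7.1 GiB on first boot. A quick check of that arithmetic:

BLOCK = 4096  # "(4k)" block size per the resize2fs output above
for blocks in (553472, 1864699):
    print(blocks, "blocks =", round(blocks * BLOCK / 2**30, 2), "GiB")
# 553472 blocks = 2.11 GiB
# 1864699 blocks = 7.11 GiB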
Dec 13 02:07:57.438544 env[1205]: time="2024-12-13T02:07:57.434155700Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:07:57.438741 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:07:57.430349 systemd-logind[1196]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:07:57.430365 systemd-logind[1196]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:07:57.430607 systemd-logind[1196]: New seat seat0. Dec 13 02:07:57.432026 systemd[1]: Started systemd-logind.service. Dec 13 02:07:57.441927 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 02:07:57.464698 env[1205]: time="2024-12-13T02:07:57.452790687Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:07:57.464791 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:07:57.464791 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:07:57.464791 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 02:07:57.470729 extend-filesystems[1183]: Resized filesystem in /dev/vda9 Dec 13 02:07:57.473147 bash[1236]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:57.465179 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.466144710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.469804123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.469840832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470069952Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470085621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470102463Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470113293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470178716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470373711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:57.473287 env[1205]: time="2024-12-13T02:07:57.470472086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:57.465342 systemd[1]: Finished extend-filesystems.service. Dec 13 02:07:57.473570 env[1205]: time="2024-12-13T02:07:57.470484900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:07:57.473570 env[1205]: time="2024-12-13T02:07:57.470523031Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:07:57.473570 env[1205]: time="2024-12-13T02:07:57.470534062Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:07:57.470569 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.475902149Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.475937185Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.475950309Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.475973854Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.475987559Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476000293Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476011695Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476025080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476038605Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476051569Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476062830Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476072969Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476154843Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:07:57.477233 env[1205]: time="2024-12-13T02:07:57.476218843Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476426663Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476449506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476461608Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476502896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476515109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476528353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476539144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476550715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476561826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476572316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476614665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476630164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476751732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476766911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477551 env[1205]: time="2024-12-13T02:07:57.476778262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:07:57.477884 env[1205]: time="2024-12-13T02:07:57.476788331Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:07:57.477884 env[1205]: time="2024-12-13T02:07:57.476801706Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:07:57.477884 env[1205]: time="2024-12-13T02:07:57.476811785Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:07:57.477884 env[1205]: time="2024-12-13T02:07:57.476827394Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:07:57.477884 env[1205]: time="2024-12-13T02:07:57.476865636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:07:57.477996 env[1205]: time="2024-12-13T02:07:57.477042277Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:07:57.477996 env[1205]: time="2024-12-13T02:07:57.477085879Z" level=info msg="Connect containerd service" Dec 13 02:07:57.477996 env[1205]: time="2024-12-13T02:07:57.477116987Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:07:57.479917 env[1205]: time="2024-12-13T02:07:57.479799077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:07:57.484332 env[1205]: time="2024-12-13T02:07:57.484220900Z" level=info msg="Start subscribing containerd event" Dec 13 02:07:57.484332 env[1205]: time="2024-12-13T02:07:57.484285120Z" level=info msg="Start recovering state" Dec 13 02:07:57.484408 env[1205]: time="2024-12-13T02:07:57.484358157Z" level=info msg="Start event monitor" Dec 13 02:07:57.484408 env[1205]: time="2024-12-13T02:07:57.484376482Z" level=info msg="Start snapshots syncer" Dec 13 02:07:57.484408 env[1205]: time="2024-12-13T02:07:57.484388634Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:07:57.484408 env[1205]: time="2024-12-13T02:07:57.484397431Z" level=info msg="Start streaming server" Dec 13 02:07:57.484773 env[1205]: time="2024-12-13T02:07:57.484738140Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 02:07:57.484824 env[1205]: time="2024-12-13T02:07:57.484808201Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:07:57.484961 systemd[1]: Started containerd.service. Dec 13 02:07:57.488645 env[1205]: time="2024-12-13T02:07:57.484909010Z" level=info msg="containerd successfully booted in 0.051254s" Dec 13 02:07:57.495390 locksmithd[1235]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:07:57.809886 tar[1202]: linux-amd64/LICENSE Dec 13 02:07:57.809886 tar[1202]: linux-amd64/README.md Dec 13 02:07:57.815201 systemd[1]: Finished prepare-helm.service. Dec 13 02:07:57.929533 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:07:57.948308 systemd[1]: Finished sshd-keygen.service. Dec 13 02:07:57.950624 systemd[1]: Starting issuegen.service... Dec 13 02:07:57.955768 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:07:57.955894 systemd[1]: Finished issuegen.service. Dec 13 02:07:57.958287 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:07:57.963672 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:07:57.966185 systemd[1]: Started getty@tty1.service. Dec 13 02:07:57.968224 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:07:57.969324 systemd[1]: Reached target getty.target. Dec 13 02:07:58.265785 systemd-networkd[1030]: eth0: Gained IPv6LL Dec 13 02:07:58.267667 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:07:58.269393 systemd[1]: Reached target network-online.target. Dec 13 02:07:58.271791 systemd[1]: Starting kubelet.service... Dec 13 02:07:58.836685 systemd[1]: Started kubelet.service. Dec 13 02:07:58.837993 systemd[1]: Reached target multi-user.target. Dec 13 02:07:58.840078 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:07:58.847391 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:07:58.847527 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:07:58.848723 systemd[1]: Startup finished in 826ms (kernel) + 5.773s (initrd) + 5.701s (userspace) = 12.301s. Dec 13 02:07:59.326595 kubelet[1262]: E1213 02:07:59.326508 1262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:59.328228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:59.328396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:07.109157 systemd[1]: Created slice system-sshd.slice. Dec 13 02:08:07.110218 systemd[1]: Started sshd@0-10.0.0.140:22-10.0.0.1:48676.service. Dec 13 02:08:07.142517 sshd[1271]: Accepted publickey for core from 10.0.0.1 port 48676 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:08:07.144002 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.151339 systemd[1]: Created slice user-500.slice. Dec 13 02:08:07.152361 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:08:07.153745 systemd-logind[1196]: New session 1 of user core. Dec 13 02:08:07.159841 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:08:07.160964 systemd[1]: Starting user@500.service... 
Dec 13 02:08:07.163313 (systemd)[1274]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.228327 systemd[1274]: Queued start job for default target default.target. Dec 13 02:08:07.228777 systemd[1274]: Reached target paths.target. Dec 13 02:08:07.228796 systemd[1274]: Reached target sockets.target. Dec 13 02:08:07.228808 systemd[1274]: Reached target timers.target. Dec 13 02:08:07.228818 systemd[1274]: Reached target basic.target. Dec 13 02:08:07.228851 systemd[1274]: Reached target default.target. Dec 13 02:08:07.228872 systemd[1274]: Startup finished in 60ms. Dec 13 02:08:07.228943 systemd[1]: Started user@500.service. Dec 13 02:08:07.229851 systemd[1]: Started session-1.scope. Dec 13 02:08:07.280670 systemd[1]: Started sshd@1-10.0.0.140:22-10.0.0.1:48684.service. Dec 13 02:08:07.310559 sshd[1283]: Accepted publickey for core from 10.0.0.1 port 48684 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:08:07.312060 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.315588 systemd-logind[1196]: New session 2 of user core. Dec 13 02:08:07.316971 systemd[1]: Started session-2.scope. Dec 13 02:08:07.369709 sshd[1283]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:07.372451 systemd[1]: sshd@1-10.0.0.140:22-10.0.0.1:48684.service: Deactivated successfully. Dec 13 02:08:07.373101 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:08:07.373720 systemd-logind[1196]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:08:07.374680 systemd[1]: Started sshd@2-10.0.0.140:22-10.0.0.1:48696.service. Dec 13 02:08:07.375441 systemd-logind[1196]: Removed session 2. Dec 13 02:08:07.402527 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 48696 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:08:07.403473 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.406651 systemd-logind[1196]: New session 3 of user core. Dec 13 02:08:07.407400 systemd[1]: Started session-3.scope. Dec 13 02:08:07.455953 sshd[1289]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:07.458664 systemd[1]: sshd@2-10.0.0.140:22-10.0.0.1:48696.service: Deactivated successfully. Dec 13 02:08:07.459137 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:08:07.459619 systemd-logind[1196]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:08:07.460556 systemd[1]: Started sshd@3-10.0.0.140:22-10.0.0.1:48702.service. Dec 13 02:08:07.461180 systemd-logind[1196]: Removed session 3. Dec 13 02:08:07.488956 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 48702 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:08:07.490074 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.493438 systemd-logind[1196]: New session 4 of user core. Dec 13 02:08:07.494224 systemd[1]: Started session-4.scope. Dec 13 02:08:07.548555 sshd[1295]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:07.551250 systemd[1]: sshd@3-10.0.0.140:22-10.0.0.1:48702.service: Deactivated successfully. Dec 13 02:08:07.551805 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:08:07.552250 systemd-logind[1196]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:08:07.553336 systemd[1]: Started sshd@4-10.0.0.140:22-10.0.0.1:48714.service. 
Dec 13 02:08:07.553986 systemd-logind[1196]: Removed session 4. Dec 13 02:08:07.581743 sshd[1301]: Accepted publickey for core from 10.0.0.1 port 48714 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:08:07.582893 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:08:07.586248 systemd-logind[1196]: New session 5 of user core. Dec 13 02:08:07.587066 systemd[1]: Started session-5.scope. Dec 13 02:08:07.732313 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:08:07.732510 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:08:07.764742 systemd[1]: Starting docker.service... Dec 13 02:08:07.813073 env[1317]: time="2024-12-13T02:08:07.813015486Z" level=info msg="Starting up" Dec 13 02:08:07.814285 env[1317]: time="2024-12-13T02:08:07.814254270Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:08:07.814285 env[1317]: time="2024-12-13T02:08:07.814273135Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:08:07.814379 env[1317]: time="2024-12-13T02:08:07.814295757Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:08:07.814379 env[1317]: time="2024-12-13T02:08:07.814315434Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:08:07.817805 env[1317]: time="2024-12-13T02:08:07.817775664Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:08:07.817805 env[1317]: time="2024-12-13T02:08:07.817796723Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:08:07.817900 env[1317]: time="2024-12-13T02:08:07.817812753Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:08:07.817900 env[1317]: time="2024-12-13T02:08:07.817821750Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:08:07.850550 env[1317]: time="2024-12-13T02:08:07.850501115Z" level=info msg="Loading containers: start." Dec 13 02:08:08.618603 kernel: Initializing XFRM netlink socket Dec 13 02:08:08.677417 env[1317]: time="2024-12-13T02:08:08.677369920Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:08:08.724226 systemd-networkd[1030]: docker0: Link UP Dec 13 02:08:08.854959 env[1317]: time="2024-12-13T02:08:08.854922584Z" level=info msg="Loading containers: done." Dec 13 02:08:08.960991 env[1317]: time="2024-12-13T02:08:08.960902795Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:08:08.961113 env[1317]: time="2024-12-13T02:08:08.961053037Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:08:08.961159 env[1317]: time="2024-12-13T02:08:08.961146011Z" level=info msg="Daemon has completed initialization" Dec 13 02:08:09.178403 systemd[1]: Started docker.service. Dec 13 02:08:09.186537 env[1317]: time="2024-12-13T02:08:09.186464584Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:08:09.594357 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
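With docker up, the "Scheduled restart job" line above begins a kubelet crash loop: the unit starts, exits because /var/lib/kubelet/config.yaml does not exist yet (that file normally appears only once the node is bootstrapped, e.g. by kubeadm, which matches the successful start later at 02:08:36), and systemd schedules another attempt. A minimal sketch of the failing check, with the path taken from the error text:

    import os
    import sys

    CONFIG = "/var/lib/kubelet/config.yaml"  # path named in the kubelet error above
    if not os.path.isfile(CONFIG):
        # Mirrors the fatal error kubelet logs before systemd restarts it.
        sys.exit(f"failed to load Kubelet config file {CONFIG}: "
                 "no such file or directory")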
Dec 13 02:08:09.594627 systemd[1]: Stopped kubelet.service. Dec 13 02:08:09.596471 systemd[1]: Starting kubelet.service... Dec 13 02:08:09.748090 systemd[1]: Started kubelet.service. Dec 13 02:08:09.831169 kubelet[1447]: E1213 02:08:09.831080 1447 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:09.835002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:09.835191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:10.554605 env[1205]: time="2024-12-13T02:08:10.554508685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 02:08:11.802200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615954827.mount: Deactivated successfully. Dec 13 02:08:13.113090 env[1205]: time="2024-12-13T02:08:13.113022281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:13.114912 env[1205]: time="2024-12-13T02:08:13.114866299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:13.116467 env[1205]: time="2024-12-13T02:08:13.116436294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:13.118052 env[1205]: time="2024-12-13T02:08:13.118007761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:13.118671 env[1205]: time="2024-12-13T02:08:13.118642240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 02:08:13.119951 env[1205]: time="2024-12-13T02:08:13.119923804Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 02:08:14.797409 env[1205]: time="2024-12-13T02:08:14.797340870Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:14.799104 env[1205]: time="2024-12-13T02:08:14.799078037Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:14.800950 env[1205]: time="2024-12-13T02:08:14.800897119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:14.802546 env[1205]: time="2024-12-13T02:08:14.802501598Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:14.803825 env[1205]: time="2024-12-13T02:08:14.803784554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 02:08:14.804643 env[1205]: time="2024-12-13T02:08:14.804615282Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 02:08:17.203406 env[1205]: time="2024-12-13T02:08:17.203332182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:17.205297 env[1205]: time="2024-12-13T02:08:17.205247534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:17.207082 env[1205]: time="2024-12-13T02:08:17.207031249Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:17.208649 env[1205]: time="2024-12-13T02:08:17.208615631Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:17.209399 env[1205]: time="2024-12-13T02:08:17.209367871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 02:08:17.209922 env[1205]: time="2024-12-13T02:08:17.209894659Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 02:08:18.399096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2909083427.mount: Deactivated successfully. Dec 13 02:08:20.051140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:08:20.051420 systemd[1]: Stopped kubelet.service. Dec 13 02:08:20.053567 systemd[1]: Starting kubelet.service... Dec 13 02:08:20.136015 systemd[1]: Started kubelet.service. Dec 13 02:08:20.169779 kubelet[1463]: E1213 02:08:20.169722 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:20.171598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:20.171723 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
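Every pull above ends the same way: a PullImage line that resolves the tag to an image ID digest. These containerd lines are logfmt-style key=value records, so they are easy to mine; a small sketch that extracts the level and msg fields from one of them (quoting rules simplified, not a full logfmt parser):

    import re

    line = ('time="2024-12-13T02:08:14.804615282Z" level=info '
            'msg="PullImage \\"registry.k8s.io/kube-scheduler:v1.31.4\\""')
    m = re.search(r'level=(\w+) +msg="((?:[^"\\]|\\.)*)"', line)
    print(m.group(1), "->", m.group(2))
    # info -> PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"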
Dec 13 02:08:22.460323 env[1205]: time="2024-12-13T02:08:22.460252324Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:22.592954 env[1205]: time="2024-12-13T02:08:22.592873075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:22.717874 env[1205]: time="2024-12-13T02:08:22.717739464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:22.793352 env[1205]: time="2024-12-13T02:08:22.793285229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:22.793937 env[1205]: time="2024-12-13T02:08:22.793890705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 02:08:22.794404 env[1205]: time="2024-12-13T02:08:22.794375444Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:08:26.344453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991471965.mount: Deactivated successfully. Dec 13 02:08:27.472689 env[1205]: time="2024-12-13T02:08:27.472608190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:27.474629 env[1205]: time="2024-12-13T02:08:27.474594666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:27.476410 env[1205]: time="2024-12-13T02:08:27.476370256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:27.478138 env[1205]: time="2024-12-13T02:08:27.478100150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:27.478817 env[1205]: time="2024-12-13T02:08:27.478790524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:08:27.479301 env[1205]: time="2024-12-13T02:08:27.479269362Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 02:08:28.004339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2043362047.mount: Deactivated successfully. 
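The tmpmount unit names above look mangled but are systemd's normal path escaping: "/" becomes the unit-name separator "-", so a literal "-" inside a path component has to be escaped as \x2d. A simplified sketch of that encoding (the real systemd-escape also handles other non-alphanumeric characters):

    def mount_unit_name(path: str) -> str:
        # '/' separates components in the unit name, so a literal '-' is escaped.
        parts = [p.replace("-", "\\x2d") for p in path.strip("/").split("/")]
        return "-".join(parts) + ".mount"

    print(mount_unit_name("/var/lib/containerd/tmpmounts/containerd-mount991471965"))
    # var-lib-containerd-tmpmounts-containerd\x2dmount991471965.mount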
Dec 13 02:08:28.010305 env[1205]: time="2024-12-13T02:08:28.010253547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:28.012297 env[1205]: time="2024-12-13T02:08:28.012246154Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:28.013699 env[1205]: time="2024-12-13T02:08:28.013665897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:28.015111 env[1205]: time="2024-12-13T02:08:28.015070771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:28.015648 env[1205]: time="2024-12-13T02:08:28.015612317Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 02:08:28.016085 env[1205]: time="2024-12-13T02:08:28.016064485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 02:08:28.812017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount306780408.mount: Deactivated successfully. Dec 13 02:08:30.300813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:08:30.301025 systemd[1]: Stopped kubelet.service. Dec 13 02:08:30.302572 systemd[1]: Starting kubelet.service... Dec 13 02:08:30.377061 systemd[1]: Started kubelet.service. Dec 13 02:08:30.581840 kubelet[1475]: E1213 02:08:30.581686 1475 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:30.583930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:30.584091 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
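That "restart counter is at 3" marks the third trip through the same crash loop. The gaps between the four "Started kubelet.service." timestamps so far are steady at roughly ten seconds, which is consistent with a fixed restart delay of about 10s in the unit; the delay is an inference from the spacing, not something the log states. Checking the arithmetic:

    from datetime import datetime

    # "Started kubelet.service." timestamps copied from the log above.
    starts = ["02:07:58.836685", "02:08:09.748090", "02:08:20.136015", "02:08:30.377061"]
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
    print([round((b - a).total_seconds(), 1) for a, b in zip(ts, ts[1:])])
    # [10.9, 10.4, 10.2]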
Dec 13 02:08:34.100701 env[1205]: time="2024-12-13T02:08:34.100627795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:34.102381 env[1205]: time="2024-12-13T02:08:34.102349434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:34.104381 env[1205]: time="2024-12-13T02:08:34.104340231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:34.105981 env[1205]: time="2024-12-13T02:08:34.105955826Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:34.106665 env[1205]: time="2024-12-13T02:08:34.106643498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 02:08:36.210496 systemd[1]: Stopped kubelet.service. Dec 13 02:08:36.212440 systemd[1]: Starting kubelet.service... Dec 13 02:08:36.231589 systemd[1]: Reloading. Dec 13 02:08:36.294331 /usr/lib/systemd/system-generators/torcx-generator[1533]: time="2024-12-13T02:08:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:08:36.294362 /usr/lib/systemd/system-generators/torcx-generator[1533]: time="2024-12-13T02:08:36Z" level=info msg="torcx already run" Dec 13 02:08:36.552029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:08:36.552044 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:08:36.568905 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:08:36.646475 systemd[1]: Started kubelet.service. Dec 13 02:08:36.647681 systemd[1]: Stopping kubelet.service... Dec 13 02:08:36.647906 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:08:36.648040 systemd[1]: Stopped kubelet.service. Dec 13 02:08:36.649205 systemd[1]: Starting kubelet.service... Dec 13 02:08:36.725541 systemd[1]: Started kubelet.service. Dec 13 02:08:36.757887 kubelet[1579]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:08:36.757887 kubelet[1579]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 02:08:36.757887 kubelet[1579]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:08:36.758355 kubelet[1579]: I1213 02:08:36.757934 1579 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:08:37.205265 kubelet[1579]: I1213 02:08:37.205225 1579 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:08:37.205265 kubelet[1579]: I1213 02:08:37.205255 1579 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:08:37.205531 kubelet[1579]: I1213 02:08:37.205512 1579 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:08:37.223363 kubelet[1579]: I1213 02:08:37.223328 1579 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:08:37.224283 kubelet[1579]: E1213 02:08:37.224254 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:37.230759 kubelet[1579]: E1213 02:08:37.230713 1579 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:08:37.230759 kubelet[1579]: I1213 02:08:37.230750 1579 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:08:37.235079 kubelet[1579]: I1213 02:08:37.235050 1579 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:08:37.236119 kubelet[1579]: I1213 02:08:37.236092 1579 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:08:37.236285 kubelet[1579]: I1213 02:08:37.236248 1579 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:08:37.236462 kubelet[1579]: I1213 02:08:37.236276 1579 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:08:37.236602 kubelet[1579]: I1213 02:08:37.236467 1579 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:08:37.236602 kubelet[1579]: I1213 02:08:37.236478 1579 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:08:37.236684 kubelet[1579]: I1213 02:08:37.236606 1579 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:08:37.239445 kubelet[1579]: I1213 02:08:37.239421 1579 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:08:37.239445 kubelet[1579]: I1213 02:08:37.239445 1579 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:08:37.239524 kubelet[1579]: I1213 02:08:37.239484 1579 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:08:37.239524 kubelet[1579]: I1213 02:08:37.239501 1579 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:08:37.261887 kubelet[1579]: W1213 02:08:37.261856 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:37.261947 kubelet[1579]: E1213 02:08:37.261889 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:37.267871 kubelet[1579]: W1213 02:08:37.267839 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:37.267919 kubelet[1579]: E1213 02:08:37.267878 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:37.270121 kubelet[1579]: I1213 02:08:37.270100 1579 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:08:37.271384 kubelet[1579]: I1213 02:08:37.271360 1579 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:08:37.271975 kubelet[1579]: W1213 02:08:37.271960 1579 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:08:37.272451 kubelet[1579]: I1213 02:08:37.272425 1579 server.go:1269] "Started kubelet" Dec 13 02:08:37.272636 kubelet[1579]: I1213 02:08:37.272570 1579 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:08:37.273383 kubelet[1579]: I1213 02:08:37.272930 1579 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:08:37.273383 kubelet[1579]: I1213 02:08:37.272997 1579 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:08:37.274812 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 02:08:37.274913 kubelet[1579]: I1213 02:08:37.274895 1579 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:08:37.275416 kubelet[1579]: I1213 02:08:37.275236 1579 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:08:37.275523 kubelet[1579]: I1213 02:08:37.275510 1579 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:08:37.275709 kubelet[1579]: I1213 02:08:37.275673 1579 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:08:37.276559 kubelet[1579]: E1213 02:08:37.276055 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:37.276642 kubelet[1579]: I1213 02:08:37.276603 1579 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:08:37.276642 kubelet[1579]: I1213 02:08:37.276627 1579 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:08:37.277595 kubelet[1579]: W1213 02:08:37.276931 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:37.277595 kubelet[1579]: E1213 02:08:37.276982 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:37.277595 kubelet[1579]: E1213 02:08:37.277034 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="200ms" Dec 13 02:08:37.277595 kubelet[1579]: I1213 02:08:37.277126 1579 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:08:37.277595 kubelet[1579]: I1213 02:08:37.277201 1579 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:08:37.278332 kubelet[1579]: E1213 02:08:37.278305 1579 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:08:37.279261 kubelet[1579]: I1213 02:08:37.279237 1579 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:08:37.279471 kubelet[1579]: E1213 02:08:37.278047 1579 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.140:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.140:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109a7cc8c1c665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 02:08:37.272405605 +0000 UTC m=+0.543575881,LastTimestamp:2024-12-13 02:08:37.272405605 +0000 UTC m=+0.543575881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 02:08:37.288017 kubelet[1579]: I1213 02:08:37.287970 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:08:37.290900 kubelet[1579]: I1213 02:08:37.290877 1579 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:08:37.291052 kubelet[1579]: I1213 02:08:37.291037 1579 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:08:37.291151 kubelet[1579]: I1213 02:08:37.291126 1579 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:08:37.291261 kubelet[1579]: E1213 02:08:37.291243 1579 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:08:37.291709 kubelet[1579]: W1213 02:08:37.291668 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:37.291763 kubelet[1579]: E1213 02:08:37.291715 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:37.291823 kubelet[1579]: I1213 02:08:37.291807 1579 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:08:37.291823 kubelet[1579]: I1213 02:08:37.291817 1579 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:08:37.291885 kubelet[1579]: I1213 02:08:37.291829 1579 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:08:37.377120 kubelet[1579]: E1213 02:08:37.377093 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:37.392359 kubelet[1579]: E1213 02:08:37.392321 1579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:08:37.477970 kubelet[1579]: E1213 02:08:37.477838 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:37.478316 kubelet[1579]: E1213 02:08:37.478268 1579 controller.go:145] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="400ms" Dec 13 02:08:37.578715 kubelet[1579]: E1213 02:08:37.578646 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:37.592875 kubelet[1579]: E1213 02:08:37.592830 1579 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:08:37.657947 kubelet[1579]: I1213 02:08:37.657865 1579 policy_none.go:49] "None policy: Start" Dec 13 02:08:37.658749 kubelet[1579]: I1213 02:08:37.658718 1579 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:08:37.658833 kubelet[1579]: I1213 02:08:37.658767 1579 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:08:37.665620 systemd[1]: Created slice kubepods.slice. Dec 13 02:08:37.669218 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:08:37.671795 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 02:08:37.678309 kubelet[1579]: I1213 02:08:37.678276 1579 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:08:37.678443 kubelet[1579]: I1213 02:08:37.678425 1579 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:08:37.678479 kubelet[1579]: I1213 02:08:37.678441 1579 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:08:37.678991 kubelet[1579]: I1213 02:08:37.678973 1579 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:08:37.679772 kubelet[1579]: E1213 02:08:37.679737 1579 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 02:08:37.780345 kubelet[1579]: I1213 02:08:37.780215 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:37.780861 kubelet[1579]: E1213 02:08:37.780707 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Dec 13 02:08:37.879372 kubelet[1579]: E1213 02:08:37.879299 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="800ms" Dec 13 02:08:37.981820 kubelet[1579]: I1213 02:08:37.981783 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:37.982122 kubelet[1579]: E1213 02:08:37.982074 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Dec 13 02:08:37.998884 systemd[1]: Created slice kubepods-burstable-podb43b05949bb08ec14708006ea77cdad4.slice. Dec 13 02:08:38.016465 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice. Dec 13 02:08:38.024077 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice. 
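Note the interval on the "Failed to ensure lease exists, will retry" errors: 200ms at 02:08:37.277, 400ms at 02:08:37.478, and 800ms at 02:08:37.879 just above. The lease controller doubles its retry interval on consecutive failures; the log shows only these first three steps, so the cap in the sketch below is an illustrative assumption:

    interval = 0.2  # seconds; first interval seen in the log
    for attempt in range(1, 6):
        print(f"attempt {attempt}: retry in {interval:.1f}s")
        interval = min(interval * 2, 7.0)  # cap chosen for illustration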
Dec 13 02:08:38.080645 kubelet[1579]: I1213 02:08:38.080603 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:38.080645 kubelet[1579]: I1213 02:08:38.080635 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 02:08:38.080645 kubelet[1579]: I1213 02:08:38.080652 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:38.080645 kubelet[1579]: I1213 02:08:38.080665 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:38.080645 kubelet[1579]: I1213 02:08:38.080680 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:38.080934 kubelet[1579]: I1213 02:08:38.080692 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:38.080934 kubelet[1579]: I1213 02:08:38.080718 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:38.080934 kubelet[1579]: I1213 02:08:38.080730 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:38.080934 kubelet[1579]: I1213 02:08:38.080743 1579 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " 
pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:38.095022 kubelet[1579]: W1213 02:08:38.094967 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:38.095022 kubelet[1579]: E1213 02:08:38.095014 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:38.315066 kubelet[1579]: E1213 02:08:38.315030 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:38.315538 env[1205]: time="2024-12-13T02:08:38.315482806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b43b05949bb08ec14708006ea77cdad4,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:38.322622 kubelet[1579]: E1213 02:08:38.322576 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:38.322984 env[1205]: time="2024-12-13T02:08:38.322939529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:38.326071 kubelet[1579]: E1213 02:08:38.326045 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:38.326309 env[1205]: time="2024-12-13T02:08:38.326285355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:38.383574 kubelet[1579]: I1213 02:08:38.383480 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:38.383755 kubelet[1579]: E1213 02:08:38.383723 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Dec 13 02:08:38.478436 kubelet[1579]: W1213 02:08:38.478368 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:38.478536 kubelet[1579]: E1213 02:08:38.478443 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:38.544051 kubelet[1579]: W1213 02:08:38.544026 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 
02:08:38.544114 kubelet[1579]: E1213 02:08:38.544051 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:38.680751 kubelet[1579]: E1213 02:08:38.680512 1579 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.140:6443: connect: connection refused" interval="1.6s" Dec 13 02:08:38.816644 kubelet[1579]: W1213 02:08:38.816532 1579 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.140:6443: connect: connection refused Dec 13 02:08:38.816644 kubelet[1579]: E1213 02:08:38.816642 1579 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:39.185604 kubelet[1579]: I1213 02:08:39.185522 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:39.186008 kubelet[1579]: E1213 02:08:39.185959 1579 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.140:6443/api/v1/nodes\": dial tcp 10.0.0.140:6443: connect: connection refused" node="localhost" Dec 13 02:08:39.234949 kubelet[1579]: E1213 02:08:39.234892 1579 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:08:39.368401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725810153.mount: Deactivated successfully. 
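[Annotation: every API call above — the reflector list/watch attempts for Node, CSIDriver, RuntimeClass and Service, node registration, and the client-certificate CSR — fails with "connection refused" until the static kube-apiserver pod the kubelet is about to start begins listening. A rough Go illustration of that readiness dependency, assuming nothing beyond TCP reachability of the address in the log:]

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "10.0.0.140:6443" // API server address from the log
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Mirrors the "dial tcp ... connect: connection refused" errors above.
			fmt.Println("apiserver not reachable yet:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver is accepting connections")
		return
	}
}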
Dec 13 02:08:39.373982 env[1205]: time="2024-12-13T02:08:39.373939147Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.377863 env[1205]: time="2024-12-13T02:08:39.377830657Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.379172 env[1205]: time="2024-12-13T02:08:39.379115731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.379982 env[1205]: time="2024-12-13T02:08:39.379956125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.382099 env[1205]: time="2024-12-13T02:08:39.382059982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.383310 env[1205]: time="2024-12-13T02:08:39.383283748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.384410 env[1205]: time="2024-12-13T02:08:39.384385993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.385554 env[1205]: time="2024-12-13T02:08:39.385527672Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.387509 env[1205]: time="2024-12-13T02:08:39.387486442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.389571 env[1205]: time="2024-12-13T02:08:39.389514314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.391493 env[1205]: time="2024-12-13T02:08:39.391455420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.392618 env[1205]: time="2024-12-13T02:08:39.392571952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:08:39.411335 env[1205]: time="2024-12-13T02:08:39.410798040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:39.411335 env[1205]: time="2024-12-13T02:08:39.410836103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:39.411335 env[1205]: time="2024-12-13T02:08:39.410845240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:39.411335 env[1205]: time="2024-12-13T02:08:39.410950261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014 pid=1621 runtime=io.containerd.runc.v2 Dec 13 02:08:39.417825 env[1205]: time="2024-12-13T02:08:39.417764220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:39.417825 env[1205]: time="2024-12-13T02:08:39.417797914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:39.417825 env[1205]: time="2024-12-13T02:08:39.417807203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:39.418194 env[1205]: time="2024-12-13T02:08:39.418147211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e76fdf3394c9fd28652e4c6213413aa83f53021f4726b769bad1869c162b5176 pid=1637 runtime=io.containerd.runc.v2 Dec 13 02:08:39.420694 env[1205]: time="2024-12-13T02:08:39.420624611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:39.420694 env[1205]: time="2024-12-13T02:08:39.420659088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:39.420694 env[1205]: time="2024-12-13T02:08:39.420668475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:39.420851 env[1205]: time="2024-12-13T02:08:39.420769718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2873922179e1d823e3bea663c952113f1de6501472e03e125adb8df356306f92 pid=1656 runtime=io.containerd.runc.v2 Dec 13 02:08:39.429608 systemd[1]: Started cri-containerd-e76fdf3394c9fd28652e4c6213413aa83f53021f4726b769bad1869c162b5176.scope. Dec 13 02:08:39.433498 systemd[1]: Started cri-containerd-9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014.scope. Dec 13 02:08:39.439552 systemd[1]: Started cri-containerd-2873922179e1d823e3bea663c952113f1de6501472e03e125adb8df356306f92.scope. 
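[Annotation: each of the three pod sandboxes gets its own containerd runc.v2 shim ("starting signal loop") with a per-sandbox state directory, and systemd tracks it as a transient cri-containerd-<sandbox-id>.scope unit. A small Go sketch reconstructing the path and unit-name conventions that are literally visible in the entries above; the constants are read off the log lines, not an authoritative containerd API.]

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	const (
		stateRoot = "/run/containerd"
		runtime   = "io.containerd.runtime.v2.task" // v2 shim state directory
		namespace = "k8s.io"                        // CRI pods live in this namespace
	)
	sandboxID := "9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014"

	// Shim working directory, as in the "starting signal loop ... path=" entries.
	fmt.Println(filepath.Join(stateRoot, runtime, namespace, sandboxID))
	// Transient systemd unit, as in the "Started cri-containerd-....scope" entries.
	fmt.Printf("cri-containerd-%s.scope\n", sandboxID)
}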
Dec 13 02:08:39.471279 env[1205]: time="2024-12-13T02:08:39.471228366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b43b05949bb08ec14708006ea77cdad4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014\"" Dec 13 02:08:39.473539 kubelet[1579]: E1213 02:08:39.473345 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:39.473999 env[1205]: time="2024-12-13T02:08:39.473967405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"e76fdf3394c9fd28652e4c6213413aa83f53021f4726b769bad1869c162b5176\"" Dec 13 02:08:39.474867 kubelet[1579]: E1213 02:08:39.474731 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:39.476141 env[1205]: time="2024-12-13T02:08:39.476097692Z" level=info msg="CreateContainer within sandbox \"9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:08:39.476594 env[1205]: time="2024-12-13T02:08:39.476458843Z" level=info msg="CreateContainer within sandbox \"e76fdf3394c9fd28652e4c6213413aa83f53021f4726b769bad1869c162b5176\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:08:39.485618 env[1205]: time="2024-12-13T02:08:39.484849842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"2873922179e1d823e3bea663c952113f1de6501472e03e125adb8df356306f92\"" Dec 13 02:08:39.485790 kubelet[1579]: E1213 02:08:39.485286 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:39.486899 env[1205]: time="2024-12-13T02:08:39.486859610Z" level=info msg="CreateContainer within sandbox \"2873922179e1d823e3bea663c952113f1de6501472e03e125adb8df356306f92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:08:39.502691 env[1205]: time="2024-12-13T02:08:39.502658884Z" level=info msg="CreateContainer within sandbox \"e76fdf3394c9fd28652e4c6213413aa83f53021f4726b769bad1869c162b5176\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d80ef85d52b7ecb9df019af005f5d79bda1aae07badae2e3e61766784bba5ffe\"" Dec 13 02:08:39.503358 env[1205]: time="2024-12-13T02:08:39.503327030Z" level=info msg="StartContainer for \"d80ef85d52b7ecb9df019af005f5d79bda1aae07badae2e3e61766784bba5ffe\"" Dec 13 02:08:39.507451 env[1205]: time="2024-12-13T02:08:39.507420625Z" level=info msg="CreateContainer within sandbox \"9a9d5d2ea91180508a35d90fc2ed674ec655fd3a360b326c914727d815224014\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e28daf147c07fdadc03a776dfc274b559dc796a650b4448f4f85eaa7956ad209\"" Dec 13 02:08:39.507795 env[1205]: time="2024-12-13T02:08:39.507769531Z" level=info msg="StartContainer for \"e28daf147c07fdadc03a776dfc274b559dc796a650b4448f4f85eaa7956ad209\"" Dec 13 02:08:39.509837 env[1205]: time="2024-12-13T02:08:39.509802092Z" level=info 
msg="CreateContainer within sandbox \"2873922179e1d823e3bea663c952113f1de6501472e03e125adb8df356306f92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ef90b6b83cbbdb1659685c0d1942e3567ffed254503d858113aa611a6bfd271d\"" Dec 13 02:08:39.510273 env[1205]: time="2024-12-13T02:08:39.510237994Z" level=info msg="StartContainer for \"ef90b6b83cbbdb1659685c0d1942e3567ffed254503d858113aa611a6bfd271d\"" Dec 13 02:08:39.516665 systemd[1]: Started cri-containerd-d80ef85d52b7ecb9df019af005f5d79bda1aae07badae2e3e61766784bba5ffe.scope. Dec 13 02:08:39.521229 systemd[1]: Started cri-containerd-e28daf147c07fdadc03a776dfc274b559dc796a650b4448f4f85eaa7956ad209.scope. Dec 13 02:08:39.537602 systemd[1]: Started cri-containerd-ef90b6b83cbbdb1659685c0d1942e3567ffed254503d858113aa611a6bfd271d.scope. Dec 13 02:08:39.566273 env[1205]: time="2024-12-13T02:08:39.566119906Z" level=info msg="StartContainer for \"d80ef85d52b7ecb9df019af005f5d79bda1aae07badae2e3e61766784bba5ffe\" returns successfully" Dec 13 02:08:39.569758 env[1205]: time="2024-12-13T02:08:39.569715982Z" level=info msg="StartContainer for \"e28daf147c07fdadc03a776dfc274b559dc796a650b4448f4f85eaa7956ad209\" returns successfully" Dec 13 02:08:39.579635 env[1205]: time="2024-12-13T02:08:39.579571719Z" level=info msg="StartContainer for \"ef90b6b83cbbdb1659685c0d1942e3567ffed254503d858113aa611a6bfd271d\" returns successfully" Dec 13 02:08:40.298062 kubelet[1579]: E1213 02:08:40.298018 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:40.299771 kubelet[1579]: E1213 02:08:40.299747 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:40.300491 kubelet[1579]: E1213 02:08:40.300466 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:40.438882 kubelet[1579]: E1213 02:08:40.438831 1579 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 02:08:40.669917 kubelet[1579]: E1213 02:08:40.669811 1579 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18109a7cc8c1c665 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 02:08:37.272405605 +0000 UTC m=+0.543575881,LastTimestamp:2024-12-13 02:08:37.272405605 +0000 UTC m=+0.543575881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 02:08:40.777392 kubelet[1579]: E1213 02:08:40.777348 1579 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 02:08:40.787230 kubelet[1579]: I1213 02:08:40.787187 1579 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:40.790628 kubelet[1579]: I1213 02:08:40.790602 1579 
kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 02:08:40.790628 kubelet[1579]: E1213 02:08:40.790628 1579 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 13 02:08:40.798130 kubelet[1579]: E1213 02:08:40.798092 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:40.898512 kubelet[1579]: E1213 02:08:40.898466 1579 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:08:41.241617 kubelet[1579]: I1213 02:08:41.241565 1579 apiserver.go:52] "Watching apiserver" Dec 13 02:08:41.277280 kubelet[1579]: I1213 02:08:41.277225 1579 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:08:41.306374 kubelet[1579]: E1213 02:08:41.306321 1579 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:41.306863 kubelet[1579]: E1213 02:08:41.306455 1579 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:42.551370 update_engine[1198]: I1213 02:08:42.551292 1198 update_attempter.cc:509] Updating boot flags... Dec 13 02:08:42.661392 systemd[1]: Reloading. Dec 13 02:08:42.718009 /usr/lib/systemd/system-generators/torcx-generator[1893]: time="2024-12-13T02:08:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:08:42.718459 /usr/lib/systemd/system-generators/torcx-generator[1893]: time="2024-12-13T02:08:42Z" level=info msg="torcx already run" Dec 13 02:08:42.781638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:08:42.781656 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:08:42.799154 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:08:42.890988 kubelet[1579]: I1213 02:08:42.890832 1579 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:08:42.890865 systemd[1]: Stopping kubelet.service... Dec 13 02:08:42.912929 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:08:42.913089 systemd[1]: Stopped kubelet.service. Dec 13 02:08:42.914628 systemd[1]: Starting kubelet.service... Dec 13 02:08:42.993987 systemd[1]: Started kubelet.service. Dec 13 02:08:43.030063 kubelet[1937]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
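[Annotation: the "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" error logged just before this restart is transient: static pods such as kube-apiserver-localhost reference the built-in system-node-critical PriorityClass, which the API server only installs during its own bootstrap, moments after it starts serving. For orientation, that class corresponds to an object along these lines — a hedged Go reconstruction using the well-known built-in priority value, not something printed in this log:]

package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// system-node-critical as the apiserver bootstraps it; the Value is the
	// well-known built-in constant, the Description is approximate.
	pc := schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000,
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	fmt.Printf("%s: %d\n", pc.Name, pc.Value)
}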
Dec 13 02:08:43.030063 kubelet[1937]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:08:43.030063 kubelet[1937]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:08:43.030460 kubelet[1937]: I1213 02:08:43.030099 1937 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:08:43.035215 kubelet[1937]: I1213 02:08:43.035184 1937 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:08:43.035215 kubelet[1937]: I1213 02:08:43.035211 1937 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:08:43.035433 kubelet[1937]: I1213 02:08:43.035418 1937 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:08:43.036504 kubelet[1937]: I1213 02:08:43.036486 1937 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:08:43.038025 kubelet[1937]: I1213 02:08:43.037995 1937 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:08:43.041557 kubelet[1937]: E1213 02:08:43.041520 1937 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:08:43.041557 kubelet[1937]: I1213 02:08:43.041558 1937 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:08:43.045234 kubelet[1937]: I1213 02:08:43.045195 1937 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:08:43.045383 kubelet[1937]: I1213 02:08:43.045289 1937 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:08:43.045435 kubelet[1937]: I1213 02:08:43.045409 1937 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:08:43.045614 kubelet[1937]: I1213 02:08:43.045434 1937 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:08:43.045711 kubelet[1937]: I1213 02:08:43.045620 1937 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:08:43.045711 kubelet[1937]: I1213 02:08:43.045629 1937 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:08:43.045711 kubelet[1937]: I1213 02:08:43.045656 1937 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:08:43.045786 kubelet[1937]: I1213 02:08:43.045745 1937 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:08:43.045786 kubelet[1937]: I1213 02:08:43.045757 1937 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:08:43.045786 kubelet[1937]: I1213 02:08:43.045783 1937 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:08:43.045851 kubelet[1937]: I1213 02:08:43.045796 1937 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:08:43.050112 kubelet[1937]: I1213 02:08:43.050082 1937 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:08:43.052590 kubelet[1937]: I1213 02:08:43.050420 1937 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:08:43.052590 kubelet[1937]: I1213 02:08:43.050831 1937 server.go:1269] "Started kubelet" Dec 13 02:08:43.052590 kubelet[1937]: I1213 02:08:43.051882 1937 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 
02:08:43.052590 kubelet[1937]: I1213 02:08:43.052146 1937 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:08:43.052590 kubelet[1937]: I1213 02:08:43.052148 1937 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:08:43.052590 kubelet[1937]: I1213 02:08:43.052195 1937 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:08:43.053300 kubelet[1937]: I1213 02:08:43.053276 1937 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:08:43.054252 kubelet[1937]: E1213 02:08:43.054231 1937 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:08:43.054460 kubelet[1937]: I1213 02:08:43.054438 1937 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:08:43.056126 kubelet[1937]: I1213 02:08:43.056104 1937 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:08:43.056994 kubelet[1937]: I1213 02:08:43.056968 1937 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:08:43.057232 kubelet[1937]: I1213 02:08:43.057210 1937 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:08:43.057824 kubelet[1937]: I1213 02:08:43.057806 1937 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:08:43.057981 kubelet[1937]: I1213 02:08:43.057956 1937 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:08:43.062198 kubelet[1937]: I1213 02:08:43.062156 1937 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:08:43.065660 kubelet[1937]: I1213 02:08:43.065628 1937 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:08:43.066745 kubelet[1937]: I1213 02:08:43.066720 1937 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:08:43.066745 kubelet[1937]: I1213 02:08:43.066739 1937 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:08:43.066824 kubelet[1937]: I1213 02:08:43.066757 1937 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:08:43.066824 kubelet[1937]: E1213 02:08:43.066791 1937 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:08:43.087824 kubelet[1937]: I1213 02:08:43.087787 1937 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:08:43.087824 kubelet[1937]: I1213 02:08:43.087810 1937 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:08:43.087998 kubelet[1937]: I1213 02:08:43.087849 1937 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:08:43.087998 kubelet[1937]: I1213 02:08:43.087976 1937 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:08:43.087998 kubelet[1937]: I1213 02:08:43.087985 1937 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:08:43.088065 kubelet[1937]: I1213 02:08:43.088000 1937 policy_none.go:49] "None policy: Start" Dec 13 02:08:43.088455 kubelet[1937]: I1213 02:08:43.088427 1937 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:08:43.088455 kubelet[1937]: I1213 02:08:43.088444 1937 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:08:43.088570 kubelet[1937]: I1213 02:08:43.088557 1937 state_mem.go:75] "Updated machine memory state" Dec 13 02:08:43.093721 kubelet[1937]: I1213 02:08:43.093704 1937 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:08:43.093973 kubelet[1937]: I1213 02:08:43.093924 1937 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:08:43.094066 kubelet[1937]: I1213 02:08:43.094035 1937 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:08:43.094557 kubelet[1937]: I1213 02:08:43.094546 1937 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:08:43.202052 kubelet[1937]: I1213 02:08:43.201943 1937 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Dec 13 02:08:43.258682 kubelet[1937]: I1213 02:08:43.258653 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:43.258762 kubelet[1937]: I1213 02:08:43.258683 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:43.258762 kubelet[1937]: I1213 02:08:43.258701 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 
02:08:43.258762 kubelet[1937]: I1213 02:08:43.258716 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost" Dec 13 02:08:43.258762 kubelet[1937]: I1213 02:08:43.258730 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:43.258762 kubelet[1937]: I1213 02:08:43.258757 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43b05949bb08ec14708006ea77cdad4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b43b05949bb08ec14708006ea77cdad4\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:43.258873 kubelet[1937]: I1213 02:08:43.258792 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:43.258873 kubelet[1937]: I1213 02:08:43.258809 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:43.258873 kubelet[1937]: I1213 02:08:43.258837 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:08:43.309731 kubelet[1937]: I1213 02:08:43.309698 1937 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Dec 13 02:08:43.309846 kubelet[1937]: I1213 02:08:43.309772 1937 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Dec 13 02:08:43.545533 kubelet[1937]: E1213 02:08:43.545418 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:43.608057 kubelet[1937]: E1213 02:08:43.608038 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:43.608207 kubelet[1937]: E1213 02:08:43.608144 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:43.658255 sudo[1972]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:08:43.658463 sudo[1972]: pam_unix(sudo:session): 
session opened for user root(uid=0) by (uid=0) Dec 13 02:08:44.046996 kubelet[1937]: I1213 02:08:44.046969 1937 apiserver.go:52] "Watching apiserver" Dec 13 02:08:44.057593 kubelet[1937]: I1213 02:08:44.057563 1937 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:08:44.078091 kubelet[1937]: E1213 02:08:44.078058 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:44.078293 kubelet[1937]: E1213 02:08:44.078277 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:44.091987 kubelet[1937]: E1213 02:08:44.084558 1937 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 02:08:44.091987 kubelet[1937]: E1213 02:08:44.084773 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:44.100770 kubelet[1937]: I1213 02:08:44.100701 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.100682377 podStartE2EDuration="1.100682377s" podCreationTimestamp="2024-12-13 02:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:08:44.10054668 +0000 UTC m=+1.103019857" watchObservedRunningTime="2024-12-13 02:08:44.100682377 +0000 UTC m=+1.103155554" Dec 13 02:08:44.100999 kubelet[1937]: I1213 02:08:44.100796 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.100792066 podStartE2EDuration="1.100792066s" podCreationTimestamp="2024-12-13 02:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:08:44.093993149 +0000 UTC m=+1.096466316" watchObservedRunningTime="2024-12-13 02:08:44.100792066 +0000 UTC m=+1.103265233" Dec 13 02:08:44.103849 sudo[1972]: pam_unix(sudo:session): session closed for user root Dec 13 02:08:44.107982 kubelet[1937]: I1213 02:08:44.106754 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.106738824 podStartE2EDuration="1.106738824s" podCreationTimestamp="2024-12-13 02:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:08:44.106639525 +0000 UTC m=+1.109112702" watchObservedRunningTime="2024-12-13 02:08:44.106738824 +0000 UTC m=+1.109212001" Dec 13 02:08:45.079116 kubelet[1937]: E1213 02:08:45.079071 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:45.410648 sudo[1305]: pam_unix(sudo:session): session closed for user root Dec 13 02:08:45.411770 sshd[1301]: pam_unix(sshd:session): session closed for user core Dec 13 02:08:45.413872 systemd[1]: sshd@4-10.0.0.140:22-10.0.0.1:48714.service: Deactivated successfully. 
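[Annotation: the recurring dns.go "Nameserver limits exceeded" errors reflect the kubelet capping resolv.conf at three nameservers (the classic glibc resolver limit); extra entries are dropped, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A minimal Go sketch of that truncation, assuming only the three-entry cap implied by the message; the fourth nameserver is hypothetical:]

package main

import "fmt"

const maxNameservers = 3 // resolver limit the kubelet enforces

func main() {
	// Hypothetical host resolv.conf with more entries than the limit allows.
	nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	if len(nameservers) > maxNameservers {
		fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", nameservers)
}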
Dec 13 02:08:45.414543 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:08:45.414699 systemd[1]: session-5.scope: Consumed 4.065s CPU time. Dec 13 02:08:45.415028 systemd-logind[1196]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:08:45.415787 systemd-logind[1196]: Removed session 5. Dec 13 02:08:48.720112 kubelet[1937]: I1213 02:08:48.720064 1937 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:08:48.720567 env[1205]: time="2024-12-13T02:08:48.720425206Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:08:48.720833 kubelet[1937]: I1213 02:08:48.720663 1937 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:08:49.428623 systemd[1]: Created slice kubepods-besteffort-pod00c4d945_b068_4299_abef_421071ec5607.slice. Dec 13 02:08:49.448322 systemd[1]: Created slice kubepods-burstable-podbc207722_7fd8_494b_b54c_35c6f322c23c.slice. Dec 13 02:08:49.501469 kubelet[1937]: I1213 02:08:49.501421 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-config-path\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501469 kubelet[1937]: I1213 02:08:49.501466 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00c4d945-b068-4299-abef-421071ec5607-lib-modules\") pod \"kube-proxy-jxztm\" (UID: \"00c4d945-b068-4299-abef-421071ec5607\") " pod="kube-system/kube-proxy-jxztm" Dec 13 02:08:49.501469 kubelet[1937]: I1213 02:08:49.501484 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-xtables-lock\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501742 kubelet[1937]: I1213 02:08:49.501503 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00c4d945-b068-4299-abef-421071ec5607-kube-proxy\") pod \"kube-proxy-jxztm\" (UID: \"00c4d945-b068-4299-abef-421071ec5607\") " pod="kube-system/kube-proxy-jxztm" Dec 13 02:08:49.501742 kubelet[1937]: I1213 02:08:49.501521 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-net\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501742 kubelet[1937]: I1213 02:08:49.501537 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-kernel\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501742 kubelet[1937]: I1213 02:08:49.501555 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj8qg\" (UniqueName: 
\"kubernetes.io/projected/00c4d945-b068-4299-abef-421071ec5607-kube-api-access-tj8qg\") pod \"kube-proxy-jxztm\" (UID: \"00c4d945-b068-4299-abef-421071ec5607\") " pod="kube-system/kube-proxy-jxztm" Dec 13 02:08:49.501742 kubelet[1937]: I1213 02:08:49.501590 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-hubble-tls\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501609 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-etc-cni-netd\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501627 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc207722-7fd8-494b-b54c-35c6f322c23c-clustermesh-secrets\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501645 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00c4d945-b068-4299-abef-421071ec5607-xtables-lock\") pod \"kube-proxy-jxztm\" (UID: \"00c4d945-b068-4299-abef-421071ec5607\") " pod="kube-system/kube-proxy-jxztm" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501664 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-bpf-maps\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501685 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-hostproc\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.501870 kubelet[1937]: I1213 02:08:49.501705 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cni-path\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.502012 kubelet[1937]: I1213 02:08:49.501724 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-run\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.502012 kubelet[1937]: I1213 02:08:49.501743 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-cgroup\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.502012 
kubelet[1937]: I1213 02:08:49.501760 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-lib-modules\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.502012 kubelet[1937]: I1213 02:08:49.501778 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zghp\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-kube-api-access-8zghp\") pod \"cilium-cffzk\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " pod="kube-system/cilium-cffzk" Dec 13 02:08:49.515080 kubelet[1937]: E1213 02:08:49.515054 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:49.603841 kubelet[1937]: I1213 02:08:49.603802 1937 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 02:08:49.745336 kubelet[1937]: E1213 02:08:49.745199 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:49.745963 env[1205]: time="2024-12-13T02:08:49.745889413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxztm,Uid:00c4d945-b068-4299-abef-421071ec5607,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:49.751873 kubelet[1937]: E1213 02:08:49.751825 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:49.752382 env[1205]: time="2024-12-13T02:08:49.752334825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cffzk,Uid:bc207722-7fd8-494b-b54c-35c6f322c23c,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:49.762277 systemd[1]: Created slice kubepods-besteffort-podf1ff450d_d652_4c48_b237_7719a8b2e9b6.slice. Dec 13 02:08:49.768784 env[1205]: time="2024-12-13T02:08:49.768673655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:49.768784 env[1205]: time="2024-12-13T02:08:49.768739461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:49.768784 env[1205]: time="2024-12-13T02:08:49.768755471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:49.769050 env[1205]: time="2024-12-13T02:08:49.768997919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c8ac1100f491e23c3a6d9664e74a895469e0f959370a2146c3e0d64b31b22d9 pid=2032 runtime=io.containerd.runc.v2 Dec 13 02:08:49.783471 env[1205]: time="2024-12-13T02:08:49.781607081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:49.783471 env[1205]: time="2024-12-13T02:08:49.781659901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:49.783471 env[1205]: time="2024-12-13T02:08:49.781674478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:49.783471 env[1205]: time="2024-12-13T02:08:49.781816436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1 pid=2048 runtime=io.containerd.runc.v2 Dec 13 02:08:49.789474 systemd[1]: Started cri-containerd-7c8ac1100f491e23c3a6d9664e74a895469e0f959370a2146c3e0d64b31b22d9.scope. Dec 13 02:08:49.798349 systemd[1]: Started cri-containerd-f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1.scope. Dec 13 02:08:49.804042 kubelet[1937]: I1213 02:08:49.803999 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1ff450d-d652-4c48-b237-7719a8b2e9b6-cilium-config-path\") pod \"cilium-operator-5d85765b45-bz5c9\" (UID: \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\") " pod="kube-system/cilium-operator-5d85765b45-bz5c9" Dec 13 02:08:49.804042 kubelet[1937]: I1213 02:08:49.804043 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cqr8\" (UniqueName: \"kubernetes.io/projected/f1ff450d-d652-4c48-b237-7719a8b2e9b6-kube-api-access-7cqr8\") pod \"cilium-operator-5d85765b45-bz5c9\" (UID: \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\") " pod="kube-system/cilium-operator-5d85765b45-bz5c9" Dec 13 02:08:49.822559 env[1205]: time="2024-12-13T02:08:49.822508703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cffzk,Uid:bc207722-7fd8-494b-b54c-35c6f322c23c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\"" Dec 13 02:08:49.823108 kubelet[1937]: E1213 02:08:49.823076 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:49.824157 env[1205]: time="2024-12-13T02:08:49.824125023Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:08:49.828560 env[1205]: time="2024-12-13T02:08:49.827486005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jxztm,Uid:00c4d945-b068-4299-abef-421071ec5607,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c8ac1100f491e23c3a6d9664e74a895469e0f959370a2146c3e0d64b31b22d9\"" Dec 13 02:08:49.828666 kubelet[1937]: E1213 02:08:49.827825 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:49.829486 env[1205]: time="2024-12-13T02:08:49.829451456Z" level=info msg="CreateContainer within sandbox \"7c8ac1100f491e23c3a6d9664e74a895469e0f959370a2146c3e0d64b31b22d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:08:49.850758 env[1205]: time="2024-12-13T02:08:49.850694031Z" level=info msg="CreateContainer within sandbox \"7c8ac1100f491e23c3a6d9664e74a895469e0f959370a2146c3e0d64b31b22d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"facfd6ee38928ba868517b9f7272bc25e8c2edcf4883802ad0c736373102d92b\"" Dec 13 
02:08:49.851327 env[1205]: time="2024-12-13T02:08:49.851174691Z" level=info msg="StartContainer for \"facfd6ee38928ba868517b9f7272bc25e8c2edcf4883802ad0c736373102d92b\"" Dec 13 02:08:49.866742 systemd[1]: Started cri-containerd-facfd6ee38928ba868517b9f7272bc25e8c2edcf4883802ad0c736373102d92b.scope. Dec 13 02:08:49.894934 env[1205]: time="2024-12-13T02:08:49.894885320Z" level=info msg="StartContainer for \"facfd6ee38928ba868517b9f7272bc25e8c2edcf4883802ad0c736373102d92b\" returns successfully" Dec 13 02:08:50.065622 kubelet[1937]: E1213 02:08:50.065595 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:50.066003 env[1205]: time="2024-12-13T02:08:50.065954954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bz5c9,Uid:f1ff450d-d652-4c48-b237-7719a8b2e9b6,Namespace:kube-system,Attempt:0,}" Dec 13 02:08:50.078283 kubelet[1937]: E1213 02:08:50.076123 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:50.086602 kubelet[1937]: E1213 02:08:50.086549 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:50.086725 kubelet[1937]: E1213 02:08:50.086604 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:50.086725 kubelet[1937]: E1213 02:08:50.086551 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:50.172213 env[1205]: time="2024-12-13T02:08:50.172116163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:08:50.172518 env[1205]: time="2024-12-13T02:08:50.172471014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:08:50.172716 env[1205]: time="2024-12-13T02:08:50.172669169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:08:50.173092 env[1205]: time="2024-12-13T02:08:50.173057845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd pid=2237 runtime=io.containerd.runc.v2 Dec 13 02:08:50.187762 systemd[1]: Started cri-containerd-8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd.scope. 
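[Annotation: the cilium image above is pulled by tag and digest together ("v1.12.5@sha256:..."); when both are present the digest pins the content and the tag is informational, and the "returns image reference" entry at 02:09:00 below resolves it to the immutable sha256 image ID. A small Go sketch of splitting such a reference — plain string handling copied from the log, simplified in that it ignores registry hosts with ports:]

package main

import (
	"fmt"
	"strings"
)

func main() {
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"

	name, digest, _ := strings.Cut(ref, "@") // digest part pins the content
	repo, tag, _ := strings.Cut(name, ":")
	fmt.Println("repository:", repo)
	fmt.Println("tag (informational when a digest is present):", tag)
	fmt.Println("digest:", digest)
}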
Dec 13 02:08:50.191913 kubelet[1937]: I1213 02:08:50.191841 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jxztm" podStartSLOduration=1.191812038 podStartE2EDuration="1.191812038s" podCreationTimestamp="2024-12-13 02:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:08:50.187217154 +0000 UTC m=+7.189690351" watchObservedRunningTime="2024-12-13 02:08:50.191812038 +0000 UTC m=+7.194285325" Dec 13 02:08:50.231651 env[1205]: time="2024-12-13T02:08:50.231598214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bz5c9,Uid:f1ff450d-d652-4c48-b237-7719a8b2e9b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\"" Dec 13 02:08:50.232157 kubelet[1937]: E1213 02:08:50.232137 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:53.439205 kubelet[1937]: E1213 02:08:53.439148 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:54.091958 kubelet[1937]: E1213 02:08:54.091931 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:08:56.171973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655487080.mount: Deactivated successfully. Dec 13 02:09:00.152985 env[1205]: time="2024-12-13T02:09:00.152911129Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.154835 env[1205]: time="2024-12-13T02:09:00.154784799Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.156834 env[1205]: time="2024-12-13T02:09:00.156794506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:00.157379 env[1205]: time="2024-12-13T02:09:00.157343230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:09:00.158392 env[1205]: time="2024-12-13T02:09:00.158350697Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:09:00.160303 env[1205]: time="2024-12-13T02:09:00.160269803Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:09:00.174843 env[1205]: time="2024-12-13T02:09:00.174798506Z" level=info msg="CreateContainer within sandbox 
\"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\"" Dec 13 02:09:00.176475 env[1205]: time="2024-12-13T02:09:00.176412798Z" level=info msg="StartContainer for \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\"" Dec 13 02:09:00.195482 systemd[1]: Started cri-containerd-6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8.scope. Dec 13 02:09:00.216014 env[1205]: time="2024-12-13T02:09:00.215971551Z" level=info msg="StartContainer for \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\" returns successfully" Dec 13 02:09:00.222999 systemd[1]: cri-containerd-6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8.scope: Deactivated successfully. Dec 13 02:09:01.045915 env[1205]: time="2024-12-13T02:09:01.045860335Z" level=info msg="shim disconnected" id=6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8 Dec 13 02:09:01.045915 env[1205]: time="2024-12-13T02:09:01.045904608Z" level=warning msg="cleaning up after shim disconnected" id=6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8 namespace=k8s.io Dec 13 02:09:01.045915 env[1205]: time="2024-12-13T02:09:01.045913676Z" level=info msg="cleaning up dead shim" Dec 13 02:09:01.052519 env[1205]: time="2024-12-13T02:09:01.052474252Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2365 runtime=io.containerd.runc.v2\n" Dec 13 02:09:01.100996 kubelet[1937]: E1213 02:09:01.100959 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:01.102530 env[1205]: time="2024-12-13T02:09:01.102488397Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:09:01.120005 env[1205]: time="2024-12-13T02:09:01.119942592Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\"" Dec 13 02:09:01.120646 env[1205]: time="2024-12-13T02:09:01.120605821Z" level=info msg="StartContainer for \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\"" Dec 13 02:09:01.137291 systemd[1]: Started cri-containerd-76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce.scope. Dec 13 02:09:01.163436 env[1205]: time="2024-12-13T02:09:01.162480615Z" level=info msg="StartContainer for \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\" returns successfully" Dec 13 02:09:01.173318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8-rootfs.mount: Deactivated successfully. Dec 13 02:09:01.178502 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:09:01.178792 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:09:01.179065 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:09:01.181323 systemd[1]: Starting systemd-sysctl.service... 
Dec 13 02:09:01.188319 systemd[1]: cri-containerd-76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce.scope: Deactivated successfully. Dec 13 02:09:01.193408 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:09:01.203865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce-rootfs.mount: Deactivated successfully. Dec 13 02:09:01.209141 env[1205]: time="2024-12-13T02:09:01.209084776Z" level=info msg="shim disconnected" id=76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce Dec 13 02:09:01.209266 env[1205]: time="2024-12-13T02:09:01.209140551Z" level=warning msg="cleaning up after shim disconnected" id=76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce namespace=k8s.io Dec 13 02:09:01.209266 env[1205]: time="2024-12-13T02:09:01.209159997Z" level=info msg="cleaning up dead shim" Dec 13 02:09:01.215298 env[1205]: time="2024-12-13T02:09:01.215266229Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2427 runtime=io.containerd.runc.v2\n" Dec 13 02:09:02.105857 kubelet[1937]: E1213 02:09:02.105814 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:02.107347 env[1205]: time="2024-12-13T02:09:02.107304342Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:09:02.131189 env[1205]: time="2024-12-13T02:09:02.131132080Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\"" Dec 13 02:09:02.131821 env[1205]: time="2024-12-13T02:09:02.131778467Z" level=info msg="StartContainer for \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\"" Dec 13 02:09:02.145412 systemd[1]: Started cri-containerd-a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80.scope. Dec 13 02:09:02.168480 env[1205]: time="2024-12-13T02:09:02.168443098Z" level=info msg="StartContainer for \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\" returns successfully" Dec 13 02:09:02.169208 systemd[1]: cri-containerd-a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80.scope: Deactivated successfully. Dec 13 02:09:02.172778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432847237.mount: Deactivated successfully. Dec 13 02:09:02.187486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80-rootfs.mount: Deactivated successfully. 
Dec 13 02:09:02.193347 env[1205]: time="2024-12-13T02:09:02.193303751Z" level=info msg="shim disconnected" id=a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80 Dec 13 02:09:02.193347 env[1205]: time="2024-12-13T02:09:02.193344007Z" level=warning msg="cleaning up after shim disconnected" id=a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80 namespace=k8s.io Dec 13 02:09:02.193471 env[1205]: time="2024-12-13T02:09:02.193352873Z" level=info msg="cleaning up dead shim" Dec 13 02:09:02.199684 env[1205]: time="2024-12-13T02:09:02.199638029Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2484 runtime=io.containerd.runc.v2\n" Dec 13 02:09:03.109182 kubelet[1937]: E1213 02:09:03.109146 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:03.111923 env[1205]: time="2024-12-13T02:09:03.111870080Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:09:03.500601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728541442.mount: Deactivated successfully. Dec 13 02:09:03.818075 env[1205]: time="2024-12-13T02:09:03.817967487Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\"" Dec 13 02:09:03.818933 env[1205]: time="2024-12-13T02:09:03.818887489Z" level=info msg="StartContainer for \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\"" Dec 13 02:09:03.836414 systemd[1]: Started cri-containerd-d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5.scope. Dec 13 02:09:03.854911 systemd[1]: cri-containerd-d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5.scope: Deactivated successfully. 
Dec 13 02:09:03.894044 env[1205]: time="2024-12-13T02:09:03.893987289Z" level=info msg="StartContainer for \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\" returns successfully" Dec 13 02:09:03.916026 env[1205]: time="2024-12-13T02:09:03.915963981Z" level=info msg="shim disconnected" id=d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5 Dec 13 02:09:03.916026 env[1205]: time="2024-12-13T02:09:03.916012753Z" level=warning msg="cleaning up after shim disconnected" id=d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5 namespace=k8s.io Dec 13 02:09:03.916026 env[1205]: time="2024-12-13T02:09:03.916021871Z" level=info msg="cleaning up dead shim" Dec 13 02:09:03.922381 env[1205]: time="2024-12-13T02:09:03.922317222Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:09:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2541 runtime=io.containerd.runc.v2\n" Dec 13 02:09:04.115939 kubelet[1937]: E1213 02:09:04.115447 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:04.129773 env[1205]: time="2024-12-13T02:09:04.129700004Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:09:04.145601 env[1205]: time="2024-12-13T02:09:04.145525629Z" level=info msg="CreateContainer within sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\"" Dec 13 02:09:04.146127 env[1205]: time="2024-12-13T02:09:04.146100862Z" level=info msg="StartContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\"" Dec 13 02:09:04.162102 systemd[1]: Started cri-containerd-ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1.scope. Dec 13 02:09:04.190649 env[1205]: time="2024-12-13T02:09:04.188556107Z" level=info msg="StartContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" returns successfully" Dec 13 02:09:04.499270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815079890.mount: Deactivated successfully. Dec 13 02:09:04.499363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5-rootfs.mount: Deactivated successfully. Dec 13 02:09:04.548792 kubelet[1937]: I1213 02:09:04.548734 1937 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:09:04.581344 systemd[1]: Created slice kubepods-burstable-pod1e3e8058_5d5a_4607_b565_73c2e7c6a7cb.slice. Dec 13 02:09:04.586810 systemd[1]: Created slice kubepods-burstable-pod9c9abb47_3432_45aa_87e6_07a41452ab60.slice. 
Dec 13 02:09:04.595846 env[1205]: time="2024-12-13T02:09:04.595811324Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:04.597689 env[1205]: time="2024-12-13T02:09:04.597647210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:04.599437 env[1205]: time="2024-12-13T02:09:04.599406962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:09:04.599840 env[1205]: time="2024-12-13T02:09:04.599791806Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:09:04.602287 env[1205]: time="2024-12-13T02:09:04.602107895Z" level=info msg="CreateContainer within sandbox \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:09:04.612262 kubelet[1937]: I1213 02:09:04.612228 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e3e8058-5d5a-4607-b565-73c2e7c6a7cb-config-volume\") pod \"coredns-6f6b679f8f-9zq25\" (UID: \"1e3e8058-5d5a-4607-b565-73c2e7c6a7cb\") " pod="kube-system/coredns-6f6b679f8f-9zq25" Dec 13 02:09:04.612262 kubelet[1937]: I1213 02:09:04.612263 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnhcz\" (UniqueName: \"kubernetes.io/projected/1e3e8058-5d5a-4607-b565-73c2e7c6a7cb-kube-api-access-rnhcz\") pod \"coredns-6f6b679f8f-9zq25\" (UID: \"1e3e8058-5d5a-4607-b565-73c2e7c6a7cb\") " pod="kube-system/coredns-6f6b679f8f-9zq25" Dec 13 02:09:04.612381 kubelet[1937]: I1213 02:09:04.612279 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c9abb47-3432-45aa-87e6-07a41452ab60-config-volume\") pod \"coredns-6f6b679f8f-rgwzb\" (UID: \"9c9abb47-3432-45aa-87e6-07a41452ab60\") " pod="kube-system/coredns-6f6b679f8f-rgwzb" Dec 13 02:09:04.612381 kubelet[1937]: I1213 02:09:04.612297 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpfg\" (UniqueName: \"kubernetes.io/projected/9c9abb47-3432-45aa-87e6-07a41452ab60-kube-api-access-7tpfg\") pod \"coredns-6f6b679f8f-rgwzb\" (UID: \"9c9abb47-3432-45aa-87e6-07a41452ab60\") " pod="kube-system/coredns-6f6b679f8f-rgwzb" Dec 13 02:09:04.613713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409245761.mount: Deactivated successfully. Dec 13 02:09:04.619026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1586043872.mount: Deactivated successfully. 
Dec 13 02:09:04.625780 env[1205]: time="2024-12-13T02:09:04.625737679Z" level=info msg="CreateContainer within sandbox \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\"" Dec 13 02:09:04.627339 env[1205]: time="2024-12-13T02:09:04.627298146Z" level=info msg="StartContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\"" Dec 13 02:09:04.643804 systemd[1]: Started cri-containerd-d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563.scope. Dec 13 02:09:04.674820 env[1205]: time="2024-12-13T02:09:04.674784299Z" level=info msg="StartContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" returns successfully" Dec 13 02:09:04.886505 kubelet[1937]: E1213 02:09:04.886446 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:04.887160 env[1205]: time="2024-12-13T02:09:04.887110520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9zq25,Uid:1e3e8058-5d5a-4607-b565-73c2e7c6a7cb,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:04.889381 kubelet[1937]: E1213 02:09:04.889342 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:04.889876 env[1205]: time="2024-12-13T02:09:04.889823376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rgwzb,Uid:9c9abb47-3432-45aa-87e6-07a41452ab60,Namespace:kube-system,Attempt:0,}" Dec 13 02:09:05.120623 kubelet[1937]: E1213 02:09:05.120562 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:05.122443 kubelet[1937]: E1213 02:09:05.122414 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:05.163142 kubelet[1937]: I1213 02:09:05.162978 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cffzk" podStartSLOduration=5.828540718 podStartE2EDuration="16.162956403s" podCreationTimestamp="2024-12-13 02:08:49 +0000 UTC" firstStartedPulling="2024-12-13 02:08:49.823787975 +0000 UTC m=+6.826261152" lastFinishedPulling="2024-12-13 02:09:00.15820366 +0000 UTC m=+17.160676837" observedRunningTime="2024-12-13 02:09:05.149676054 +0000 UTC m=+22.152149251" watchObservedRunningTime="2024-12-13 02:09:05.162956403 +0000 UTC m=+22.165429590" Dec 13 02:09:06.124885 kubelet[1937]: E1213 02:09:06.124848 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:06.125299 kubelet[1937]: E1213 02:09:06.125260 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:07.126367 kubelet[1937]: E1213 02:09:07.126328 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 13 02:09:08.307667 systemd-networkd[1030]: cilium_host: Link UP Dec 13 02:09:08.307840 systemd-networkd[1030]: cilium_net: Link UP Dec 13 02:09:08.312206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:09:08.312284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:09:08.309104 systemd-networkd[1030]: cilium_net: Gained carrier Dec 13 02:09:08.310366 systemd-networkd[1030]: cilium_host: Gained carrier Dec 13 02:09:08.370691 systemd-networkd[1030]: cilium_host: Gained IPv6LL Dec 13 02:09:08.386613 systemd-networkd[1030]: cilium_vxlan: Link UP Dec 13 02:09:08.386622 systemd-networkd[1030]: cilium_vxlan: Gained carrier Dec 13 02:09:08.563628 kernel: NET: Registered PF_ALG protocol family Dec 13 02:09:09.177884 systemd-networkd[1030]: cilium_net: Gained IPv6LL Dec 13 02:09:09.191455 systemd[1]: Started sshd@5-10.0.0.140:22-10.0.0.1:49766.service. Dec 13 02:09:09.217728 systemd-networkd[1030]: lxc_health: Link UP Dec 13 02:09:09.229703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:09:09.230030 systemd-networkd[1030]: lxc_health: Gained carrier Dec 13 02:09:09.231349 sshd[3090]: Accepted publickey for core from 10.0.0.1 port 49766 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:09.232128 sshd[3090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:09.243013 systemd[1]: Started session-6.scope. Dec 13 02:09:09.243721 systemd-logind[1196]: New session 6 of user core. Dec 13 02:09:09.393617 sshd[3090]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:09.396337 systemd[1]: sshd@5-10.0.0.140:22-10.0.0.1:49766.service: Deactivated successfully. Dec 13 02:09:09.397017 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:09:09.397666 systemd-logind[1196]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:09:09.398352 systemd-logind[1196]: Removed session 6. 
Dec 13 02:09:09.430428 systemd-networkd[1030]: lxc1d02c1b62191: Link UP Dec 13 02:09:09.433647 systemd-networkd[1030]: lxc67f3b9b42585: Link UP Dec 13 02:09:09.442645 kernel: eth0: renamed from tmp03697 Dec 13 02:09:09.455616 kernel: eth0: renamed from tmpb3450 Dec 13 02:09:09.465053 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:09:09.465181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1d02c1b62191: link becomes ready Dec 13 02:09:09.465205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc67f3b9b42585: link becomes ready Dec 13 02:09:09.465562 systemd-networkd[1030]: lxc1d02c1b62191: Gained carrier Dec 13 02:09:09.465789 systemd-networkd[1030]: lxc67f3b9b42585: Gained carrier Dec 13 02:09:09.753755 kubelet[1937]: E1213 02:09:09.753481 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:09.771804 kubelet[1937]: I1213 02:09:09.771752 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bz5c9" podStartSLOduration=6.403663666 podStartE2EDuration="20.77172652s" podCreationTimestamp="2024-12-13 02:08:49 +0000 UTC" firstStartedPulling="2024-12-13 02:08:50.232770762 +0000 UTC m=+7.235243939" lastFinishedPulling="2024-12-13 02:09:04.600833626 +0000 UTC m=+21.603306793" observedRunningTime="2024-12-13 02:09:05.165135404 +0000 UTC m=+22.167608581" watchObservedRunningTime="2024-12-13 02:09:09.77172652 +0000 UTC m=+26.774199687" Dec 13 02:09:10.329751 systemd-networkd[1030]: cilium_vxlan: Gained IPv6LL Dec 13 02:09:10.713744 systemd-networkd[1030]: lxc_health: Gained IPv6LL Dec 13 02:09:11.225834 systemd-networkd[1030]: lxc1d02c1b62191: Gained IPv6LL Dec 13 02:09:11.481875 systemd-networkd[1030]: lxc67f3b9b42585: Gained IPv6LL Dec 13 02:09:11.841605 kubelet[1937]: I1213 02:09:11.841542 1937 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:09:11.843675 kubelet[1937]: E1213 02:09:11.843639 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:12.139480 kubelet[1937]: E1213 02:09:12.139362 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:13.127827 env[1205]: time="2024-12-13T02:09:13.127703950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:13.127827 env[1205]: time="2024-12-13T02:09:13.127784241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:13.127827 env[1205]: time="2024-12-13T02:09:13.127795923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:13.128701 env[1205]: time="2024-12-13T02:09:13.128491650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d pid=3166 runtime=io.containerd.runc.v2 Dec 13 02:09:13.136822 env[1205]: time="2024-12-13T02:09:13.135873212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:09:13.136822 env[1205]: time="2024-12-13T02:09:13.135915371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:09:13.136822 env[1205]: time="2024-12-13T02:09:13.135925209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:09:13.136822 env[1205]: time="2024-12-13T02:09:13.136081111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0369771ab08ff1219fcc741d7cb0d4f6f9b79d8b91a18a88a3fe2af5767b5755 pid=3183 runtime=io.containerd.runc.v2 Dec 13 02:09:13.146759 systemd[1]: Started cri-containerd-b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d.scope. Dec 13 02:09:13.158317 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:09:13.163396 systemd[1]: Started cri-containerd-0369771ab08ff1219fcc741d7cb0d4f6f9b79d8b91a18a88a3fe2af5767b5755.scope. Dec 13 02:09:13.176876 kubelet[1937]: E1213 02:09:13.176844 1937 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c9abb47_3432_45aa_87e6_07a41452ab60.slice/cri-containerd-b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d.scope\": RecentStats: unable to find data in memory cache]" Dec 13 02:09:13.177816 systemd-resolved[1144]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:09:13.187182 env[1205]: time="2024-12-13T02:09:13.187130840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rgwzb,Uid:9c9abb47-3432-45aa-87e6-07a41452ab60,Namespace:kube-system,Attempt:0,} returns sandbox id \"b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d\"" Dec 13 02:09:13.189451 kubelet[1937]: E1213 02:09:13.189410 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:13.192051 env[1205]: time="2024-12-13T02:09:13.192012194Z" level=info msg="CreateContainer within sandbox \"b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:09:13.209686 env[1205]: time="2024-12-13T02:09:13.209634849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-9zq25,Uid:1e3e8058-5d5a-4607-b565-73c2e7c6a7cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0369771ab08ff1219fcc741d7cb0d4f6f9b79d8b91a18a88a3fe2af5767b5755\"" Dec 13 02:09:13.210421 kubelet[1937]: E1213 02:09:13.210384 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:13.212714 env[1205]: time="2024-12-13T02:09:13.211301701Z" level=info msg="CreateContainer within sandbox \"b34500390a3e05f1d149a1af1635e75a3c95fc7fb3e94ea12ddbc355ac5fb35d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d63561373730c836cec145a95c7c7cb37266ed5e0959c3b3cdf4790ca9173f15\"" Dec 13 02:09:13.213144 env[1205]: time="2024-12-13T02:09:13.213118115Z" level=info msg="StartContainer for 
\"d63561373730c836cec145a95c7c7cb37266ed5e0959c3b3cdf4790ca9173f15\"" Dec 13 02:09:13.213678 env[1205]: time="2024-12-13T02:09:13.213629005Z" level=info msg="CreateContainer within sandbox \"0369771ab08ff1219fcc741d7cb0d4f6f9b79d8b91a18a88a3fe2af5767b5755\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:09:13.228468 env[1205]: time="2024-12-13T02:09:13.228395483Z" level=info msg="CreateContainer within sandbox \"0369771ab08ff1219fcc741d7cb0d4f6f9b79d8b91a18a88a3fe2af5767b5755\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a829d0f990bf27c90ed924113c64a2664f53190d47181df8f3aa88ed4cb79ba9\"" Dec 13 02:09:13.231278 env[1205]: time="2024-12-13T02:09:13.230481242Z" level=info msg="StartContainer for \"a829d0f990bf27c90ed924113c64a2664f53190d47181df8f3aa88ed4cb79ba9\"" Dec 13 02:09:13.231101 systemd[1]: Started cri-containerd-d63561373730c836cec145a95c7c7cb37266ed5e0959c3b3cdf4790ca9173f15.scope. Dec 13 02:09:13.247678 systemd[1]: Started cri-containerd-a829d0f990bf27c90ed924113c64a2664f53190d47181df8f3aa88ed4cb79ba9.scope. Dec 13 02:09:13.266032 env[1205]: time="2024-12-13T02:09:13.265990138Z" level=info msg="StartContainer for \"d63561373730c836cec145a95c7c7cb37266ed5e0959c3b3cdf4790ca9173f15\" returns successfully" Dec 13 02:09:13.278197 env[1205]: time="2024-12-13T02:09:13.278128598Z" level=info msg="StartContainer for \"a829d0f990bf27c90ed924113c64a2664f53190d47181df8f3aa88ed4cb79ba9\" returns successfully" Dec 13 02:09:14.133808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3161753994.mount: Deactivated successfully. Dec 13 02:09:14.149504 kubelet[1937]: E1213 02:09:14.149470 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:14.149504 kubelet[1937]: E1213 02:09:14.149485 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:14.157070 kubelet[1937]: I1213 02:09:14.157008 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-9zq25" podStartSLOduration=25.156990318 podStartE2EDuration="25.156990318s" podCreationTimestamp="2024-12-13 02:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:14.15572397 +0000 UTC m=+31.158197147" watchObservedRunningTime="2024-12-13 02:09:14.156990318 +0000 UTC m=+31.159463495" Dec 13 02:09:14.398515 systemd[1]: Started sshd@6-10.0.0.140:22-10.0.0.1:49780.service. Dec 13 02:09:14.429158 sshd[3321]: Accepted publickey for core from 10.0.0.1 port 49780 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:14.430500 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:14.434162 systemd-logind[1196]: New session 7 of user core. Dec 13 02:09:14.435226 systemd[1]: Started session-7.scope. Dec 13 02:09:14.544377 sshd[3321]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:14.547193 systemd[1]: sshd@6-10.0.0.140:22-10.0.0.1:49780.service: Deactivated successfully. Dec 13 02:09:14.548075 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:09:14.548678 systemd-logind[1196]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:09:14.549499 systemd-logind[1196]: Removed session 7. 
Dec 13 02:09:14.921714 kubelet[1937]: I1213 02:09:14.920691 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rgwzb" podStartSLOduration=25.920667489 podStartE2EDuration="25.920667489s" podCreationTimestamp="2024-12-13 02:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:09:14.166691606 +0000 UTC m=+31.169164783" watchObservedRunningTime="2024-12-13 02:09:14.920667489 +0000 UTC m=+31.923140666" Dec 13 02:09:15.150049 kubelet[1937]: E1213 02:09:15.150008 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:15.150224 kubelet[1937]: E1213 02:09:15.150080 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:16.151672 kubelet[1937]: E1213 02:09:16.151553 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:16.152114 kubelet[1937]: E1213 02:09:16.151985 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:19.549011 systemd[1]: Started sshd@7-10.0.0.140:22-10.0.0.1:47326.service. Dec 13 02:09:19.579919 sshd[3341]: Accepted publickey for core from 10.0.0.1 port 47326 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:19.581049 sshd[3341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:19.584430 systemd-logind[1196]: New session 8 of user core. Dec 13 02:09:19.585427 systemd[1]: Started session-8.scope. Dec 13 02:09:19.694168 sshd[3341]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:19.696112 systemd[1]: sshd@7-10.0.0.140:22-10.0.0.1:47326.service: Deactivated successfully. Dec 13 02:09:19.696850 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:09:19.697320 systemd-logind[1196]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:09:19.698019 systemd-logind[1196]: Removed session 8. Dec 13 02:09:24.698717 systemd[1]: Started sshd@8-10.0.0.140:22-10.0.0.1:47336.service. Dec 13 02:09:24.729073 sshd[3360]: Accepted publickey for core from 10.0.0.1 port 47336 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:24.730184 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:24.733520 systemd-logind[1196]: New session 9 of user core. Dec 13 02:09:24.734145 systemd[1]: Started session-9.scope. Dec 13 02:09:24.838426 sshd[3360]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:24.841670 systemd[1]: sshd@8-10.0.0.140:22-10.0.0.1:47336.service: Deactivated successfully. Dec 13 02:09:24.842199 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:09:24.842996 systemd-logind[1196]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:09:24.844121 systemd[1]: Started sshd@9-10.0.0.140:22-10.0.0.1:47352.service. Dec 13 02:09:24.844800 systemd-logind[1196]: Removed session 9. 
Dec 13 02:09:24.872834 sshd[3375]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:24.874072 sshd[3375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:24.877238 systemd-logind[1196]: New session 10 of user core. Dec 13 02:09:24.878069 systemd[1]: Started session-10.scope. Dec 13 02:09:25.015011 sshd[3375]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:25.018873 systemd[1]: Started sshd@10-10.0.0.140:22-10.0.0.1:47362.service. Dec 13 02:09:25.021044 systemd[1]: sshd@9-10.0.0.140:22-10.0.0.1:47352.service: Deactivated successfully. Dec 13 02:09:25.021724 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:09:25.023090 systemd-logind[1196]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:09:25.023883 systemd-logind[1196]: Removed session 10. Dec 13 02:09:25.053972 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 47362 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:25.055542 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:25.058946 systemd-logind[1196]: New session 11 of user core. Dec 13 02:09:25.059713 systemd[1]: Started session-11.scope. Dec 13 02:09:25.163031 sshd[3386]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:25.165389 systemd[1]: sshd@10-10.0.0.140:22-10.0.0.1:47362.service: Deactivated successfully. Dec 13 02:09:25.166041 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:09:25.166494 systemd-logind[1196]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:09:25.167287 systemd-logind[1196]: Removed session 11. Dec 13 02:09:30.168008 systemd[1]: Started sshd@11-10.0.0.140:22-10.0.0.1:35620.service. Dec 13 02:09:30.197088 sshd[3400]: Accepted publickey for core from 10.0.0.1 port 35620 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:30.198373 sshd[3400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:30.202008 systemd-logind[1196]: New session 12 of user core. Dec 13 02:09:30.202952 systemd[1]: Started session-12.scope. Dec 13 02:09:30.308519 sshd[3400]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:30.310685 systemd[1]: sshd@11-10.0.0.140:22-10.0.0.1:35620.service: Deactivated successfully. Dec 13 02:09:30.311482 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:09:30.312074 systemd-logind[1196]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:09:30.312870 systemd-logind[1196]: Removed session 12. Dec 13 02:09:35.313298 systemd[1]: Started sshd@12-10.0.0.140:22-10.0.0.1:35636.service. Dec 13 02:09:35.344485 sshd[3413]: Accepted publickey for core from 10.0.0.1 port 35636 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:35.345457 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:35.349693 systemd-logind[1196]: New session 13 of user core. Dec 13 02:09:35.350974 systemd[1]: Started session-13.scope. Dec 13 02:09:35.451598 sshd[3413]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:35.453828 systemd[1]: sshd@12-10.0.0.140:22-10.0.0.1:35636.service: Deactivated successfully. Dec 13 02:09:35.454513 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:09:35.455284 systemd-logind[1196]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 02:09:35.455987 systemd-logind[1196]: Removed session 13. Dec 13 02:09:40.456055 systemd[1]: Started sshd@13-10.0.0.140:22-10.0.0.1:33864.service. Dec 13 02:09:40.484266 sshd[3426]: Accepted publickey for core from 10.0.0.1 port 33864 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:40.485368 sshd[3426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:40.488559 systemd-logind[1196]: New session 14 of user core. Dec 13 02:09:40.489286 systemd[1]: Started session-14.scope. Dec 13 02:09:40.589288 sshd[3426]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:40.591732 systemd[1]: sshd@13-10.0.0.140:22-10.0.0.1:33864.service: Deactivated successfully. Dec 13 02:09:40.592210 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:09:40.592748 systemd-logind[1196]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:09:40.593847 systemd[1]: Started sshd@14-10.0.0.140:22-10.0.0.1:33868.service. Dec 13 02:09:40.595004 systemd-logind[1196]: Removed session 14. Dec 13 02:09:40.622002 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 33868 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:40.622983 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:40.625932 systemd-logind[1196]: New session 15 of user core. Dec 13 02:09:40.626696 systemd[1]: Started session-15.scope. Dec 13 02:09:40.858761 sshd[3439]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:40.861128 systemd[1]: sshd@14-10.0.0.140:22-10.0.0.1:33868.service: Deactivated successfully. Dec 13 02:09:40.861632 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:09:40.862137 systemd-logind[1196]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:09:40.863035 systemd[1]: Started sshd@15-10.0.0.140:22-10.0.0.1:33876.service. Dec 13 02:09:40.863914 systemd-logind[1196]: Removed session 15. Dec 13 02:09:40.893072 sshd[3450]: Accepted publickey for core from 10.0.0.1 port 33876 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:40.894158 sshd[3450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:40.897639 systemd-logind[1196]: New session 16 of user core. Dec 13 02:09:40.898567 systemd[1]: Started session-16.scope. Dec 13 02:09:42.191679 sshd[3450]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:42.194843 systemd[1]: sshd@15-10.0.0.140:22-10.0.0.1:33876.service: Deactivated successfully. Dec 13 02:09:42.195380 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:09:42.197403 systemd[1]: Started sshd@16-10.0.0.140:22-10.0.0.1:33886.service. Dec 13 02:09:42.197874 systemd-logind[1196]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:09:42.199016 systemd-logind[1196]: Removed session 16. Dec 13 02:09:42.226732 sshd[3469]: Accepted publickey for core from 10.0.0.1 port 33886 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:42.228023 sshd[3469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:42.231341 systemd-logind[1196]: New session 17 of user core. Dec 13 02:09:42.232130 systemd[1]: Started session-17.scope. Dec 13 02:09:42.447202 sshd[3469]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:42.452265 systemd[1]: Started sshd@17-10.0.0.140:22-10.0.0.1:33890.service. 
Dec 13 02:09:42.452969 systemd[1]: sshd@16-10.0.0.140:22-10.0.0.1:33886.service: Deactivated successfully. Dec 13 02:09:42.454171 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:09:42.454861 systemd-logind[1196]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:09:42.456469 systemd-logind[1196]: Removed session 17. Dec 13 02:09:42.482841 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 33890 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:42.484268 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:42.488115 systemd-logind[1196]: New session 18 of user core. Dec 13 02:09:42.489197 systemd[1]: Started session-18.scope. Dec 13 02:09:42.593732 sshd[3479]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:42.596454 systemd[1]: sshd@17-10.0.0.140:22-10.0.0.1:33890.service: Deactivated successfully. Dec 13 02:09:42.597144 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:09:42.597845 systemd-logind[1196]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:09:42.598508 systemd-logind[1196]: Removed session 18. Dec 13 02:09:47.599272 systemd[1]: Started sshd@18-10.0.0.140:22-10.0.0.1:47946.service. Dec 13 02:09:47.627403 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 47946 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:47.628299 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:47.631781 systemd-logind[1196]: New session 19 of user core. Dec 13 02:09:47.632558 systemd[1]: Started session-19.scope. Dec 13 02:09:47.739545 sshd[3498]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:47.742476 systemd[1]: sshd@18-10.0.0.140:22-10.0.0.1:47946.service: Deactivated successfully. Dec 13 02:09:47.743146 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:09:47.743676 systemd-logind[1196]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:09:47.744409 systemd-logind[1196]: Removed session 19. Dec 13 02:09:51.068484 kubelet[1937]: E1213 02:09:51.068431 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:09:52.744482 systemd[1]: Started sshd@19-10.0.0.140:22-10.0.0.1:47962.service. Dec 13 02:09:52.774795 sshd[3516]: Accepted publickey for core from 10.0.0.1 port 47962 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:52.775889 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:52.779298 systemd-logind[1196]: New session 20 of user core. Dec 13 02:09:52.780139 systemd[1]: Started session-20.scope. Dec 13 02:09:52.886263 sshd[3516]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:52.888627 systemd[1]: sshd@19-10.0.0.140:22-10.0.0.1:47962.service: Deactivated successfully. Dec 13 02:09:52.889274 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:09:52.889757 systemd-logind[1196]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:09:52.890333 systemd-logind[1196]: Removed session 20. Dec 13 02:09:57.890794 systemd[1]: Started sshd@20-10.0.0.140:22-10.0.0.1:41922.service. 
Dec 13 02:09:57.918520 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 41922 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:09:57.919524 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:57.922516 systemd-logind[1196]: New session 21 of user core. Dec 13 02:09:57.923246 systemd[1]: Started session-21.scope. Dec 13 02:09:58.015816 sshd[3529]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:58.017718 systemd[1]: sshd@20-10.0.0.140:22-10.0.0.1:41922.service: Deactivated successfully. Dec 13 02:09:58.018470 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:09:58.018962 systemd-logind[1196]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:09:58.019564 systemd-logind[1196]: Removed session 21. Dec 13 02:09:58.068026 kubelet[1937]: E1213 02:09:58.067985 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:10:03.022217 systemd[1]: Started sshd@21-10.0.0.140:22-10.0.0.1:41932.service. Dec 13 02:10:03.054228 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 41932 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:10:03.055452 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:03.059131 systemd-logind[1196]: New session 22 of user core. Dec 13 02:10:03.060363 systemd[1]: Started session-22.scope. Dec 13 02:10:03.158211 sshd[3542]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:03.161190 systemd[1]: sshd@21-10.0.0.140:22-10.0.0.1:41932.service: Deactivated successfully. Dec 13 02:10:03.161810 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:10:03.162685 systemd-logind[1196]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:10:03.163848 systemd[1]: Started sshd@22-10.0.0.140:22-10.0.0.1:41944.service. Dec 13 02:10:03.164697 systemd-logind[1196]: Removed session 22. Dec 13 02:10:03.192682 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 41944 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:10:03.193893 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:03.197292 systemd-logind[1196]: New session 23 of user core. Dec 13 02:10:03.198356 systemd[1]: Started session-23.scope. Dec 13 02:10:04.529619 env[1205]: time="2024-12-13T02:10:04.529546981Z" level=info msg="StopContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" with timeout 30 (s)" Dec 13 02:10:04.530712 env[1205]: time="2024-12-13T02:10:04.530679920Z" level=info msg="Stop container \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" with signal terminated" Dec 13 02:10:04.543064 systemd[1]: run-containerd-runc-k8s.io-ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1-runc.CC4wOr.mount: Deactivated successfully. Dec 13 02:10:04.546319 systemd[1]: cri-containerd-d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563.scope: Deactivated successfully. 
Dec 13 02:10:04.558945 env[1205]: time="2024-12-13T02:10:04.558880463Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:10:04.563711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563-rootfs.mount: Deactivated successfully. Dec 13 02:10:04.565859 env[1205]: time="2024-12-13T02:10:04.565821656Z" level=info msg="StopContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" with timeout 2 (s)" Dec 13 02:10:04.566318 env[1205]: time="2024-12-13T02:10:04.566292112Z" level=info msg="Stop container \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" with signal terminated" Dec 13 02:10:04.571656 systemd-networkd[1030]: lxc_health: Link DOWN Dec 13 02:10:04.571666 systemd-networkd[1030]: lxc_health: Lost carrier Dec 13 02:10:04.572931 env[1205]: time="2024-12-13T02:10:04.572535797Z" level=info msg="shim disconnected" id=d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563 Dec 13 02:10:04.572931 env[1205]: time="2024-12-13T02:10:04.572696584Z" level=warning msg="cleaning up after shim disconnected" id=d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563 namespace=k8s.io Dec 13 02:10:04.572931 env[1205]: time="2024-12-13T02:10:04.572710209Z" level=info msg="cleaning up dead shim" Dec 13 02:10:04.579625 env[1205]: time="2024-12-13T02:10:04.579543227Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3610 runtime=io.containerd.runc.v2\n" Dec 13 02:10:04.582713 env[1205]: time="2024-12-13T02:10:04.582671306Z" level=info msg="StopContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" returns successfully" Dec 13 02:10:04.583934 env[1205]: time="2024-12-13T02:10:04.583889056Z" level=info msg="StopPodSandbox for \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\"" Dec 13 02:10:04.584007 env[1205]: time="2024-12-13T02:10:04.583957386Z" level=info msg="Container to stop \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.587151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd-shm.mount: Deactivated successfully. Dec 13 02:10:04.599343 systemd[1]: cri-containerd-8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd.scope: Deactivated successfully. Dec 13 02:10:04.615035 systemd[1]: cri-containerd-ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1.scope: Deactivated successfully. Dec 13 02:10:04.615279 systemd[1]: cri-containerd-ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1.scope: Consumed 6.439s CPU time. Dec 13 02:10:04.617666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd-rootfs.mount: Deactivated successfully. 
Dec 13 02:10:04.628115 env[1205]: time="2024-12-13T02:10:04.628071015Z" level=info msg="shim disconnected" id=8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd Dec 13 02:10:04.628819 env[1205]: time="2024-12-13T02:10:04.628799514Z" level=warning msg="cleaning up after shim disconnected" id=8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd namespace=k8s.io Dec 13 02:10:04.628904 env[1205]: time="2024-12-13T02:10:04.628884645Z" level=info msg="cleaning up dead shim" Dec 13 02:10:04.636759 env[1205]: time="2024-12-13T02:10:04.636715534Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3655 runtime=io.containerd.runc.v2\n" Dec 13 02:10:04.637042 env[1205]: time="2024-12-13T02:10:04.637011758Z" level=info msg="TearDown network for sandbox \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\" successfully" Dec 13 02:10:04.637042 env[1205]: time="2024-12-13T02:10:04.637033109Z" level=info msg="StopPodSandbox for \"8f94550ab54f93b9c5f883d2852b39e1eca23385e8d1682824d7cc92a69c5edd\" returns successfully" Dec 13 02:10:04.639392 env[1205]: time="2024-12-13T02:10:04.639189517Z" level=info msg="shim disconnected" id=ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1 Dec 13 02:10:04.639392 env[1205]: time="2024-12-13T02:10:04.639220867Z" level=warning msg="cleaning up after shim disconnected" id=ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1 namespace=k8s.io Dec 13 02:10:04.639392 env[1205]: time="2024-12-13T02:10:04.639228080Z" level=info msg="cleaning up dead shim" Dec 13 02:10:04.646775 env[1205]: time="2024-12-13T02:10:04.646740422Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3670 runtime=io.containerd.runc.v2\n" Dec 13 02:10:04.649536 env[1205]: time="2024-12-13T02:10:04.649453200Z" level=info msg="StopContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" returns successfully" Dec 13 02:10:04.650153 env[1205]: time="2024-12-13T02:10:04.650102116Z" level=info msg="StopPodSandbox for \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\"" Dec 13 02:10:04.650210 env[1205]: time="2024-12-13T02:10:04.650177169Z" level=info msg="Container to stop \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.650210 env[1205]: time="2024-12-13T02:10:04.650191516Z" level=info msg="Container to stop \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.650210 env[1205]: time="2024-12-13T02:10:04.650201285Z" level=info msg="Container to stop \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.650281 env[1205]: time="2024-12-13T02:10:04.650212937Z" level=info msg="Container to stop \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.650281 env[1205]: time="2024-12-13T02:10:04.650222826Z" level=info msg="Container to stop \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:10:04.655168 systemd[1]: 
cri-containerd-f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1.scope: Deactivated successfully. Dec 13 02:10:04.672415 env[1205]: time="2024-12-13T02:10:04.672368765Z" level=info msg="shim disconnected" id=f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1 Dec 13 02:10:04.672415 env[1205]: time="2024-12-13T02:10:04.672410775Z" level=warning msg="cleaning up after shim disconnected" id=f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1 namespace=k8s.io Dec 13 02:10:04.672415 env[1205]: time="2024-12-13T02:10:04.672419412Z" level=info msg="cleaning up dead shim" Dec 13 02:10:04.680049 env[1205]: time="2024-12-13T02:10:04.679984043Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n" Dec 13 02:10:04.680323 env[1205]: time="2024-12-13T02:10:04.680295095Z" level=info msg="TearDown network for sandbox \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" successfully" Dec 13 02:10:04.680323 env[1205]: time="2024-12-13T02:10:04.680319372Z" level=info msg="StopPodSandbox for \"f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1\" returns successfully" Dec 13 02:10:04.841091 kubelet[1937]: I1213 02:10:04.841048 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zghp\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-kube-api-access-8zghp\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841091 kubelet[1937]: I1213 02:10:04.841083 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cqr8\" (UniqueName: \"kubernetes.io/projected/f1ff450d-d652-4c48-b237-7719a8b2e9b6-kube-api-access-7cqr8\") pod \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\" (UID: \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\") " Dec 13 02:10:04.841091 kubelet[1937]: I1213 02:10:04.841102 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-config-path\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841091 kubelet[1937]: I1213 02:10:04.841117 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-net\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841131 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-cgroup\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841146 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-xtables-lock\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841158 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-bpf-maps\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841202 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-run\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841216 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-lib-modules\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841611 kubelet[1937]: I1213 02:10:04.841232 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-etc-cni-netd\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841244 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-hostproc\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841259 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cni-path\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841275 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1ff450d-d652-4c48-b237-7719a8b2e9b6-cilium-config-path\") pod \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\" (UID: \"f1ff450d-d652-4c48-b237-7719a8b2e9b6\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841289 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-kernel\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841306 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc207722-7fd8-494b-b54c-35c6f322c23c-clustermesh-secrets\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.841927 kubelet[1937]: I1213 02:10:04.841321 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-hubble-tls\") pod \"bc207722-7fd8-494b-b54c-35c6f322c23c\" (UID: \"bc207722-7fd8-494b-b54c-35c6f322c23c\") " Dec 13 02:10:04.842075 kubelet[1937]: I1213 02:10:04.841840 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-lib-modules" (OuterVolumeSpecName: 
"lib-modules") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.842075 kubelet[1937]: I1213 02:10:04.842040 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-hostproc" (OuterVolumeSpecName: "hostproc") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.842075 kubelet[1937]: I1213 02:10:04.842063 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cni-path" (OuterVolumeSpecName: "cni-path") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.843984 kubelet[1937]: I1213 02:10:04.843945 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.844806 kubelet[1937]: I1213 02:10:04.844752 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:10:04.845359 kubelet[1937]: I1213 02:10:04.845317 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.845500 kubelet[1937]: I1213 02:10:04.845365 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.845500 kubelet[1937]: I1213 02:10:04.845396 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.845664 kubelet[1937]: I1213 02:10:04.845642 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.845732 kubelet[1937]: I1213 02:10:04.845675 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.845732 kubelet[1937]: I1213 02:10:04.842265 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:04.847065 kubelet[1937]: I1213 02:10:04.847044 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:10:04.847735 kubelet[1937]: I1213 02:10:04.847700 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ff450d-d652-4c48-b237-7719a8b2e9b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1ff450d-d652-4c48-b237-7719a8b2e9b6" (UID: "f1ff450d-d652-4c48-b237-7719a8b2e9b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:10:04.849015 kubelet[1937]: I1213 02:10:04.848995 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ff450d-d652-4c48-b237-7719a8b2e9b6-kube-api-access-7cqr8" (OuterVolumeSpecName: "kube-api-access-7cqr8") pod "f1ff450d-d652-4c48-b237-7719a8b2e9b6" (UID: "f1ff450d-d652-4c48-b237-7719a8b2e9b6"). InnerVolumeSpecName "kube-api-access-7cqr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:10:04.849081 kubelet[1937]: I1213 02:10:04.849023 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-kube-api-access-8zghp" (OuterVolumeSpecName: "kube-api-access-8zghp") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "kube-api-access-8zghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:10:04.850572 kubelet[1937]: I1213 02:10:04.850551 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc207722-7fd8-494b-b54c-35c6f322c23c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bc207722-7fd8-494b-b54c-35c6f322c23c" (UID: "bc207722-7fd8-494b-b54c-35c6f322c23c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:10:04.941567 kubelet[1937]: I1213 02:10:04.941533 1937 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941567 kubelet[1937]: I1213 02:10:04.941556 1937 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8zghp\" (UniqueName: \"kubernetes.io/projected/bc207722-7fd8-494b-b54c-35c6f322c23c-kube-api-access-8zghp\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941567 kubelet[1937]: I1213 02:10:04.941566 1937 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7cqr8\" (UniqueName: \"kubernetes.io/projected/f1ff450d-d652-4c48-b237-7719a8b2e9b6-kube-api-access-7cqr8\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941574 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941603 1937 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941610 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941616 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941623 1937 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941630 1937 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941636 1937 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941703 kubelet[1937]: I1213 02:10:04.941642 1937 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941878 kubelet[1937]: I1213 02:10:04.941649 1937 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941878 kubelet[1937]: I1213 02:10:04.941656 1937 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-host-proc-sys-kernel\") on node 
\"localhost\" DevicePath \"\"" Dec 13 02:10:04.941878 kubelet[1937]: I1213 02:10:04.941662 1937 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bc207722-7fd8-494b-b54c-35c6f322c23c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941878 kubelet[1937]: I1213 02:10:04.941668 1937 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bc207722-7fd8-494b-b54c-35c6f322c23c-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:04.941878 kubelet[1937]: I1213 02:10:04.941675 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1ff450d-d652-4c48-b237-7719a8b2e9b6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:05.075808 systemd[1]: Removed slice kubepods-besteffort-podf1ff450d_d652_4c48_b237_7719a8b2e9b6.slice. Dec 13 02:10:05.076905 systemd[1]: Removed slice kubepods-burstable-podbc207722_7fd8_494b_b54c_35c6f322c23c.slice. Dec 13 02:10:05.076981 systemd[1]: kubepods-burstable-podbc207722_7fd8_494b_b54c_35c6f322c23c.slice: Consumed 6.529s CPU time. Dec 13 02:10:05.239270 kubelet[1937]: I1213 02:10:05.239180 1937 scope.go:117] "RemoveContainer" containerID="d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563" Dec 13 02:10:05.241214 env[1205]: time="2024-12-13T02:10:05.241158596Z" level=info msg="RemoveContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\"" Dec 13 02:10:05.247744 env[1205]: time="2024-12-13T02:10:05.247706887Z" level=info msg="RemoveContainer for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" returns successfully" Dec 13 02:10:05.248078 kubelet[1937]: I1213 02:10:05.248030 1937 scope.go:117] "RemoveContainer" containerID="d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563" Dec 13 02:10:05.248337 env[1205]: time="2024-12-13T02:10:05.248244921Z" level=error msg="ContainerStatus for \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\": not found" Dec 13 02:10:05.249101 kubelet[1937]: E1213 02:10:05.248960 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\": not found" containerID="d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563" Dec 13 02:10:05.249101 kubelet[1937]: I1213 02:10:05.248984 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563"} err="failed to get container status \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6eb7cc30ce52b2e4ed0d239035cc54634e45929fe80630313efdd6ebf611563\": not found" Dec 13 02:10:05.249101 kubelet[1937]: I1213 02:10:05.249064 1937 scope.go:117] "RemoveContainer" containerID="ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1" Dec 13 02:10:05.250150 env[1205]: time="2024-12-13T02:10:05.250118559Z" level=info msg="RemoveContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\"" Dec 13 02:10:05.253449 env[1205]: 
time="2024-12-13T02:10:05.253405659Z" level=info msg="RemoveContainer for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" returns successfully" Dec 13 02:10:05.253632 kubelet[1937]: I1213 02:10:05.253561 1937 scope.go:117] "RemoveContainer" containerID="d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5" Dec 13 02:10:05.254470 env[1205]: time="2024-12-13T02:10:05.254434358Z" level=info msg="RemoveContainer for \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\"" Dec 13 02:10:05.259029 env[1205]: time="2024-12-13T02:10:05.258337862Z" level=info msg="RemoveContainer for \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\" returns successfully" Dec 13 02:10:05.259288 kubelet[1937]: I1213 02:10:05.259243 1937 scope.go:117] "RemoveContainer" containerID="a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80" Dec 13 02:10:05.260199 env[1205]: time="2024-12-13T02:10:05.260169099Z" level=info msg="RemoveContainer for \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\"" Dec 13 02:10:05.263472 env[1205]: time="2024-12-13T02:10:05.263441541Z" level=info msg="RemoveContainer for \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\" returns successfully" Dec 13 02:10:05.263625 kubelet[1937]: I1213 02:10:05.263589 1937 scope.go:117] "RemoveContainer" containerID="76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce" Dec 13 02:10:05.264475 env[1205]: time="2024-12-13T02:10:05.264448749Z" level=info msg="RemoveContainer for \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\"" Dec 13 02:10:05.267519 env[1205]: time="2024-12-13T02:10:05.267489049Z" level=info msg="RemoveContainer for \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\" returns successfully" Dec 13 02:10:05.267681 kubelet[1937]: I1213 02:10:05.267654 1937 scope.go:117] "RemoveContainer" containerID="6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8" Dec 13 02:10:05.268396 env[1205]: time="2024-12-13T02:10:05.268367211Z" level=info msg="RemoveContainer for \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\"" Dec 13 02:10:05.271278 env[1205]: time="2024-12-13T02:10:05.271245503Z" level=info msg="RemoveContainer for \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\" returns successfully" Dec 13 02:10:05.271419 kubelet[1937]: I1213 02:10:05.271376 1937 scope.go:117] "RemoveContainer" containerID="ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1" Dec 13 02:10:05.271676 env[1205]: time="2024-12-13T02:10:05.271607081Z" level=error msg="ContainerStatus for \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\": not found" Dec 13 02:10:05.271798 kubelet[1937]: E1213 02:10:05.271775 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\": not found" containerID="ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1" Dec 13 02:10:05.271851 kubelet[1937]: I1213 02:10:05.271800 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1"} err="failed to get container status 
\"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1\": not found" Dec 13 02:10:05.271851 kubelet[1937]: I1213 02:10:05.271820 1937 scope.go:117] "RemoveContainer" containerID="d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5" Dec 13 02:10:05.271995 env[1205]: time="2024-12-13T02:10:05.271956055Z" level=error msg="ContainerStatus for \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\": not found" Dec 13 02:10:05.272072 kubelet[1937]: E1213 02:10:05.272057 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\": not found" containerID="d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5" Dec 13 02:10:05.272119 kubelet[1937]: I1213 02:10:05.272080 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5"} err="failed to get container status \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d50e4b516bb680f4e792a68d6e876c5c52dee94409a597717c77c99206d96bb5\": not found" Dec 13 02:10:05.272119 kubelet[1937]: I1213 02:10:05.272098 1937 scope.go:117] "RemoveContainer" containerID="a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80" Dec 13 02:10:05.272259 env[1205]: time="2024-12-13T02:10:05.272219507Z" level=error msg="ContainerStatus for \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\": not found" Dec 13 02:10:05.272345 kubelet[1937]: E1213 02:10:05.272325 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\": not found" containerID="a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80" Dec 13 02:10:05.272380 kubelet[1937]: I1213 02:10:05.272346 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80"} err="failed to get container status \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\": rpc error: code = NotFound desc = an error occurred when try to find container \"a46ba0c94e6ee8026ee3f80dd04af34dc67692cb59ccf70988f035205ad38f80\": not found" Dec 13 02:10:05.272380 kubelet[1937]: I1213 02:10:05.272360 1937 scope.go:117] "RemoveContainer" containerID="76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce" Dec 13 02:10:05.272560 env[1205]: time="2024-12-13T02:10:05.272498629Z" level=error msg="ContainerStatus for \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\": not found" Dec 13 
02:10:05.272660 kubelet[1937]: E1213 02:10:05.272641 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\": not found" containerID="76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce" Dec 13 02:10:05.272696 kubelet[1937]: I1213 02:10:05.272661 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce"} err="failed to get container status \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"76d4881536bbed478a08fdac890ad7ca54bcbf3b1b42ce8965bd7ce9d4bd45ce\": not found" Dec 13 02:10:05.272696 kubelet[1937]: I1213 02:10:05.272674 1937 scope.go:117] "RemoveContainer" containerID="6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8" Dec 13 02:10:05.272845 env[1205]: time="2024-12-13T02:10:05.272810753Z" level=error msg="ContainerStatus for \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\": not found" Dec 13 02:10:05.272941 kubelet[1937]: E1213 02:10:05.272921 1937 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\": not found" containerID="6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8" Dec 13 02:10:05.272970 kubelet[1937]: I1213 02:10:05.272943 1937 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8"} err="failed to get container status \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ae7d3840263e0fafeacd5ad919ca03ecf055a9d9501d46ab4ec085a76f0a4e8\": not found" Dec 13 02:10:05.540089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffee83f120e856745e0671390711030dd71040f2d474665735b2d0ff43b7eea1-rootfs.mount: Deactivated successfully. Dec 13 02:10:05.540171 systemd[1]: var-lib-kubelet-pods-f1ff450d\x2dd652\x2d4c48\x2db237\x2d7719a8b2e9b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7cqr8.mount: Deactivated successfully. Dec 13 02:10:05.540225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1-rootfs.mount: Deactivated successfully. Dec 13 02:10:05.540289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f24c745c31dd0f70cc3d95b0cadf555b789b7320cdbde6e352722e5f3abbe7a1-shm.mount: Deactivated successfully. Dec 13 02:10:05.540343 systemd[1]: var-lib-kubelet-pods-bc207722\x2d7fd8\x2d494b\x2db54c\x2d35c6f322c23c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zghp.mount: Deactivated successfully. Dec 13 02:10:05.540393 systemd[1]: var-lib-kubelet-pods-bc207722\x2d7fd8\x2d494b\x2db54c\x2d35c6f322c23c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
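The var-lib-kubelet-pods-… mount units in the systemd records above are the escaped form of volume paths under /var/lib/kubelet/pods: systemd uses "-" as the path separator in unit names, so a literal "-" in the path must be hex-escaped as "\x2d", and other reserved bytes such as "~" become "\x7e". A simplified sketch of that escaping (cf. systemd-escape --path; the real rules also special-case leading dots and empty paths):

package main

import (
	"fmt"
	"strings"
)

func escapeChar(c byte) string {
	if c == '/' {
		return "-" // systemd's path separator inside unit names
	}
	isSafe := c == '.' || c == '_' || c == ':' ||
		(c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9')
	if isSafe {
		return string(c)
	}
	return fmt.Sprintf(`\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
}

// EscapePath mimics systemd path escaping closely enough to reproduce
// the unit names in this log.
func EscapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		b.WriteString(escapeChar(p[i]))
	}
	return b.String() + ".mount"
}

func main() {
	fmt.Println(EscapePath(
		"/var/lib/kubelet/pods/bc207722-7fd8-494b-b54c-35c6f322c23c/volumes/kubernetes.io~projected/kube-api-access-8zghp"))
	// var-lib-kubelet-pods-bc207722\x2d7fd8\x2d494b\x2db54c\x2d35c6f322c23c-
	// volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8zghp.mount
}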
Dec 13 02:10:05.540443 systemd[1]: var-lib-kubelet-pods-bc207722\x2d7fd8\x2d494b\x2db54c\x2d35c6f322c23c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:10:06.485934 sshd[3556]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:06.488368 systemd[1]: sshd@22-10.0.0.140:22-10.0.0.1:41944.service: Deactivated successfully. Dec 13 02:10:06.488904 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:10:06.489400 systemd-logind[1196]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:10:06.490357 systemd[1]: Started sshd@23-10.0.0.140:22-10.0.0.1:59606.service. Dec 13 02:10:06.491817 systemd-logind[1196]: Removed session 23. Dec 13 02:10:06.521564 sshd[3718]: Accepted publickey for core from 10.0.0.1 port 59606 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:10:06.522624 sshd[3718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:06.525924 systemd-logind[1196]: New session 24 of user core. Dec 13 02:10:06.526754 systemd[1]: Started session-24.scope. Dec 13 02:10:07.068331 sshd[3718]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:07.072000 systemd[1]: Started sshd@24-10.0.0.140:22-10.0.0.1:59616.service. Dec 13 02:10:07.076329 kubelet[1937]: I1213 02:10:07.076291 1937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" path="/var/lib/kubelet/pods/bc207722-7fd8-494b-b54c-35c6f322c23c/volumes" Dec 13 02:10:07.076877 kubelet[1937]: I1213 02:10:07.076849 1937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1ff450d-d652-4c48-b237-7719a8b2e9b6" path="/var/lib/kubelet/pods/f1ff450d-d652-4c48-b237-7719a8b2e9b6/volumes" Dec 13 02:10:07.077375 systemd[1]: sshd@23-10.0.0.140:22-10.0.0.1:59606.service: Deactivated successfully. Dec 13 02:10:07.077962 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:10:07.078944 systemd-logind[1196]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:10:07.079872 systemd-logind[1196]: Removed session 24. 
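The sshd records follow a fixed shape: a per-connection service unit, an "Accepted publickey" line, a logind session scope, then the matching "session closed". When auditing churn like sessions 23 through 26 here, it can help to reduce the journal to open/close pairs; a small hypothetical filter, with regexes matching only the exact message shapes seen in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	accept := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	closed := regexp.MustCompile(`pam_unix\(sshd:session\): session closed for user (\S+)`)

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := accept.FindStringSubmatch(line); m != nil {
			fmt.Printf("OPEN  user=%s addr=%s:%s\n", m[1], m[2], m[3])
		} else if m := closed.FindStringSubmatch(line); m != nil {
			fmt.Printf("CLOSE user=%s\n", m[1])
		}
	}
}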
Dec 13 02:10:07.086246 kubelet[1937]: E1213 02:10:07.086213 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="mount-cgroup" Dec 13 02:10:07.086246 kubelet[1937]: E1213 02:10:07.086238 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="clean-cilium-state" Dec 13 02:10:07.086246 kubelet[1937]: E1213 02:10:07.086244 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="cilium-agent" Dec 13 02:10:07.086246 kubelet[1937]: E1213 02:10:07.086250 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1ff450d-d652-4c48-b237-7719a8b2e9b6" containerName="cilium-operator" Dec 13 02:10:07.086246 kubelet[1937]: E1213 02:10:07.086256 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="apply-sysctl-overwrites" Dec 13 02:10:07.086490 kubelet[1937]: E1213 02:10:07.086262 1937 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="mount-bpf-fs" Dec 13 02:10:07.086490 kubelet[1937]: I1213 02:10:07.086284 1937 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc207722-7fd8-494b-b54c-35c6f322c23c" containerName="cilium-agent" Dec 13 02:10:07.086490 kubelet[1937]: I1213 02:10:07.086290 1937 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ff450d-d652-4c48-b237-7719a8b2e9b6" containerName="cilium-operator" Dec 13 02:10:07.095188 systemd[1]: Created slice kubepods-burstable-pod2dbcfdab_a833_4e01_9029_afcfe506cbab.slice. Dec 13 02:10:07.114816 sshd[3729]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:10:07.115962 sshd[3729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:07.119672 systemd-logind[1196]: New session 25 of user core. Dec 13 02:10:07.120165 systemd[1]: Started session-25.scope. Dec 13 02:10:07.234949 sshd[3729]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:07.238065 systemd[1]: sshd@24-10.0.0.140:22-10.0.0.1:59616.service: Deactivated successfully. Dec 13 02:10:07.238628 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:10:07.239282 systemd-logind[1196]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:10:07.240781 systemd[1]: Started sshd@25-10.0.0.140:22-10.0.0.1:59632.service. Dec 13 02:10:07.241748 systemd-logind[1196]: Removed session 25. 
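The "Removed slice kubepods-burstable-podbc207722_…" and "Created slice kubepods-burstable-pod2dbcfdab_…" records show the kubelet's systemd cgroup driver: each pod gets a slice whose leaf name embeds its QoS class and UID, with the UID's dashes rewritten to underscores so "-" stays free as systemd's hierarchy separator. A toy version of that naming — the real cgroup name also nests under kubepods.slice/kubepods-burstable.slice:

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the leaf slice unit names seen in this log.
func podSliceName(qosClass, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "2dbcfdab-a833-4e01-9029-afcfe506cbab"))
	// kubepods-burstable-pod2dbcfdab_a833_4e01_9029_afcfe506cbab.slice
}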
Dec 13 02:10:07.244670 kubelet[1937]: E1213 02:10:07.244628 1937 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-gklfw lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-4fshz" podUID="2dbcfdab-a833-4e01-9029-afcfe506cbab" Dec 13 02:10:07.252231 kubelet[1937]: I1213 02:10:07.252201 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-xtables-lock\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252290 kubelet[1937]: I1213 02:10:07.252243 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-hostproc\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252290 kubelet[1937]: I1213 02:10:07.252267 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-ipsec-secrets\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252366 kubelet[1937]: I1213 02:10:07.252294 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-net\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252366 kubelet[1937]: I1213 02:10:07.252313 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-bpf-maps\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252366 kubelet[1937]: I1213 02:10:07.252332 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-etc-cni-netd\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252366 kubelet[1937]: I1213 02:10:07.252350 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-lib-modules\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252366 kubelet[1937]: I1213 02:10:07.252365 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-cgroup\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252378 1937 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-clustermesh-secrets\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252405 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-kernel\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252425 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-hubble-tls\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252444 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-run\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252464 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gklfw\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-kube-api-access-gklfw\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252486 kubelet[1937]: I1213 02:10:07.252479 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cni-path\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.252723 kubelet[1937]: I1213 02:10:07.252494 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-config-path\") pod \"cilium-4fshz\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " pod="kube-system/cilium-4fshz" Dec 13 02:10:07.270885 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 59632 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:10:07.272026 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:10:07.275315 systemd-logind[1196]: New session 26 of user core. Dec 13 02:10:07.276074 systemd[1]: Started session-26.scope. 
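The reconciler_common.go lines above and below are the kubelet's volume manager diffing desired state (the volumes the pending cilium-4fshz pod needs) against actual state (what is currently attached and mounted), emitting VerifyControllerAttachedVolume for missing volumes, UnmountVolume for stale ones, and "Volume detached" once teardown completes. Stripped to its core the loop is a set difference; a toy sketch, noting that the real reconciler also tracks per-volume mount state, device paths, and contexts:

package main

import (
	"fmt"
	"sort"
)

// reconcile returns the volumes to mount (desired but not actual) and
// to unmount (actual but no longer desired).
func reconcile(desired, actual map[string]bool) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	sort.Strings(mount)
	sort.Strings(unmount)
	return
}

func main() {
	desired := map[string]bool{"cilium-run": true, "hubble-tls": true}
	actual := map[string]bool{"hubble-tls": true, "clustermesh-secrets": true}
	m, u := reconcile(desired, actual)
	fmt.Println("mount:", m)   // mount: [cilium-run]
	fmt.Println("unmount:", u) // unmount: [clustermesh-secrets]
}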
Dec 13 02:10:07.554370 kubelet[1937]: I1213 02:10:07.554304 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-lib-modules\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554370 kubelet[1937]: I1213 02:10:07.554345 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-cgroup\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554370 kubelet[1937]: I1213 02:10:07.554370 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-ipsec-secrets\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554385 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-run\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554398 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cni-path\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554413 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-bpf-maps\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554429 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gklfw\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-kube-api-access-gklfw\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554442 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-etc-cni-netd\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554567 kubelet[1937]: I1213 02:10:07.554491 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-net\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554508 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-hubble-tls\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554522 1937 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-hostproc\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554534 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-kernel\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554546 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-xtables-lock\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554562 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-config-path\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554751 kubelet[1937]: I1213 02:10:07.554589 1937 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-clustermesh-secrets\") pod \"2dbcfdab-a833-4e01-9029-afcfe506cbab\" (UID: \"2dbcfdab-a833-4e01-9029-afcfe506cbab\") " Dec 13 02:10:07.554903 kubelet[1937]: I1213 02:10:07.554423 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.554903 kubelet[1937]: I1213 02:10:07.554431 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.554903 kubelet[1937]: I1213 02:10:07.554461 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cni-path" (OuterVolumeSpecName: "cni-path") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.554903 kubelet[1937]: I1213 02:10:07.554493 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.554903 kubelet[1937]: I1213 02:10:07.554521 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.555019 kubelet[1937]: I1213 02:10:07.554535 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.555019 kubelet[1937]: I1213 02:10:07.554799 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-hostproc" (OuterVolumeSpecName: "hostproc") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.555019 kubelet[1937]: I1213 02:10:07.554871 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.557314 kubelet[1937]: I1213 02:10:07.557286 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:10:07.557428 kubelet[1937]: I1213 02:10:07.557411 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.557526 kubelet[1937]: I1213 02:10:07.557507 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:10:07.557626 kubelet[1937]: I1213 02:10:07.557514 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-kube-api-access-gklfw" (OuterVolumeSpecName: "kube-api-access-gklfw") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "kube-api-access-gklfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:10:07.558612 systemd[1]: var-lib-kubelet-pods-2dbcfdab\x2da833\x2d4e01\x2d9029\x2dafcfe506cbab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgklfw.mount: Deactivated successfully. Dec 13 02:10:07.558723 systemd[1]: var-lib-kubelet-pods-2dbcfdab\x2da833\x2d4e01\x2d9029\x2dafcfe506cbab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:10:07.559086 kubelet[1937]: I1213 02:10:07.559067 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:10:07.559230 kubelet[1937]: I1213 02:10:07.559212 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:10:07.559399 kubelet[1937]: I1213 02:10:07.559363 1937 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2dbcfdab-a833-4e01-9029-afcfe506cbab" (UID: "2dbcfdab-a833-4e01-9029-afcfe506cbab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:10:07.655297 kubelet[1937]: I1213 02:10:07.655253 1937 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655297 kubelet[1937]: I1213 02:10:07.655280 1937 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gklfw\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-kube-api-access-gklfw\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655297 kubelet[1937]: I1213 02:10:07.655290 1937 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655297 kubelet[1937]: I1213 02:10:07.655298 1937 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655308 1937 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655316 1937 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2dbcfdab-a833-4e01-9029-afcfe506cbab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655323 1937 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655330 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655337 1937 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655346 1937 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655352 1937 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655428 kubelet[1937]: I1213 02:10:07.655359 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655627 kubelet[1937]: I1213 02:10:07.655366 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655627 kubelet[1937]: I1213 02:10:07.655373 1937 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:07.655627 kubelet[1937]: I1213 02:10:07.655380 1937 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2dbcfdab-a833-4e01-9029-afcfe506cbab-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:10:08.110674 kubelet[1937]: E1213 02:10:08.110633 1937 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:10:08.253484 systemd[1]: Removed slice kubepods-burstable-pod2dbcfdab_a833_4e01_9029_afcfe506cbab.slice. Dec 13 02:10:08.286191 systemd[1]: Created slice kubepods-burstable-pod37d1893c_3704_45ee_826f_d3d1363a912b.slice. Dec 13 02:10:08.359201 systemd[1]: var-lib-kubelet-pods-2dbcfdab\x2da833\x2d4e01\x2d9029\x2dafcfe506cbab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:10:08.359309 systemd[1]: var-lib-kubelet-pods-2dbcfdab\x2da833\x2d4e01\x2d9029\x2dafcfe506cbab-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
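The "Container runtime network not ready … NetworkPluginNotReady" error above is the kubelet surfacing the runtime's status conditions: with every Cilium conf gone from /etc/cni/net.d, containerd reports NetworkReady=false, and the node can only run host-network pods until the replacement cilium-k9v7h agent writes a config back. A condensed sketch of that readiness check, with types shortened from the real CRI RuntimeStatus:

package main

import "fmt"

// Condition is a pared-down stand-in for the CRI RuntimeCondition.
type Condition struct {
	Type    string
	Status  bool
	Reason  string
	Message string
}

// networkReady fails when the runtime's NetworkReady condition is false.
func networkReady(conds []Condition) error {
	for _, c := range conds {
		if c.Type == "NetworkReady" && !c.Status {
			return fmt.Errorf("container runtime network not ready: %s: %s", c.Reason, c.Message)
		}
	}
	return nil
}

func main() {
	conds := []Condition{
		{Type: "RuntimeReady", Status: true},
		{Type: "NetworkReady", Status: false,
			Reason:  "NetworkPluginNotReady",
			Message: "Network plugin returns error: cni plugin not initialized"},
	}
	fmt.Println(networkReady(conds))
}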
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459079 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-cilium-run\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459112 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-cilium-cgroup\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459131 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37d1893c-3704-45ee-826f-d3d1363a912b-hubble-tls\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459145 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-bpf-maps\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459158 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37d1893c-3704-45ee-826f-d3d1363a912b-clustermesh-secrets\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459339 kubelet[1937]: I1213 02:10:08.459174 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/37d1893c-3704-45ee-826f-d3d1363a912b-cilium-ipsec-secrets\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459205 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkgx\" (UniqueName: \"kubernetes.io/projected/37d1893c-3704-45ee-826f-d3d1363a912b-kube-api-access-dmkgx\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459229 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-cni-path\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459248 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37d1893c-3704-45ee-826f-d3d1363a912b-cilium-config-path\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459265 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-host-proc-sys-kernel\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459287 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-hostproc\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459519 kubelet[1937]: I1213 02:10:08.459301 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-etc-cni-netd\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459704 kubelet[1937]: I1213 02:10:08.459319 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-lib-modules\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459704 kubelet[1937]: I1213 02:10:08.459334 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-xtables-lock\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.459704 kubelet[1937]: I1213 02:10:08.459347 1937 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37d1893c-3704-45ee-826f-d3d1363a912b-host-proc-sys-net\") pod \"cilium-k9v7h\" (UID: \"37d1893c-3704-45ee-826f-d3d1363a912b\") " pod="kube-system/cilium-k9v7h"
Dec 13 02:10:08.588875 kubelet[1937]: E1213 02:10:08.588848 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:08.589362 env[1205]: time="2024-12-13T02:10:08.589307287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9v7h,Uid:37d1893c-3704-45ee-826f-d3d1363a912b,Namespace:kube-system,Attempt:0,}"
Dec 13 02:10:08.602121 env[1205]: time="2024-12-13T02:10:08.602053144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:10:08.602121 env[1205]: time="2024-12-13T02:10:08.602091678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:10:08.602121 env[1205]: time="2024-12-13T02:10:08.602102289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:10:08.602300 env[1205]: time="2024-12-13T02:10:08.602227486Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44 pid=3773 runtime=io.containerd.runc.v2
Dec 13 02:10:08.612084 systemd[1]: Started cri-containerd-053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44.scope.
Dec 13 02:10:08.631041 env[1205]: time="2024-12-13T02:10:08.630988100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k9v7h,Uid:37d1893c-3704-45ee-826f-d3d1363a912b,Namespace:kube-system,Attempt:0,} returns sandbox id \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\""
Dec 13 02:10:08.631711 kubelet[1937]: E1213 02:10:08.631685 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:08.634490 env[1205]: time="2024-12-13T02:10:08.633705850Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:10:08.646763 env[1205]: time="2024-12-13T02:10:08.646722553Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96\""
Dec 13 02:10:08.647147 env[1205]: time="2024-12-13T02:10:08.647108257Z" level=info msg="StartContainer for \"b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96\""
Dec 13 02:10:08.660597 systemd[1]: Started cri-containerd-b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96.scope.
Dec 13 02:10:08.684252 env[1205]: time="2024-12-13T02:10:08.684216150Z" level=info msg="StartContainer for \"b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96\" returns successfully"
Dec 13 02:10:08.691492 systemd[1]: cri-containerd-b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96.scope: Deactivated successfully.
Dec 13 02:10:08.720306 env[1205]: time="2024-12-13T02:10:08.720190257Z" level=info msg="shim disconnected" id=b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96
Dec 13 02:10:08.720306 env[1205]: time="2024-12-13T02:10:08.720237948Z" level=warning msg="cleaning up after shim disconnected" id=b4bdf54fe6ed307be91807a3d51ac5bebe33b0622c4336e1a5a68b7c40eacb96 namespace=k8s.io
Dec 13 02:10:08.720306 env[1205]: time="2024-12-13T02:10:08.720247376Z" level=info msg="cleaning up dead shim"
Dec 13 02:10:08.726497 env[1205]: time="2024-12-13T02:10:08.726455296Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3854 runtime=io.containerd.runc.v2\n"
Dec 13 02:10:09.069610 kubelet[1937]: I1213 02:10:09.069546 1937 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbcfdab-a833-4e01-9029-afcfe506cbab" path="/var/lib/kubelet/pods/2dbcfdab-a833-4e01-9029-afcfe506cbab/volumes"
Dec 13 02:10:09.253928 kubelet[1937]: E1213 02:10:09.253892 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:09.256314 env[1205]: time="2024-12-13T02:10:09.256261015Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:10:09.268907 env[1205]: time="2024-12-13T02:10:09.268858833Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d\""
Dec 13 02:10:09.269348 env[1205]: time="2024-12-13T02:10:09.269305612Z" level=info msg="StartContainer for \"2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d\""
Dec 13 02:10:09.282271 systemd[1]: Started cri-containerd-2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d.scope.
Dec 13 02:10:09.303273 env[1205]: time="2024-12-13T02:10:09.303232473Z" level=info msg="StartContainer for \"2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d\" returns successfully"
Dec 13 02:10:09.307658 systemd[1]: cri-containerd-2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d.scope: Deactivated successfully.
Dec 13 02:10:09.328975 env[1205]: time="2024-12-13T02:10:09.328882483Z" level=info msg="shim disconnected" id=2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d
Dec 13 02:10:09.328975 env[1205]: time="2024-12-13T02:10:09.328922019Z" level=warning msg="cleaning up after shim disconnected" id=2f14f7dc01edc12b2af13a10447c85a48fd5dcc0120c159576708c171b37f25d namespace=k8s.io
Dec 13 02:10:09.328975 env[1205]: time="2024-12-13T02:10:09.328930484Z" level=info msg="cleaning up dead shim"
Dec 13 02:10:09.334497 env[1205]: time="2024-12-13T02:10:09.334468667Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3915 runtime=io.containerd.runc.v2\n"
Dec 13 02:10:10.259237 kubelet[1937]: E1213 02:10:10.259202 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:10.260815 env[1205]: time="2024-12-13T02:10:10.260768860Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:10:10.281223 env[1205]: time="2024-12-13T02:10:10.281157181Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58\""
Dec 13 02:10:10.281816 env[1205]: time="2024-12-13T02:10:10.281781608Z" level=info msg="StartContainer for \"a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58\""
Dec 13 02:10:10.299053 systemd[1]: Started cri-containerd-a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58.scope.
Dec 13 02:10:10.322128 env[1205]: time="2024-12-13T02:10:10.322081626Z" level=info msg="StartContainer for \"a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58\" returns successfully"
Dec 13 02:10:10.323763 systemd[1]: cri-containerd-a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58.scope: Deactivated successfully.
Dec 13 02:10:10.355037 env[1205]: time="2024-12-13T02:10:10.354984450Z" level=info msg="shim disconnected" id=a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58
Dec 13 02:10:10.355037 env[1205]: time="2024-12-13T02:10:10.355038182Z" level=warning msg="cleaning up after shim disconnected" id=a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58 namespace=k8s.io
Dec 13 02:10:10.355225 env[1205]: time="2024-12-13T02:10:10.355047459Z" level=info msg="cleaning up dead shim"
Dec 13 02:10:10.359281 systemd[1]: run-containerd-runc-k8s.io-a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58-runc.2iIAD7.mount: Deactivated successfully.
Dec 13 02:10:10.359382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a51a40536a9b1352505656f887d63111cfff106444a65370e402f013abf4ee58-rootfs.mount: Deactivated successfully.
Dec 13 02:10:10.362251 env[1205]: time="2024-12-13T02:10:10.362210588Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3970 runtime=io.containerd.runc.v2\n"
Dec 13 02:10:11.261979 kubelet[1937]: E1213 02:10:11.261947 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:11.264518 env[1205]: time="2024-12-13T02:10:11.264471787Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:10:11.278983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702850616.mount: Deactivated successfully.
Dec 13 02:10:11.282274 env[1205]: time="2024-12-13T02:10:11.282213707Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383\""
Dec 13 02:10:11.282755 env[1205]: time="2024-12-13T02:10:11.282725670Z" level=info msg="StartContainer for \"2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383\""
Dec 13 02:10:11.296457 systemd[1]: Started cri-containerd-2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383.scope.
Dec 13 02:10:11.319619 systemd[1]: cri-containerd-2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383.scope: Deactivated successfully.
Dec 13 02:10:11.321205 env[1205]: time="2024-12-13T02:10:11.321155459Z" level=info msg="StartContainer for \"2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383\" returns successfully"
Dec 13 02:10:11.341299 env[1205]: time="2024-12-13T02:10:11.341250351Z" level=info msg="shim disconnected" id=2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383
Dec 13 02:10:11.341299 env[1205]: time="2024-12-13T02:10:11.341294906Z" level=warning msg="cleaning up after shim disconnected" id=2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383 namespace=k8s.io
Dec 13 02:10:11.341299 env[1205]: time="2024-12-13T02:10:11.341302520Z" level=info msg="cleaning up dead shim"
Dec 13 02:10:11.348564 env[1205]: time="2024-12-13T02:10:11.348538994Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:10:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n"
Dec 13 02:10:11.359256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2879282c73416e9a913ceb22f879aaf69a6a6ceb90b6268517ba1348beabe383-rootfs.mount: Deactivated successfully.
Dec 13 02:10:12.266057 kubelet[1937]: E1213 02:10:12.266019 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:12.267931 env[1205]: time="2024-12-13T02:10:12.267880398Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:10:12.303175 env[1205]: time="2024-12-13T02:10:12.303125284Z" level=info msg="CreateContainer within sandbox \"053df18b488f7916bad0eb8ac11ca3419fa264db5d33548e3e5738552edc5f44\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f\""
Dec 13 02:10:12.304071 env[1205]: time="2024-12-13T02:10:12.304051384Z" level=info msg="StartContainer for \"b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f\""
Dec 13 02:10:12.330615 systemd[1]: Started cri-containerd-b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f.scope.
Dec 13 02:10:12.356482 env[1205]: time="2024-12-13T02:10:12.356401671Z" level=info msg="StartContainer for \"b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f\" returns successfully"
Dec 13 02:10:12.617621 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:10:13.270355 kubelet[1937]: E1213 02:10:13.270319 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:13.282709 kubelet[1937]: I1213 02:10:13.282643 1937 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k9v7h" podStartSLOduration=5.282622594 podStartE2EDuration="5.282622594s" podCreationTimestamp="2024-12-13 02:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:13.281872469 +0000 UTC m=+90.284345646" watchObservedRunningTime="2024-12-13 02:10:13.282622594 +0000 UTC m=+90.285095771"
Dec 13 02:10:14.589853 kubelet[1937]: E1213 02:10:14.589784 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:15.091483 systemd-networkd[1030]: lxc_health: Link UP
Dec 13 02:10:15.099053 systemd-networkd[1030]: lxc_health: Gained carrier
Dec 13 02:10:15.099680 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:10:15.546398 systemd[1]: run-containerd-runc-k8s.io-b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f-runc.2hdNua.mount: Deactivated successfully.
Dec 13 02:10:16.441776 systemd-networkd[1030]: lxc_health: Gained IPv6LL
Dec 13 02:10:16.590121 kubelet[1937]: E1213 02:10:16.590061 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:17.067752 kubelet[1937]: E1213 02:10:17.067701 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:17.277233 kubelet[1937]: E1213 02:10:17.277202 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:18.067791 kubelet[1937]: E1213 02:10:18.067475 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:18.067791 kubelet[1937]: E1213 02:10:18.067706 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:18.279145 kubelet[1937]: E1213 02:10:18.279105 1937 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:10:19.709922 systemd[1]: run-containerd-runc-k8s.io-b4ed874bddc8cf3683558b4170c5dc9e03cf1051f19100a267a619c9f4bebc4f-runc.8KQhI3.mount: Deactivated successfully.
Dec 13 02:10:21.833636 sshd[3744]: pam_unix(sshd:session): session closed for user core
Dec 13 02:10:21.835946 systemd[1]: sshd@25-10.0.0.140:22-10.0.0.1:59632.service: Deactivated successfully.
Dec 13 02:10:21.836740 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:10:21.837324 systemd-logind[1196]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:10:21.837990 systemd-logind[1196]: Removed session 26.