Dec 13 14:18:27.127554 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:18:27.127580 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:27.127591 kernel: BIOS-provided physical RAM map: Dec 13 14:18:27.127599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:18:27.127606 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:18:27.127613 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:18:27.127623 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 14:18:27.127631 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 14:18:27.127640 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:18:27.127648 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 14:18:27.127655 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:18:27.127663 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:18:27.127670 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 14:18:27.127678 kernel: NX (Execute Disable) protection: active Dec 13 14:18:27.127689 kernel: SMBIOS 2.8 present. Dec 13 14:18:27.127698 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 14:18:27.127706 kernel: Hypervisor detected: KVM Dec 13 14:18:27.127714 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:18:27.127722 kernel: kvm-clock: cpu 0, msr 6c19a001, primary cpu clock Dec 13 14:18:27.127741 kernel: kvm-clock: using sched offset of 2950397890 cycles Dec 13 14:18:27.127750 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:18:27.127759 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:18:27.127767 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:18:27.127778 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:18:27.127786 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 14:18:27.127799 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:18:27.127807 kernel: Using GB pages for direct mapping Dec 13 14:18:27.127815 kernel: ACPI: Early table checksum verification disabled Dec 13 14:18:27.127824 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 14:18:27.127832 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127840 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127885 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127896 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 14:18:27.127905 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127914 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127923 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127931 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:27.127940 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 14:18:27.127949 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 14:18:27.127958 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 14:18:27.127971 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 14:18:27.127980 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 14:18:27.127989 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 14:18:27.127998 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 14:18:27.128007 kernel: No NUMA configuration found Dec 13 14:18:27.128017 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 14:18:27.128027 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 14:18:27.128037 kernel: Zone ranges: Dec 13 14:18:27.128046 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:18:27.128055 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 14:18:27.128064 kernel: Normal empty Dec 13 14:18:27.128073 kernel: Movable zone start for each node Dec 13 14:18:27.128082 kernel: Early memory node ranges Dec 13 14:18:27.128091 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:18:27.128101 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 14:18:27.128111 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 14:18:27.128120 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:18:27.128130 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:18:27.128139 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 14:18:27.128148 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:18:27.128157 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:18:27.128166 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:18:27.128176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:18:27.128185 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:18:27.128194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:18:27.128205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:18:27.128214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:18:27.128223 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:18:27.128232 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:18:27.128241 kernel: TSC deadline timer available Dec 13 14:18:27.128251 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:18:27.128260 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:18:27.128269 kernel: kvm-guest: setup PV sched yield Dec 13 14:18:27.128279 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 14:18:27.128289 kernel: Booting paravirtualized kernel on KVM Dec 13 14:18:27.128299 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:18:27.128308 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:18:27.128318 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 14:18:27.128327 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:18:27.128336 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:18:27.128345 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:18:27.128354 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 14:18:27.128363 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:18:27.128374 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:18:27.128383 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 14:18:27.128392 kernel: Policy zone: DMA32 Dec 13 14:18:27.128403 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:27.128413 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:18:27.128422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:18:27.128432 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:18:27.128441 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:18:27.128452 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved) Dec 13 14:18:27.128462 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:18:27.128471 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:18:27.128480 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:18:27.128489 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:18:27.128499 kernel: rcu: RCU event tracing is enabled. Dec 13 14:18:27.128509 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:18:27.128518 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:18:27.128528 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:18:27.128539 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:18:27.128548 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:18:27.128558 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:18:27.128567 kernel: random: crng init done Dec 13 14:18:27.128576 kernel: Console: colour VGA+ 80x25 Dec 13 14:18:27.128585 kernel: printk: console [ttyS0] enabled Dec 13 14:18:27.128595 kernel: ACPI: Core revision 20210730 Dec 13 14:18:27.128604 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:18:27.128614 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:18:27.128624 kernel: x2apic enabled Dec 13 14:18:27.128634 kernel: Switched APIC routing to physical x2apic. Dec 13 14:18:27.128643 kernel: kvm-guest: setup PV IPIs Dec 13 14:18:27.128652 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:18:27.128661 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:18:27.128671 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 14:18:27.128680 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:18:27.128690 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:18:27.128700 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:18:27.128717 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:18:27.128726 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:18:27.128745 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:18:27.128756 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:18:27.128766 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:18:27.128776 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:18:27.128785 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:18:27.128795 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:18:27.128805 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:18:27.128816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:18:27.128826 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:18:27.128836 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:18:27.128857 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:18:27.128867 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:18:27.128876 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:18:27.128886 kernel: LSM: Security Framework initializing Dec 13 14:18:27.128896 kernel: SELinux: Initializing. Dec 13 14:18:27.128907 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:27.128918 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:27.128928 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:18:27.128937 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:18:27.128947 kernel: ... version: 0 Dec 13 14:18:27.128956 kernel: ... bit width: 48 Dec 13 14:18:27.128966 kernel: ... generic registers: 6 Dec 13 14:18:27.128975 kernel: ... value mask: 0000ffffffffffff Dec 13 14:18:27.128985 kernel: ... max period: 00007fffffffffff Dec 13 14:18:27.128996 kernel: ... fixed-purpose events: 0 Dec 13 14:18:27.129006 kernel: ... event mask: 000000000000003f Dec 13 14:18:27.129016 kernel: signal: max sigframe size: 1776 Dec 13 14:18:27.129026 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:18:27.129035 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:18:27.129045 kernel: x86: Booting SMP configuration: Dec 13 14:18:27.129054 kernel: .... 
node #0, CPUs: #1 Dec 13 14:18:27.129064 kernel: kvm-clock: cpu 1, msr 6c19a041, secondary cpu clock Dec 13 14:18:27.129074 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:18:27.129085 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 14:18:27.129094 kernel: #2 Dec 13 14:18:27.129104 kernel: kvm-clock: cpu 2, msr 6c19a081, secondary cpu clock Dec 13 14:18:27.129114 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:18:27.129123 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 14:18:27.129133 kernel: #3 Dec 13 14:18:27.129143 kernel: kvm-clock: cpu 3, msr 6c19a0c1, secondary cpu clock Dec 13 14:18:27.129152 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:18:27.129161 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 14:18:27.129173 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:18:27.129182 kernel: smpboot: Max logical packages: 1 Dec 13 14:18:27.129192 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:18:27.129201 kernel: devtmpfs: initialized Dec 13 14:18:27.129211 kernel: x86/mm: Memory block size: 128MB Dec 13 14:18:27.129221 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:18:27.129231 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:18:27.129241 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:18:27.129251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:18:27.129262 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:18:27.129272 kernel: audit: type=2000 audit(1734099506.161:1): state=initialized audit_enabled=0 res=1 Dec 13 14:18:27.129282 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:18:27.129291 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:18:27.129301 kernel: cpuidle: using governor menu Dec 13 14:18:27.129310 kernel: ACPI: bus type PCI registered Dec 13 14:18:27.129320 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:18:27.129330 kernel: dca service started, version 1.12.1 Dec 13 14:18:27.129340 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:18:27.129352 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:18:27.129361 kernel: PCI: Using configuration type 1 for base access Dec 13 14:18:27.129371 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
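Note: the per-CPU kvm-clock registrations above are what later let the kernel switch its clocksource away from the unsynchronized TSC (the switch itself is logged further down). From a shell inside a booted guest, the active clocksource can be checked via sysfs, e.g.:

  $ cat /sys/devices/system/clocksource/clocksource0/current_clocksource
  kvm-clock

(The expected output is inferred from this log, not captured from this machine.)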
Dec 13 14:18:27.129381 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:18:27.129391 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:18:27.129400 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:18:27.129410 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:18:27.129419 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:18:27.129429 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:18:27.129440 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:18:27.129449 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:18:27.129459 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:18:27.129469 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:18:27.129478 kernel: ACPI: Interpreter enabled Dec 13 14:18:27.129488 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:18:27.129498 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:18:27.129507 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:18:27.129517 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:18:27.129527 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:18:27.129699 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:18:27.129811 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:18:27.129937 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:18:27.129953 kernel: PCI host bridge to bus 0000:00 Dec 13 14:18:27.130053 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:18:27.130140 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:18:27.130229 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:18:27.130319 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 14:18:27.130419 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:27.130504 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 14:18:27.130603 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:18:27.130767 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:18:27.130921 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:18:27.131053 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 14:18:27.131152 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 14:18:27.131262 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 14:18:27.131374 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:18:27.131510 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:18:27.131638 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 14:18:27.131787 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 14:18:27.131930 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 14:18:27.132079 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:18:27.132208 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:18:27.132329 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 14:18:27.132460 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 14:18:27.132599 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:18:27.132743 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 14:18:27.132912 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 14:18:27.133015 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 14:18:27.133113 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 14:18:27.133222 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:18:27.133320 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:18:27.133427 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:18:27.133533 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 14:18:27.133626 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 14:18:27.133727 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:18:27.133835 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 14:18:27.133862 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:18:27.133873 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:18:27.133883 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:18:27.133896 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:18:27.133906 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:18:27.133915 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:18:27.133925 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:18:27.133935 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:18:27.133945 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:18:27.133955 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:18:27.133964 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 14:18:27.133974 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:18:27.133986 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:18:27.133995 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:18:27.134005 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:18:27.134015 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 14:18:27.134025 kernel: iommu: Default domain type: Translated Dec 13 14:18:27.134034 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:18:27.134136 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:18:27.134231 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:18:27.134324 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:18:27.134340 kernel: vgaarb: loaded Dec 13 14:18:27.134351 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:18:27.134361 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:18:27.134371 kernel: PTP clock support registered Dec 13 14:18:27.134381 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:18:27.134391 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:18:27.134400 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:18:27.134410 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 14:18:27.134420 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:18:27.134431 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:18:27.134441 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:18:27.134451 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:18:27.134461 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:18:27.134470 kernel: pnp: PnP ACPI init Dec 13 14:18:27.134577 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:18:27.134592 kernel: pnp: PnP ACPI: found 6 devices Dec 13 14:18:27.134603 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:18:27.134615 kernel: NET: Registered PF_INET protocol family Dec 13 14:18:27.134625 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:18:27.134635 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:18:27.134645 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:18:27.134655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:18:27.134665 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:18:27.134675 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:18:27.134684 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:27.134696 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:27.134707 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:18:27.134717 kernel: NET: Registered PF_XDP protocol family Dec 13 14:18:27.134819 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:18:27.134918 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:18:27.135004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:18:27.135086 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:18:27.135173 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:27.135260 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 14:18:27.135277 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:18:27.135288 kernel: Initialise system trusted keyrings Dec 13 14:18:27.135298 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:18:27.135308 kernel: Key type asymmetric registered Dec 13 14:18:27.135317 kernel: Asymmetric key parser 'x509' registered Dec 13 14:18:27.135327 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:18:27.135337 kernel: io scheduler mq-deadline registered Dec 13 14:18:27.135347 kernel: io scheduler kyber registered Dec 13 14:18:27.135356 kernel: io scheduler bfq registered Dec 13 14:18:27.135368 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:18:27.135379 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:18:27.135389 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
14:18:27.135398 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:18:27.135408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:18:27.135418 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:18:27.135428 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:18:27.135439 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:18:27.135449 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:18:27.139989 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:18:27.140027 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:18:27.140098 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:18:27.140193 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:18:26 UTC (1734099506) Dec 13 14:18:27.140266 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:18:27.140278 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:18:27.140286 kernel: Segment Routing with IPv6 Dec 13 14:18:27.140295 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:18:27.140309 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:18:27.140316 kernel: Key type dns_resolver registered Dec 13 14:18:27.140323 kernel: IPI shorthand broadcast: enabled Dec 13 14:18:27.140331 kernel: sched_clock: Marking stable (404334964, 101398039)->(550102829, -44369826) Dec 13 14:18:27.140339 kernel: registered taskstats version 1 Dec 13 14:18:27.140346 kernel: Loading compiled-in X.509 certificates Dec 13 14:18:27.140354 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:18:27.140361 kernel: Key type .fscrypt registered Dec 13 14:18:27.140368 kernel: Key type fscrypt-provisioning registered Dec 13 14:18:27.140378 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:18:27.140385 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:18:27.140392 kernel: ima: No architecture policies found Dec 13 14:18:27.140400 kernel: clk: Disabling unused clocks Dec 13 14:18:27.140407 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:18:27.140415 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:18:27.140422 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:18:27.140429 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:18:27.140436 kernel: Run /init as init process Dec 13 14:18:27.140445 kernel: with arguments: Dec 13 14:18:27.140453 kernel: /init Dec 13 14:18:27.140460 kernel: with environment: Dec 13 14:18:27.140467 kernel: HOME=/ Dec 13 14:18:27.140474 kernel: TERM=linux Dec 13 14:18:27.140481 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:18:27.140492 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:27.140503 systemd[1]: Detected virtualization kvm. Dec 13 14:18:27.140512 systemd[1]: Detected architecture x86-64. Dec 13 14:18:27.140520 systemd[1]: Running in initrd. Dec 13 14:18:27.140527 systemd[1]: No hostname configured, using default hostname. Dec 13 14:18:27.140535 systemd[1]: Hostname set to <localhost>.
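The kernel hands control to /init with only the minimal environment shown above. On a running system, the boot parameters and the PID 1 environment can be inspected from procfs, e.g.:

  $ cat /proc/cmdline
  $ sudo tr '\0' '\n' < /proc/1/environ

(Illustrative commands; reading /proc/1/environ requires root.)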
Dec 13 14:18:27.140543 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:27.140551 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:18:27.140559 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:27.140566 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:27.140576 systemd[1]: Reached target paths.target. Dec 13 14:18:27.140591 systemd[1]: Reached target slices.target. Dec 13 14:18:27.140600 systemd[1]: Reached target swap.target. Dec 13 14:18:27.140608 systemd[1]: Reached target timers.target. Dec 13 14:18:27.140616 systemd[1]: Listening on iscsid.socket. Dec 13 14:18:27.140625 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:18:27.140634 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:18:27.140642 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:18:27.140650 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:18:27.140658 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:18:27.140665 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:27.140673 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:27.140681 systemd[1]: Reached target sockets.target. Dec 13 14:18:27.140689 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:18:27.140698 systemd[1]: Finished network-cleanup.service. Dec 13 14:18:27.140706 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:18:27.140714 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:27.140722 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:27.140741 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:27.140749 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:18:27.140757 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:27.140765 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:18:27.140773 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:27.140787 systemd-journald[198]: Journal started Dec 13 14:18:27.140833 systemd-journald[198]: Runtime Journal (/run/log/journal/2ee12defbdff438aa2bdbb1befa10fba) is 6.0M, max 48.5M, 42.5M free. Dec 13 14:18:27.121455 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 14:18:27.164684 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:27.164704 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:18:27.140694 systemd-resolved[200]: Positive Trust Anchors: Dec 13 14:18:27.140704 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:27.168292 kernel: Bridge firewalling registered Dec 13 14:18:27.140752 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:27.143254 systemd-resolved[200]: Defaulting to hostname 'linux'. 
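journald is now up with a 6.0M runtime journal under /run. A view with microsecond timestamps like the one reproduced in this transcript can be obtained with journalctl, e.g.:

  $ journalctl -b -o short-precise    # current boot, microsecond-resolution timestamps
  $ journalctl --disk-usage           # reports journal size on disk

(Illustrative; exact behavior depends on the journalctl version shipped.)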
Dec 13 14:18:27.166498 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 14:18:27.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.180881 kernel: audit: type=1130 audit(1734099507.176:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.180909 systemd[1]: Started systemd-journald.service. Dec 13 14:18:27.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.182920 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:27.187907 kernel: audit: type=1130 audit(1734099507.182:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.188228 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:18:27.193207 kernel: audit: type=1130 audit(1734099507.187:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.193229 kernel: SCSI subsystem initialized Dec 13 14:18:27.193241 kernel: audit: type=1130 audit(1734099507.192:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.193329 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:27.198773 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:18:27.203232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:18:27.203255 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:18:27.204469 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:18:27.207150 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 14:18:27.211885 kernel: audit: type=1130 audit(1734099507.208:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.207908 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:27.208793 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:18:27.214280 systemd[1]: Finished dracut-cmdline-ask.service. 
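The bridge-filtering notice above matters for the Kubernetes payload staged later in this boot: with br_netfilter unloaded, bridged traffic bypasses iptables. A typical remedy, sketched here, is to load the module and enable the bridge sysctls:

  # modprobe br_netfilter
  # sysctl -w net.bridge.bridge-nf-call-iptables=1
  # sysctl -w net.bridge.bridge-nf-call-ip6tables=1

(Persisting these via /etc/modules-load.d/ and /etc/sysctl.d/ is the usual follow-up; that step does not appear in this log.)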
Dec 13 14:18:27.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.217071 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:18:27.220590 kernel: audit: type=1130 audit(1734099507.216:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.219447 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:27.224973 kernel: audit: type=1130 audit(1734099507.221:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.230720 dracut-cmdline[222]: dracut-dracut-053 Dec 13 14:18:27.232445 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:27.291895 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:18:27.313884 kernel: iscsi: registered transport (tcp) Dec 13 14:18:27.340893 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:18:27.340983 kernel: QLogic iSCSI HBA Driver Dec 13 14:18:27.372378 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:18:27.379267 kernel: audit: type=1130 audit(1734099507.373:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.374947 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:18:27.423888 kernel: raid6: avx2x4 gen() 29544 MB/s Dec 13 14:18:27.440886 kernel: raid6: avx2x4 xor() 7153 MB/s Dec 13 14:18:27.457874 kernel: raid6: avx2x2 gen() 31576 MB/s Dec 13 14:18:27.474886 kernel: raid6: avx2x2 xor() 18497 MB/s Dec 13 14:18:27.491880 kernel: raid6: avx2x1 gen() 24725 MB/s Dec 13 14:18:27.508874 kernel: raid6: avx2x1 xor() 14002 MB/s Dec 13 14:18:27.525870 kernel: raid6: sse2x4 gen() 14519 MB/s Dec 13 14:18:27.542887 kernel: raid6: sse2x4 xor() 6982 MB/s Dec 13 14:18:27.559870 kernel: raid6: sse2x2 gen() 14557 MB/s Dec 13 14:18:27.576874 kernel: raid6: sse2x2 xor() 9385 MB/s Dec 13 14:18:27.593884 kernel: raid6: sse2x1 gen() 11346 MB/s Dec 13 14:18:27.611324 kernel: raid6: sse2x1 xor() 7548 MB/s Dec 13 14:18:27.611385 kernel: raid6: using algorithm avx2x2 gen() 31576 MB/s Dec 13 14:18:27.611394 kernel: raid6: .... 
xor() 18497 MB/s, rmw enabled Dec 13 14:18:27.612050 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:18:27.624873 kernel: xor: automatically using best checksumming function avx Dec 13 14:18:27.716878 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:18:27.725087 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:18:27.730119 kernel: audit: type=1130 audit(1734099507.724:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.729000 audit: BPF prog-id=7 op=LOAD Dec 13 14:18:27.729000 audit: BPF prog-id=8 op=LOAD Dec 13 14:18:27.730591 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:27.742703 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 14:18:27.747491 systemd[1]: Started systemd-udevd.service. Dec 13 14:18:27.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.748299 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:18:27.759665 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Dec 13 14:18:27.784871 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:18:27.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.786311 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:27.827051 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:27.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:27.852388 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:18:27.858435 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:18:27.858452 kernel: GPT:9289727 != 19775487 Dec 13 14:18:27.858461 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:18:27.858470 kernel: GPT:9289727 != 19775487 Dec 13 14:18:27.858478 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:18:27.858486 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:27.859859 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:18:27.871114 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:18:27.871156 kernel: AES CTR mode by8 optimization enabled Dec 13 14:18:27.871866 kernel: libata version 3.00 loaded. 
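The GPT complaints above are expected on a freshly provisioned VM: the backup GPT header still sits at sector 9289727, where the original disk image ended, rather than at the last sector of the 10.1 GB virtual disk. If one wanted to repair this by hand rather than let first-boot provisioning reconcile it, a sketch:

  # sgdisk -e /dev/vda    # move the backup GPT structures to the true end of the disk
  # partprobe /dev/vda    # re-read the partition table

(Illustrative only; Flatcar normally handles the resize itself during first boot.)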
Dec 13 14:18:27.880621 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:18:27.909609 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:18:27.909626 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:18:27.909728 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:18:27.909808 kernel: scsi host0: ahci Dec 13 14:18:27.909920 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (499) Dec 13 14:18:27.909930 kernel: scsi host1: ahci Dec 13 14:18:27.910014 kernel: scsi host2: ahci Dec 13 14:18:27.910111 kernel: scsi host3: ahci Dec 13 14:18:27.910195 kernel: scsi host4: ahci Dec 13 14:18:27.910276 kernel: scsi host5: ahci Dec 13 14:18:27.910356 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 14:18:27.910366 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 14:18:27.910377 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 14:18:27.910385 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 14:18:27.910394 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 14:18:27.910403 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 14:18:27.901454 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:18:27.940280 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:18:27.941522 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:18:27.954133 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:18:27.959803 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:27.962096 systemd[1]: Starting disk-uuid.service... Dec 13 14:18:27.972378 disk-uuid[536]: Primary Header is updated. Dec 13 14:18:27.972378 disk-uuid[536]: Secondary Entries is updated. Dec 13 14:18:27.972378 disk-uuid[536]: Secondary Header is updated. Dec 13 14:18:27.975865 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:27.978868 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:28.214888 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:28.223274 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:28.223297 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:28.223862 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:28.224899 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:28.225890 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:18:28.227242 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:18:28.227268 kernel: ata3.00: applying bridge limits Dec 13 14:18:28.228879 kernel: ata3.00: configured for UDMA/100 Dec 13 14:18:28.230977 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:18:28.260168 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:18:28.277665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:18:28.277682 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:18:28.981828 disk-uuid[537]: The operation has completed successfully. Dec 13 14:18:28.983052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:29.006502 systemd[1]: disk-uuid.service: Deactivated successfully. 
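systemd has now resolved each device unit it needs by GPT label, partition label, or partition UUID. The same mapping can be viewed on a running system with lsblk, e.g.:

  $ lsblk -o NAME,SIZE,LABEL,PARTLABEL,PARTUUID /dev/vda

(Standard lsblk columns; the ROOT, OEM, EFI-SYSTEM, and USR-A names correspond to the dev-disk-by\x2dlabel and by\x2dpartlabel units found above.)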
Dec 13 14:18:29.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.006589 systemd[1]: Finished disk-uuid.service. Dec 13 14:18:29.010514 systemd[1]: Starting verity-setup.service... Dec 13 14:18:29.024870 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:18:29.045778 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:18:29.047911 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:18:29.049872 systemd[1]: Finished verity-setup.service. Dec 13 14:18:29.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.121871 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:18:29.122192 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:18:29.122502 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:18:29.123323 systemd[1]: Starting ignition-setup.service... Dec 13 14:18:29.125311 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:18:29.135704 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:29.135734 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:29.135747 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:29.143630 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:18:29.153811 systemd[1]: Finished ignition-setup.service. Dec 13 14:18:29.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.156338 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:18:29.199710 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:18:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.200000 audit: BPF prog-id=9 op=LOAD Dec 13 14:18:29.202138 systemd[1]: Starting systemd-networkd.service... 
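verity-setup has assembled /dev/mapper/usr against the root hash passed as verity.usrhash= on the kernel command line, which is why /usr can be mounted read-only with block-level integrity checking. On a booted system the mapping can be examined with the device-mapper tools, e.g.:

  # veritysetup status usr
  # dmsetup table usr

(Sketch only; the target name usr matches the /dev/mapper/usr device in this log.)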
Dec 13 14:18:29.229806 ignition[645]: Ignition 2.14.0 Dec 13 14:18:29.229818 ignition[645]: Stage: fetch-offline Dec 13 14:18:29.229966 ignition[645]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:29.229989 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:29.230278 ignition[645]: parsed url from cmdline: "" Dec 13 14:18:29.230284 ignition[645]: no config URL provided Dec 13 14:18:29.230289 ignition[645]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:18:29.230296 ignition[645]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:18:29.230337 ignition[645]: op(1): [started] loading QEMU firmware config module Dec 13 14:18:29.230348 ignition[645]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:18:29.239345 ignition[645]: op(1): [finished] loading QEMU firmware config module Dec 13 14:18:29.244416 systemd-networkd[716]: lo: Link UP Dec 13 14:18:29.244669 systemd-networkd[716]: lo: Gained carrier Dec 13 14:18:29.245585 systemd-networkd[716]: Enumeration completed Dec 13 14:18:29.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.245706 systemd[1]: Started systemd-networkd.service. Dec 13 14:18:29.247072 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:29.247446 systemd[1]: Reached target network.target. Dec 13 14:18:29.250304 systemd[1]: Starting iscsiuio.service... Dec 13 14:18:29.251918 systemd-networkd[716]: eth0: Link UP Dec 13 14:18:29.251922 systemd-networkd[716]: eth0: Gained carrier Dec 13 14:18:29.287454 ignition[645]: parsing config with SHA512: f8192fd8a9b338788170a0697f6937b0e4f45d66e1014405d98b348a8078811889d1ec9e827c400a50435a4fac05a3a400bc32e29d1f47d08425fb68aa06646a Dec 13 14:18:29.287610 systemd[1]: Started iscsiuio.service. Dec 13 14:18:29.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.288907 systemd[1]: Starting iscsid.service... Dec 13 14:18:29.294736 iscsid[722]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:29.294736 iscsid[722]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:18:29.294736 iscsid[722]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:18:29.294736 iscsid[722]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:18:29.294736 iscsid[722]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:29.294736 iscsid[722]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:18:29.297816 systemd[1]: Started iscsid.service.
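The iscsid warnings above are benign on a node that makes no iSCSI connections, but the fix they ask for is a one-line file. A minimal sketch, using a made-up IQN:

  # cat > /etc/iscsi/initiatorname.iscsi <<'EOF'
  InitiatorName=iqn.2024-12.io.example:node1
  EOF

(iqn.2024-12.io.example:node1 is a hypothetical identifier following the iqn.yyyy-mm.<reversed domain name>[:identifier] format iscsid describes.)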
Dec 13 14:18:29.305982 systemd-networkd[716]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:29.306176 unknown[645]: fetched base config from "system" Dec 13 14:18:29.306183 unknown[645]: fetched user config from "qemu" Dec 13 14:18:29.306721 ignition[645]: fetch-offline: fetch-offline passed Dec 13 14:18:29.306793 ignition[645]: Ignition finished successfully Dec 13 14:18:29.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.314828 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:18:29.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.318144 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:18:29.319871 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:18:29.322164 systemd[1]: Starting ignition-kargs.service... Dec 13 14:18:29.330484 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:18:29.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.330650 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:18:29.331103 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:29.331456 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:29.332975 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:18:29.388016 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:18:29.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.392873 ignition[725]: Ignition 2.14.0 Dec 13 14:18:29.392885 ignition[725]: Stage: kargs Dec 13 14:18:29.393006 ignition[725]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:29.393017 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:29.393856 ignition[725]: kargs: kargs passed Dec 13 14:18:29.393892 ignition[725]: Ignition finished successfully Dec 13 14:18:29.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.396377 systemd[1]: Finished ignition-kargs.service. Dec 13 14:18:29.397658 systemd[1]: Starting ignition-disks.service... Dec 13 14:18:29.404417 ignition[744]: Ignition 2.14.0 Dec 13 14:18:29.404427 ignition[744]: Stage: disks Dec 13 14:18:29.404549 ignition[744]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:29.404564 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:29.405910 ignition[744]: disks: disks passed Dec 13 14:18:29.405949 ignition[744]: Ignition finished successfully Dec 13 14:18:29.409615 systemd[1]: Finished ignition-disks.service. Dec 13 14:18:29.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:29.409741 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:18:29.411959 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:29.413498 systemd[1]: Reached target local-fs.target. Dec 13 14:18:29.414949 systemd[1]: Reached target sysinit.target. Dec 13 14:18:29.416329 systemd[1]: Reached target basic.target. Dec 13 14:18:29.416988 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:18:29.429779 systemd-fsck[752]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:18:29.436442 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:18:29.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.437140 systemd[1]: Mounting sysroot.mount... Dec 13 14:18:29.464867 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:18:29.465433 systemd[1]: Mounted sysroot.mount. Dec 13 14:18:29.465546 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:18:29.468736 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:18:29.469138 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:18:29.469180 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:18:29.469201 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:18:29.471665 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:18:29.474043 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:18:29.479363 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:18:29.483605 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:18:29.486962 initrd-setup-root[778]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:18:29.490551 initrd-setup-root[786]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:18:29.516378 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:18:29.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.518197 systemd[1]: Starting ignition-mount.service... Dec 13 14:18:29.519580 systemd[1]: Starting sysroot-boot.service... Dec 13 14:18:29.524122 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:18:29.562311 systemd[1]: Finished sysroot-boot.service. Dec 13 14:18:29.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:29.563951 ignition[805]: INFO : Ignition 2.14.0 Dec 13 14:18:29.563951 ignition[805]: INFO : Stage: mount Dec 13 14:18:29.563951 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:29.563951 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:29.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:29.568423 ignition[805]: INFO : mount: mount passed Dec 13 14:18:29.568423 ignition[805]: INFO : Ignition finished successfully Dec 13 14:18:29.565110 systemd[1]: Finished ignition-mount.service. Dec 13 14:18:30.056822 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:18:30.064870 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (813) Dec 13 14:18:30.064903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:30.066279 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:30.066299 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:30.069881 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:18:30.070535 systemd[1]: Starting ignition-files.service... Dec 13 14:18:30.085641 ignition[833]: INFO : Ignition 2.14.0 Dec 13 14:18:30.085641 ignition[833]: INFO : Stage: files Dec 13 14:18:30.087450 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:30.087450 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:30.090760 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:18:30.093063 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:18:30.093063 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:18:30.097580 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:18:30.099253 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:18:30.101410 unknown[833]: wrote ssh authorized keys file for user: core Dec 13 14:18:30.102577 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:18:30.104469 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:30.106727 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:18:30.156641 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 14:18:30.339391 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:30.341559 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:18:30.341559 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:18:30.392040 systemd-networkd[716]: eth0: Gained IPv6LL Dec 13 14:18:30.820603 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:18:30.948054 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:18:30.948054 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:30.952104 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:18:31.244161 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 14:18:32.030095 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:32.030095 ignition[833]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:32.034382 
ignition[833]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:32.034382 ignition[833]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:32.118483 ignition[833]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:32.120484 ignition[833]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:32.120484 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:32.120484 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:32.120484 ignition[833]: INFO : files: files passed Dec 13 14:18:32.120484 ignition[833]: INFO : Ignition finished successfully Dec 13 14:18:32.141020 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 14:18:32.141049 kernel: audit: type=1130 audit(1734099512.122:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.141067 kernel: audit: type=1130 audit(1734099512.134:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.141081 kernel: audit: type=1130 audit(1734099512.140:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.120918 systemd[1]: Finished ignition-files.service. Dec 13 14:18:32.149637 kernel: audit: type=1131 audit(1734099512.140:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.124277 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:18:32.151072 initrd-setup-root-after-ignition[856]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:18:32.130781 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
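The Ignition files stage above writes exactly the payload a user config describes: downloaded archives, small files, systemd units, and enable/disable presets. A minimal Butane sketch that would produce entries of this shape (the paths and URLs are the ones in the log; the overall structure and the variant/version header are assumptions, since the log never shows the config itself):

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true      # yields the "setting preset to enabled" entry
        - name: coreos-metadata.service
          enabled: false     # yields the "setting preset to disabled" entry

Transpiled with butane and handed to the VM through QEMU's fw_cfg (the provider named in the earlier "fetched user config from \"qemu\"" entry), e.g. `-fw_cfg name=opt/com.coreos/config,file=config.ign`.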
Dec 13 14:18:32.155350 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:18:32.131565 systemd[1]: Starting ignition-quench.service... Dec 13 14:18:32.133945 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:18:32.135992 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:18:32.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.136050 systemd[1]: Finished ignition-quench.service. Dec 13 14:18:32.170270 kernel: audit: type=1130 audit(1734099512.162:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.171150 kernel: audit: type=1131 audit(1734099512.162:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.141111 systemd[1]: Reached target ignition-complete.target. Dec 13 14:18:32.148430 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:18:32.160119 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:18:32.160194 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:18:32.162142 systemd[1]: Reached target initrd-fs.target. Dec 13 14:18:32.170264 systemd[1]: Reached target initrd.target. Dec 13 14:18:32.171189 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.171907 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:18:32.181323 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:18:32.187046 kernel: audit: type=1130 audit(1734099512.182:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.183124 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:18:32.191464 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:18:32.192500 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:18:32.194412 systemd[1]: Stopped target timers.target. Dec 13 14:18:32.196236 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:18:32.202936 kernel: audit: type=1131 audit(1734099512.197:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.196330 systemd[1]: Stopped dracut-pre-pivot.service. 
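The SERVICE_START/SERVICE_STOP lines interleaved with the systemd messages are kernel audit records rather than journal prose, which is why they arrive slightly out of timestamp order. They can be queried on their own after boot; the ausearch variant assumes the audit userspace tools are installed, which is not a given on a minimal image:

    journalctl -b _TRANSPORT=audit              # only this boot's audit records
    ausearch -m SERVICE_START,SERVICE_STOP -i   # same records, decoded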
Dec 13 14:18:32.243218 kernel: audit: type=1131 audit(1734099512.202:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.243246 kernel: audit: type=1131 audit(1734099512.208:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:32.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.243694 iscsid[722]: iscsid shutting down. Dec 13 14:18:32.198139 systemd[1]: Stopped target initrd.target. Dec 13 14:18:32.245821 ignition[873]: INFO : Ignition 2.14.0 Dec 13 14:18:32.245821 ignition[873]: INFO : Stage: umount Dec 13 14:18:32.245821 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:32.245821 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:32.245821 ignition[873]: INFO : umount: umount passed Dec 13 14:18:32.245821 ignition[873]: INFO : Ignition finished successfully Dec 13 14:18:32.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.203011 systemd[1]: Stopped target basic.target. Dec 13 14:18:32.257000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:18:32.203119 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:18:32.203340 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:18:32.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.203537 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:18:32.203769 systemd[1]: Stopped target remote-fs.target. Dec 13 14:18:32.204171 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:18:32.204391 systemd[1]: Stopped target sysinit.target. Dec 13 14:18:32.204585 systemd[1]: Stopped target local-fs.target. Dec 13 14:18:32.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.204797 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:18:32.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.205207 systemd[1]: Stopped target swap.target. Dec 13 14:18:32.205391 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:18:32.205477 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:18:32.205727 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:18:32.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:32.209106 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:18:32.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.209189 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:18:32.209350 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:18:32.209447 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:18:32.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.212906 systemd[1]: Stopped target paths.target. Dec 13 14:18:32.212981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:18:32.214927 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:18:32.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.215365 systemd[1]: Stopped target slices.target. Dec 13 14:18:32.215555 systemd[1]: Stopped target sockets.target. Dec 13 14:18:32.215797 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:18:32.215926 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:18:32.216225 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:18:32.216317 systemd[1]: Stopped ignition-files.service. Dec 13 14:18:32.217497 systemd[1]: Stopping ignition-mount.service... Dec 13 14:18:32.218025 systemd[1]: Stopping iscsid.service... Dec 13 14:18:32.219255 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:18:32.219710 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:18:32.219969 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:18:32.220533 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:18:32.220767 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:18:32.224662 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:18:32.224778 systemd[1]: Stopped iscsid.service. Dec 13 14:18:32.225607 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:18:32.225686 systemd[1]: Closed iscsid.socket. Dec 13 14:18:32.228946 systemd[1]: Stopping iscsiuio.service... Dec 13 14:18:32.229186 systemd[1]: iscsiuio.service: Deactivated successfully. 
Dec 13 14:18:32.229292 systemd[1]: Stopped iscsiuio.service. Dec 13 14:18:32.229556 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:18:32.229614 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:18:32.230611 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:18:32.230682 systemd[1]: Stopped ignition-mount.service. Dec 13 14:18:32.231136 systemd[1]: Stopped target network.target. Dec 13 14:18:32.231290 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:18:32.231314 systemd[1]: Closed iscsiuio.socket. Dec 13 14:18:32.231500 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:18:32.231529 systemd[1]: Stopped ignition-disks.service. Dec 13 14:18:32.231824 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:18:32.231879 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:18:32.232021 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:18:32.232051 systemd[1]: Stopped ignition-setup.service. Dec 13 14:18:32.232276 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:18:32.232460 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:18:32.237575 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:18:32.245936 systemd-networkd[716]: eth0: DHCPv6 lease lost Dec 13 14:18:32.330000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:18:32.247221 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:18:32.247295 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:18:32.251970 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:18:32.252036 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:18:32.255290 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:18:32.255313 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:18:32.257663 systemd[1]: Stopping network-cleanup.service... Dec 13 14:18:32.259254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:18:32.259292 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:18:32.260362 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:18:32.260393 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:18:32.262819 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:18:32.262894 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:18:32.264014 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:18:32.267149 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:18:32.269731 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:18:32.269809 systemd[1]: Stopped network-cleanup.service. Dec 13 14:18:32.271721 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:18:32.271813 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:18:32.274755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:18:32.274782 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:18:32.277023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:18:32.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.277047 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:18:32.279067 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Dec 13 14:18:32.279097 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:18:32.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.281046 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:18:32.281076 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:18:32.282002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:18:32.282029 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:18:32.284708 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:18:32.286264 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:18:32.286305 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:18:32.288813 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:18:32.288896 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:18:32.290898 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:18:32.290930 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:18:32.292822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:18:32.293252 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:18:32.293322 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:18:32.353002 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:18:32.353094 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:18:32.354705 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:18:32.356447 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:18:32.356483 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:18:32.358908 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:18:32.376341 systemd[1]: Switching root. Dec 13 14:18:32.398890 systemd-journald[198]: Journal stopped Dec 13 14:18:36.517899 systemd-journald[198]: Received SIGTERM from PID 1 (n/a). Dec 13 14:18:36.517960 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:18:36.517973 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:18:36.518077 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:18:36.518086 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:18:36.518100 kernel: SELinux: policy capability open_perms=1 Dec 13 14:18:36.518110 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:18:36.518122 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:18:36.518132 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:18:36.518143 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:18:36.518151 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:18:36.518162 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:18:36.518173 systemd[1]: Successfully loaded SELinux policy in 42.053ms. Dec 13 14:18:36.518192 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.714ms. 
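The SELinux messages above come from loading the real root's policy just before the pivot: classes the policy does not define (mctp_socket, anon_inode) are allowed rather than refused because the policy was built to permit unknown classes. The resulting state can be confirmed from the booted system with the standard tooling, assuming it is shipped in the image:

    getenforce      # Enforcing / Permissive / Disabled
    sestatus        # policy name, mode, and the capability flags logged above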
Dec 13 14:18:36.518204 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:36.518214 systemd[1]: Detected virtualization kvm. Dec 13 14:18:36.518225 systemd[1]: Detected architecture x86-64. Dec 13 14:18:36.518237 systemd[1]: Detected first boot. Dec 13 14:18:36.518247 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:36.518257 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:18:36.518266 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:18:36.518277 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:36.518288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:36.518299 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:36.518311 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:18:36.518321 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:18:36.518331 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:18:36.518341 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:18:36.518354 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:18:36.518364 systemd[1]: Created slice system-getty.slice. Dec 13 14:18:36.518375 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:18:36.518386 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:18:36.518398 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:18:36.518408 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:18:36.518418 systemd[1]: Created slice user.slice. Dec 13 14:18:36.518427 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:36.518437 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:18:36.518447 systemd[1]: Set up automount boot.automount. Dec 13 14:18:36.518457 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:18:36.518467 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:18:36.518477 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:18:36.518488 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:18:36.518498 systemd[1]: Reached target integritysetup.target. Dec 13 14:18:36.518508 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:36.518517 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:36.518527 systemd[1]: Reached target slices.target. Dec 13 14:18:36.518545 systemd[1]: Reached target swap.target. Dec 13 14:18:36.518556 systemd[1]: Reached target torcx.target. Dec 13 14:18:36.518566 systemd[1]: Reached target veritysetup.target. Dec 13 14:18:36.518576 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:18:36.518587 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:18:36.518601 systemd[1]: Listening on systemd-networkd.socket. 
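The two locksmithd.service warnings are systemd 252 flagging cgroup-v1-era directives. The usual fix is a drop-in carrying the modern equivalents; the values below are illustrative, since the log does not show what the unit originally set:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    # (create with: systemctl edit locksmithd.service)
    [Service]
    CPUWeight=100      # replaces CPUShares= on line 8 of the unit
    MemoryMax=512M     # replaces MemoryLimit= on line 9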
Dec 13 14:18:36.518611 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:36.518621 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:36.518631 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:18:36.518640 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:18:36.518651 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:18:36.518660 systemd[1]: Mounting media.mount... Dec 13 14:18:36.518670 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:36.518682 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:18:36.518692 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:18:36.518701 systemd[1]: Mounting tmp.mount... Dec 13 14:18:36.518711 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:18:36.518721 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:36.518731 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:18:36.518741 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:18:36.518750 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:36.518760 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:36.518771 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:36.518781 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:18:36.518791 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:36.518801 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:18:36.518813 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:18:36.518823 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:18:36.518832 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:18:36.518842 kernel: loop: module loaded Dec 13 14:18:36.518870 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:18:36.518882 systemd[1]: Stopped systemd-journald.service. Dec 13 14:18:36.518893 kernel: fuse: init (API version 7.34) Dec 13 14:18:36.518902 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:36.518912 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:36.518922 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:18:36.518932 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:18:36.518942 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:36.518953 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:18:36.518964 systemd[1]: Stopped verity-setup.service. Dec 13 14:18:36.518978 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:36.518990 systemd-journald[993]: Journal started Dec 13 14:18:36.519028 systemd-journald[993]: Runtime Journal (/run/log/journal/2ee12defbdff438aa2bdbb1befa10fba) is 6.0M, max 48.5M, 42.5M free. 
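The modprobe@ services being started above are instances of a single template unit: everything after the '@' is the module name handed to modprobe. Roughly, each instantiation amounts to the following (the template's exact ExecStart flags vary between systemd versions, so this is a simplification):

    systemctl start modprobe@loop.service   # ~ modprobe loop
    systemctl cat modprobe@.service         # the shared template behind all of them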
Dec 13 14:18:32.462000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:18:32.857000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:18:32.857000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:18:32.857000 audit: BPF prog-id=10 op=LOAD Dec 13 14:18:32.857000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:18:32.857000 audit: BPF prog-id=11 op=LOAD Dec 13 14:18:32.857000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:18:32.928000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:18:32.928000 audit[907]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558b2 a1=c0000d8de0 a2=c0000e10c0 a3=32 items=0 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:32.928000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:18:32.930000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:18:32.930000 audit[907]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155989 a2=1ed a3=0 items=2 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:32.930000 audit: CWD cwd="/" Dec 13 14:18:32.930000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:32.930000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:32.930000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:18:36.379000 audit: BPF prog-id=12 op=LOAD Dec 13 14:18:36.379000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:18:36.379000 audit: BPF prog-id=13 op=LOAD Dec 13 14:18:36.379000 audit: BPF prog-id=14 op=LOAD Dec 13 14:18:36.379000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:18:36.379000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:18:36.380000 audit: BPF prog-id=15 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=12 op=UNLOAD Dec 13 
14:18:36.380000 audit: BPF prog-id=16 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=17 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:18:36.380000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:18:36.380000 audit: BPF prog-id=18 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:18:36.380000 audit: BPF prog-id=19 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=20 op=LOAD Dec 13 14:18:36.380000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:18:36.380000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:18:36.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.392000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:18:36.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.497000 audit: BPF prog-id=21 op=LOAD Dec 13 14:18:36.498000 audit: BPF prog-id=22 op=LOAD Dec 13 14:18:36.498000 audit: BPF prog-id=23 op=LOAD Dec 13 14:18:36.498000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:18:36.498000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:18:36.515000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:18:36.515000 audit[993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff59025550 a2=4000 a3=7fff590255ec items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:36.515000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:18:36.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.378333 systemd[1]: Queued start job for default target multi-user.target. 
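The dense run of "BPF prog-id=N op=LOAD/UNLOAD" records is systemd detaching and re-attaching its per-cgroup BPF programs (device filters, socket filters, IP accounting) as units restart across the root switch. If bpftool is available, the programs alive at any moment can be inspected:

    bpftool prog show      # loaded programs and their ids
    bpftool cgroup tree    # which cgroups have programs attached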
Dec 13 14:18:32.928266 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:36.378345 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:18:32.928527 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:18:36.382082 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:18:32.928543 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:18:32.928570 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:18:32.928579 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:18:32.928618 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:18:32.928630 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:18:32.928822 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:18:32.928868 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:18:32.928880 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:18:32.929215 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:18:32.929244 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:18:32.929260 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:18:32.929273 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:18:32.929287 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" 
path=/var/lib/torcx/store/3510.3.6 Dec 13 14:18:32.929299 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:32Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:18:36.019785 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:36.020096 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:36.020216 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:36.020387 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:18:36.020438 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:18:36.020503 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-12-13T14:18:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:18:36.523872 systemd[1]: Started systemd-journald.service. Dec 13 14:18:36.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.524379 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:18:36.525200 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:18:36.526045 systemd[1]: Mounted media.mount. Dec 13 14:18:36.526911 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:18:36.527785 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:18:36.528751 systemd[1]: Mounted tmp.mount. Dec 13 14:18:36.529705 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:18:36.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.531012 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:36.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:36.532147 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:18:36.532275 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:18:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.533366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:36.533486 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:36.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.534502 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:36.534643 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:36.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.535733 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:36.535874 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:36.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.537122 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:18:36.537240 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:18:36.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.538213 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:36.538334 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:36.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:36.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.539403 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:36.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.540549 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:18:36.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.541713 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:18:36.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.543010 systemd[1]: Reached target network-pre.target. Dec 13 14:18:36.545107 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:18:36.547033 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:18:36.547811 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:18:36.549409 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:18:36.565059 systemd-journald[993]: Time spent on flushing to /var/log/journal/2ee12defbdff438aa2bdbb1befa10fba is 22.805ms for 1105 entries. Dec 13 14:18:36.565059 systemd-journald[993]: System Journal (/var/log/journal/2ee12defbdff438aa2bdbb1befa10fba) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:18:36.610703 systemd-journald[993]: Received client request to flush runtime journal. Dec 13 14:18:36.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.551272 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:18:36.552271 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:36.553410 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:18:36.554344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:36.555620 systemd[1]: Starting systemd-sysctl.service... 
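The flush statistics above mark the hand-off from the volatile runtime journal under /run to the persistent one under /var/log/journal; the 48.5M and 195.6M figures are the size caps journald computed for each. Both the hand-off and the caps can be checked or tuned later:

    journalctl --disk-usage     # current journal footprint
    journalctl --flush          # request the /run -> /var flush explicitly
    # an explicit cap instead of the computed 195.6M, in journald.conf:
    #   [Journal]
    #   SystemMaxUse=200M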
Dec 13 14:18:36.570354 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:18:36.612311 udevadm[1014]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:18:36.574489 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:18:36.575599 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:18:36.585365 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:18:36.586444 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:18:36.594307 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:36.595705 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:36.596826 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:18:36.598796 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:36.601152 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:18:36.611685 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:18:36.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.619739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:36.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.278832 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:18:37.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.283865 kernel: kauditd_printk_skb: 107 callbacks suppressed Dec 13 14:18:37.283943 kernel: audit: type=1130 audit(1734099517.278:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.283973 kernel: audit: type=1334 audit(1734099517.283:143): prog-id=24 op=LOAD Dec 13 14:18:37.283000 audit: BPF prog-id=24 op=LOAD Dec 13 14:18:37.284000 audit: BPF prog-id=25 op=LOAD Dec 13 14:18:37.285609 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:37.285911 kernel: audit: type=1334 audit(1734099517.284:144): prog-id=25 op=LOAD Dec 13 14:18:37.285939 kernel: audit: type=1334 audit(1734099517.284:145): prog-id=7 op=UNLOAD Dec 13 14:18:37.286010 kernel: audit: type=1334 audit(1734099517.284:146): prog-id=8 op=UNLOAD Dec 13 14:18:37.284000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:18:37.284000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:18:37.304330 systemd-udevd[1016]: Using default interface naming scheme 'v252'. Dec 13 14:18:37.318273 systemd[1]: Started systemd-udevd.service. Dec 13 14:18:37.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.322000 audit: BPF prog-id=26 op=LOAD Dec 13 14:18:37.323828 systemd[1]: Starting systemd-networkd.service... 
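[Annotation] The kernel audit records interleaved above carry their own timestamps in the form audit(EPOCH.MILLIS:SERIAL), e.g. "audit(1734099517.278:142)". Decoding the epoch part confirms it matches the journal's wall-clock prefix on the same entries (Dec 13 14:18:37 UTC):

    # Decode the epoch-based timestamp embedded in a kernel audit record.
    from datetime import datetime, timezone

    stamp = "1734099517.278:142"
    epoch, serial = stamp.rsplit(":", 1)
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(when.isoformat(), "serial", serial)
    # 2024-12-13T14:18:37.278000+00:00 serial 142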
Dec 13 14:18:37.325019 kernel: audit: type=1130 audit(1734099517.318:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.325065 kernel: audit: type=1334 audit(1734099517.322:148): prog-id=26 op=LOAD Dec 13 14:18:37.329000 audit: BPF prog-id=27 op=LOAD Dec 13 14:18:37.331000 audit: BPF prog-id=28 op=LOAD Dec 13 14:18:37.333208 kernel: audit: type=1334 audit(1734099517.329:149): prog-id=27 op=LOAD Dec 13 14:18:37.333257 kernel: audit: type=1334 audit(1734099517.331:150): prog-id=28 op=LOAD Dec 13 14:18:37.333284 kernel: audit: type=1334 audit(1734099517.332:151): prog-id=29 op=LOAD Dec 13 14:18:37.332000 audit: BPF prog-id=29 op=LOAD Dec 13 14:18:37.333738 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:18:37.350220 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:18:37.360054 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:37.363433 systemd[1]: Started systemd-userdbd.service. Dec 13 14:18:37.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.392878 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:18:37.401874 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:18:37.411794 systemd-networkd[1026]: lo: Link UP Dec 13 14:18:37.411808 systemd-networkd[1026]: lo: Gained carrier Dec 13 14:18:37.412265 systemd-networkd[1026]: Enumeration completed Dec 13 14:18:37.412379 systemd[1]: Started systemd-networkd.service. Dec 13 14:18:37.412403 systemd-networkd[1026]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:37.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:37.414361 systemd-networkd[1026]: eth0: Link UP Dec 13 14:18:37.414372 systemd-networkd[1026]: eth0: Gained carrier Dec 13 14:18:37.410000 audit[1017]: AVC avc: denied { confidentiality } for pid=1017 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:18:37.424960 systemd-networkd[1026]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:37.410000 audit[1017]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5641cae50810 a1=337fc a2=7ff561b7dbc5 a3=5 items=110 ppid=1016 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.410000 audit: CWD cwd="/" Dec 13 14:18:37.410000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=1 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=2 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=3 name=(null) inode=15527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=4 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=5 name=(null) inode=15528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=6 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=7 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=8 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=9 name=(null) inode=15530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=10 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=11 name=(null) inode=15531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=12 name=(null) 
inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=13 name=(null) inode=15532 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=14 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.440378 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:18:37.442258 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:18:37.442429 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:18:37.410000 audit: PATH item=15 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=16 name=(null) inode=15529 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=17 name=(null) inode=15534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=18 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=19 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=20 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=21 name=(null) inode=15536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=22 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=23 name=(null) inode=15537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=24 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=25 name=(null) inode=15538 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=26 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=27 name=(null) inode=15539 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=28 name=(null) inode=15535 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=29 name=(null) inode=15540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=30 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=31 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=32 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=33 name=(null) inode=15542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=34 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=35 name=(null) inode=15543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=36 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=37 name=(null) inode=15544 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=38 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=39 name=(null) inode=15545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=40 name=(null) inode=15541 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=41 name=(null) inode=15546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=42 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=43 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=44 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=45 name=(null) inode=15548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=46 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=47 name=(null) inode=15549 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=48 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=49 name=(null) inode=15550 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=50 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=51 name=(null) inode=15551 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=52 name=(null) inode=15547 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=53 name=(null) inode=15552 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=55 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=56 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=57 name=(null) inode=15554 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=58 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=59 name=(null) inode=15555 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:18:37.410000 audit: PATH item=60 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=61 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=62 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=63 name=(null) inode=15557 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=64 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=65 name=(null) inode=15558 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=66 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=67 name=(null) inode=15559 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=68 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=69 name=(null) inode=15560 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=70 name=(null) inode=15556 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=71 name=(null) inode=15561 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=72 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=73 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=74 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=75 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=76 name=(null) inode=15562 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=77 name=(null) inode=15564 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=78 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=79 name=(null) inode=15565 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=80 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=81 name=(null) inode=15566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=82 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=83 name=(null) inode=15567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=84 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=85 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=86 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=87 name=(null) inode=15569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=88 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=89 name=(null) inode=15570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=90 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=91 name=(null) inode=15571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=92 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=93 name=(null) inode=15572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=94 name=(null) inode=15568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=95 name=(null) inode=15573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=96 name=(null) inode=15553 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=97 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=98 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=99 name=(null) inode=15575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=100 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=101 name=(null) inode=15576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=102 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=103 name=(null) inode=15577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=104 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=105 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=106 name=(null) inode=15574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=107 name=(null) inode=15579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Dec 13 14:18:37.410000 audit: PATH item=109 name=(null) inode=15580 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:37.410000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:18:37.450872 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:18:37.454874 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:18:37.559095 kernel: kvm: Nested Virtualization enabled Dec 13 14:18:37.559224 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:18:37.560780 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:18:37.560822 kernel: SVM: Virtual GIF supported Dec 13 14:18:37.579868 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:18:37.605315 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:18:37.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.607604 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:18:37.615916 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:37.643061 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:18:37.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.644221 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:37.646080 systemd[1]: Starting lvm2-activation.service... Dec 13 14:18:37.649678 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:37.694030 systemd[1]: Finished lvm2-activation.service. Dec 13 14:18:37.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.695078 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:37.696005 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:18:37.696039 systemd[1]: Reached target local-fs.target. Dec 13 14:18:37.696969 systemd[1]: Reached target machines.target. Dec 13 14:18:37.699115 systemd[1]: Starting ldconfig.service... Dec 13 14:18:37.700278 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:37.700323 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:37.701351 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:18:37.703711 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:18:37.706384 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:18:37.708589 systemd[1]: Starting systemd-sysext.service... Dec 13 14:18:37.709793 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Dec 13 14:18:37.711250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:18:37.716876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
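[Annotation] The unit name "systemd-fsck@dev-disk-by\x2dlabel-OEM.service" just above is the systemd-escaped form of the device path /dev/disk/by-label/OEM: "-" stands in for "/" between path components, and "\xNN" escapes literal bytes (here \x2d, the real hyphen in "by-label"). A minimal unescaping sketch of that convention; the real conversion is done by the systemd-escape tool:

    # Minimal sketch of systemd unit-instance unescaping back to a path.
    def unit_instance_to_path(instance: str) -> str:
        parts = []
        for comp in instance.split("-"):       # "-" separates components
            out, i = "", 0
            while i < len(comp):
                # "\xNN" encodes one literal byte inside a component
                if comp.startswith("\\x", i) and i + 4 <= len(comp):
                    out += chr(int(comp[i + 2:i + 4], 16))
                    i += 4
                else:
                    out += comp[i]
                    i += 1
            parts.append(out)
        return "/" + "/".join(parts)

    print(unit_instance_to_path(r"dev-disk-by\x2dlabel-OEM"))
    # /dev/disk/by-label/OEM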
Dec 13 14:18:37.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.720049 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:18:37.724380 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:18:37.724562 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:18:37.734884 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:18:37.756341 systemd-fsck[1062]: fsck.fat 4.2 (2021-01-31) Dec 13 14:18:37.756341 systemd-fsck[1062]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:18:37.757729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:18:37.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.761250 systemd[1]: Mounting boot.mount... Dec 13 14:18:37.882142 systemd[1]: Mounted boot.mount. Dec 13 14:18:37.893881 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:18:37.894640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:18:37.895678 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:18:37.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.897410 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:18:37.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.909873 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:18:37.915567 (sd-sysext)[1067]: Using extensions 'kubernetes'. Dec 13 14:18:37.915983 (sd-sysext)[1067]: Merged extensions into '/usr'. Dec 13 14:18:37.930508 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:37.931775 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:18:37.933136 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:37.934152 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:37.953791 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:37.955891 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:37.956894 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:37.956993 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:37.957090 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:37.959517 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:18:37.960889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:37.961024 systemd[1]: Finished modprobe@dm_mod.service. 
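[Annotation] Two more numbers above can be cross-checked: the sysext image attached as loop0/loop1 reports "capacity change from 0 to 211296" (the kernel counts in 512-byte sectors), and fsck.fat reports the EFI partition at "119291/258078 clusters":

    # Sanity checks on the loop-device capacity and FAT usage logged above.
    sectors = 211296
    print(f"sysext image: {sectors * 512 / 2**20:.1f} MiB")     # ~103.2 MiB

    used, total = 119291, 258078
    print(f"/dev/vda1 clusters in use: {used / total:.1%}")     # ~46.2%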
Dec 13 14:18:37.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.962507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:37.962630 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:37.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.964068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:37.964156 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:37.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.965589 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:37.965690 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:37.966510 systemd[1]: Finished systemd-sysext.service. Dec 13 14:18:37.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.968699 systemd[1]: Starting ensure-sysext.service... Dec 13 14:18:37.970533 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:18:37.971955 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:18:37.977729 systemd[1]: Finished ldconfig.service. Dec 13 14:18:37.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:37.978884 systemd[1]: Reloading. Dec 13 14:18:37.979816 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:18:37.980618 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:18:37.982239 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
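[Annotation] The "Duplicate line for path ..., ignoring" warnings above come from systemd-tmpfiles merging its tmpfiles.d fragments: once a path has been claimed by one line, later lines for the same path are dropped with a warning. A deliberately simplified model of that merge (the file names and line contents here are illustrative, not taken from this system):

    # Simplified model of tmpfiles.d merging; first line for a path wins.
    def merge_tmpfiles(lines):
        seen, kept = {}, []
        for src, line in lines:
            path = line.split()[1]
            if path in seen:
                print(f'{src}: Duplicate line for path "{path}", ignoring.')
                continue
            seen[path] = src
            kept.append(line)
        return kept

    merge_tmpfiles([
        ("tmpfiles.d/tmp.conf",    "d /run/lock 0755 root root -"),
        ("tmpfiles.d/legacy.conf", "d /run/lock 1777 root root -"),
    ])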
Dec 13 14:18:38.044573 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-12-13T14:18:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:38.044630 /usr/lib/systemd/system-generators/torcx-generator[1093]: time="2024-12-13T14:18:38Z" level=info msg="torcx already run" Dec 13 14:18:38.106978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:38.106994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:38.125898 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:38.191000 audit: BPF prog-id=30 op=LOAD Dec 13 14:18:38.191000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:18:38.191000 audit: BPF prog-id=31 op=LOAD Dec 13 14:18:38.191000 audit: BPF prog-id=32 op=LOAD Dec 13 14:18:38.191000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:18:38.191000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:18:38.192000 audit: BPF prog-id=33 op=LOAD Dec 13 14:18:38.192000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:18:38.193000 audit: BPF prog-id=34 op=LOAD Dec 13 14:18:38.193000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:18:38.193000 audit: BPF prog-id=35 op=LOAD Dec 13 14:18:38.193000 audit: BPF prog-id=36 op=LOAD Dec 13 14:18:38.193000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:18:38.193000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:18:38.194000 audit: BPF prog-id=37 op=LOAD Dec 13 14:18:38.194000 audit: BPF prog-id=38 op=LOAD Dec 13 14:18:38.194000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:18:38.194000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:18:38.197329 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:18:38.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.201448 systemd[1]: Starting audit-rules.service... Dec 13 14:18:38.203451 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:18:38.205404 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:18:38.206000 audit: BPF prog-id=39 op=LOAD Dec 13 14:18:38.207860 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:38.208000 audit: BPF prog-id=40 op=LOAD Dec 13 14:18:38.210344 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:18:38.212410 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:18:38.213656 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:18:38.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.215000 audit[1146]: SYSTEM_BOOT pid=1146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:38.219426 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.220685 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:38.222532 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:38.224263 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:38.225048 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.225192 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:38.225323 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:38.226767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:38.226887 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:38.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.228287 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:38.228383 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:38.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.229701 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:18:38.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.231175 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:38.231278 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:38.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.233390 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:18:38.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:38.236302 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
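[Annotation] During the reload above, systemd warned that locksmithd.service still uses the cgroup-v1 settings CPUShares= and MemoryLimit=. For CPU, systemd translates v1 shares to v2 weights roughly linearly, with 1024 shares corresponding to the default weight of 100, clamped to [1, 10000]; a sketch of that mapping:

    # Approximate CPUShares= (cgroup v1) -> CPUWeight= (cgroup v2) mapping.
    def shares_to_weight(shares: int) -> int:
        return max(1, min(10000, shares * 100 // 1024))

    for s in (2, 512, 1024, 4096):
        print(s, "->", shares_to_weight(s))
    # 2 -> 1, 512 -> 50, 1024 -> 100, 4096 -> 400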
Dec 13 14:18:38.237451 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:38.239109 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:38.241516 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:38.246000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:18:38.246000 audit[1160]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb43ad060 a2=420 a3=0 items=0 ppid=1135 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:38.246000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:18:38.247930 augenrules[1160]: No rules Dec 13 14:18:38.248020 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.248132 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:38.249278 systemd[1]: Starting systemd-update-done.service... Dec 13 14:18:38.250118 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:38.251179 systemd[1]: Finished audit-rules.service. Dec 13 14:18:38.252339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:38.252454 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:38.253724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:38.253887 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:38.255309 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:38.255436 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:38.256759 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:38.257014 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.259378 systemd[1]: Finished systemd-update-done.service. Dec 13 14:18:38.260648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.261976 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:38.263767 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:38.265601 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:38.267626 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:38.268482 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.268608 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:38.269585 systemd-timesyncd[1140]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:18:38.269636 systemd-timesyncd[1140]: Initial clock synchronization to Fri 2024-12-13 14:18:38.375860 UTC. Dec 13 14:18:38.269813 systemd[1]: Starting systemd-networkd-wait-online.service... 
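[Annotation] The audit PROCTITLE record above stores the command line as hex with NUL-separated argv. Decoding the logged string recovers the auditctl invocation that loaded the (empty, per augenrules "No rules") rule set:

    # Decode the hex proctitle from the audit record above.
    hexdata = ("2F7362696E2F617564697463746C002D52002F6574632F6175"
               "6469742F61756469742E72756C6573")
    argv = bytes.fromhex(hexdata).split(b"\x00")
    print([a.decode() for a in argv])
    # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']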
Dec 13 14:18:38.270802 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:38.271878 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:18:38.273571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:38.273694 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:38.275005 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:38.275111 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:38.276312 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:38.276416 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:38.277609 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:38.277708 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:38.279113 systemd[1]: Reached target time-set.target. Dec 13 14:18:38.279981 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:38.280013 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.280978 systemd[1]: Finished ensure-sysext.service. Dec 13 14:18:38.290623 systemd-resolved[1138]: Positive Trust Anchors: Dec 13 14:18:38.290634 systemd-resolved[1138]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:38.290660 systemd-resolved[1138]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:38.299679 systemd-resolved[1138]: Defaulting to hostname 'linux'. Dec 13 14:18:38.301174 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:38.302211 systemd[1]: Reached target network.target. Dec 13 14:18:38.303082 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:38.304028 systemd[1]: Reached target sysinit.target. Dec 13 14:18:38.305111 systemd[1]: Started motdgen.path. Dec 13 14:18:38.305916 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:18:38.307251 systemd[1]: Started logrotate.timer. Dec 13 14:18:38.308129 systemd[1]: Started mdadm.timer. Dec 13 14:18:38.308915 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:18:38.309842 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:18:38.309894 systemd[1]: Reached target paths.target. Dec 13 14:18:38.316090 systemd[1]: Reached target timers.target. Dec 13 14:18:38.317309 systemd[1]: Listening on dbus.socket. Dec 13 14:18:38.319160 systemd[1]: Starting docker.socket... Dec 13 14:18:38.322562 systemd[1]: Listening on sshd.socket. Dec 13 14:18:38.323646 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:38.324108 systemd[1]: Listening on docker.socket. Dec 13 14:18:38.325109 systemd[1]: Reached target sockets.target. Dec 13 14:18:38.325973 systemd[1]: Reached target basic.target. 
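[Annotation] The timesyncd lines above also show how far the guest clock was off: the synchronization message was logged at local time 14:18:38.269636 but set the clock to 14:18:38.375860 UTC, an initial step of roughly +106 ms against the server at 10.0.0.1:

    # Initial clock step implied by the systemd-timesyncd messages above.
    from datetime import datetime

    logged = datetime.fromisoformat("2024-12-13 14:18:38.269636")
    synced = datetime.fromisoformat("2024-12-13 14:18:38.375860")
    print(f"{(synced - logged).total_seconds() * 1000:.1f} ms")  # ~106.2 ms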
Dec 13 14:18:38.326918 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.326943 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:38.327833 systemd[1]: Starting containerd.service... Dec 13 14:18:38.333901 systemd[1]: Starting dbus.service... Dec 13 14:18:38.335749 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:18:38.337950 systemd[1]: Starting extend-filesystems.service... Dec 13 14:18:38.338957 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:18:38.340562 jq[1178]: false Dec 13 14:18:38.339813 systemd[1]: Starting motdgen.service... Dec 13 14:18:38.341959 systemd[1]: Starting prepare-helm.service... Dec 13 14:18:38.343781 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:18:38.361351 dbus-daemon[1177]: [system] SELinux support is enabled Dec 13 14:18:38.401922 systemd[1]: Starting sshd-keygen.service... Dec 13 14:18:38.405199 systemd[1]: Starting systemd-logind.service... Dec 13 14:18:38.405974 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:38.406058 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:18:38.406501 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:18:38.407227 systemd[1]: Starting update-engine.service... Dec 13 14:18:38.409783 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:18:38.411580 systemd[1]: Started dbus.service. Dec 13 14:18:38.414181 jq[1196]: true Dec 13 14:18:38.415934 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:18:38.416076 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:18:38.417050 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:18:38.417222 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:18:38.422523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:18:38.422568 systemd[1]: Reached target system-config.target. Dec 13 14:18:38.423487 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:18:38.423513 systemd[1]: Reached target user-config.target. 
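[Annotation] For reference, the DHCPv4 lease systemd-networkd logged earlier (10.0.0.27/16 acquired from 10.0.0.1) can be sanity-checked with the stdlib ipaddress module, confirming the gateway is on-link for that prefix:

    # Check the DHCPv4 lease recorded earlier in this log.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.27/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    print(iface.network)             # 10.0.0.0/16
    print(gateway in iface.network)  # True: gateway is on-link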
Dec 13 14:18:38.423644 extend-filesystems[1179]: Found loop1 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found sr0 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda1 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda2 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda3 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found usr Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda4 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda6 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda7 Dec 13 14:18:38.426533 extend-filesystems[1179]: Found vda9 Dec 13 14:18:38.426533 extend-filesystems[1179]: Checking size of /dev/vda9 Dec 13 14:18:38.424726 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:18:38.436249 jq[1200]: true Dec 13 14:18:38.424879 systemd[1]: Finished motdgen.service. Dec 13 14:18:38.444769 tar[1199]: linux-amd64/helm Dec 13 14:18:38.454425 update_engine[1194]: I1213 14:18:38.454214 1194 main.cc:92] Flatcar Update Engine starting Dec 13 14:18:38.456631 systemd[1]: Started update-engine.service. Dec 13 14:18:38.456766 update_engine[1194]: I1213 14:18:38.456641 1194 update_check_scheduler.cc:74] Next update check in 11m0s Dec 13 14:18:38.459238 systemd[1]: Started locksmithd.service. Dec 13 14:18:38.478912 extend-filesystems[1179]: Resized partition /dev/vda9 Dec 13 14:18:38.499422 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:18:38.500435 systemd-logind[1192]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:18:38.500450 systemd-logind[1192]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:18:38.501452 systemd-logind[1192]: New seat seat0. Dec 13 14:18:38.507864 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:18:38.511310 systemd[1]: Started systemd-logind.service. Dec 13 14:18:38.567181 env[1201]: time="2024-12-13T14:18:38.567077939Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:18:38.593123 env[1201]: time="2024-12-13T14:18:38.593038444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:18:38.593348 env[1201]: time="2024-12-13T14:18:38.593324501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595201 env[1201]: time="2024-12-13T14:18:38.595164281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595201 env[1201]: time="2024-12-13T14:18:38.595197193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595497 env[1201]: time="2024-12-13T14:18:38.595446220Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595497 env[1201]: time="2024-12-13T14:18:38.595476416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595592 env[1201]: time="2024-12-13T14:18:38.595503788Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:18:38.595592 env[1201]: time="2024-12-13T14:18:38.595516521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595650 env[1201]: time="2024-12-13T14:18:38.595623372Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:38.595976 env[1201]: time="2024-12-13T14:18:38.595944494Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:38.596196 env[1201]: time="2024-12-13T14:18:38.596147545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:38.596196 env[1201]: time="2024-12-13T14:18:38.596192389Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:18:38.596331 env[1201]: time="2024-12-13T14:18:38.596266027Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:18:38.596331 env[1201]: time="2024-12-13T14:18:38.596285974Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:18:38.630907 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:18:38.643502 locksmithd[1225]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:18:38.725216 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:38.781594 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:18:38.781594 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:18:38.781594 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:18:38.725276 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:38.788450 bash[1219]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:18:38.788825 extend-filesystems[1179]: Resized filesystem in /dev/vda9 Dec 13 14:18:38.781839 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:18:38.782021 systemd[1]: Finished extend-filesystems.service. Dec 13 14:18:38.784921 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:18:38.791256 env[1201]: time="2024-12-13T14:18:38.791141368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:18:38.791323 env[1201]: time="2024-12-13T14:18:38.791293203Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:18:38.791323 env[1201]: time="2024-12-13T14:18:38.791315314Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791387950Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791425791Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791441701Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791468812Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791506402Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791525 env[1201]: time="2024-12-13T14:18:38.791522984Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791693 env[1201]: time="2024-12-13T14:18:38.791540837Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791693 env[1201]: time="2024-12-13T14:18:38.791553992Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.791693 env[1201]: time="2024-12-13T14:18:38.791586813Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:18:38.791858 env[1201]: time="2024-12-13T14:18:38.791816494Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:18:38.792005 env[1201]: time="2024-12-13T14:18:38.791978368Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:18:38.792574 env[1201]: time="2024-12-13T14:18:38.792520244Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:18:38.792708 env[1201]: time="2024-12-13T14:18:38.792593461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792708 env[1201]: time="2024-12-13T14:18:38.792608680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:18:38.792708 env[1201]: time="2024-12-13T14:18:38.792701363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792769 env[1201]: time="2024-12-13T14:18:38.792714518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792769 env[1201]: time="2024-12-13T14:18:38.792730598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792769 env[1201]: time="2024-12-13T14:18:38.792745386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792769 env[1201]: time="2024-12-13T14:18:38.792761546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792939 env[1201]: time="2024-12-13T14:18:38.792779229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792939 env[1201]: time="2024-12-13T14:18:38.792791462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 14:18:38.792939 env[1201]: time="2024-12-13T14:18:38.792801962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.792939 env[1201]: time="2024-12-13T14:18:38.792819745Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:18:38.793037 env[1201]: time="2024-12-13T14:18:38.792990436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.793037 env[1201]: time="2024-12-13T14:18:38.793008019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.793037 env[1201]: time="2024-12-13T14:18:38.793020893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:18:38.793037 env[1201]: time="2024-12-13T14:18:38.793031953Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:18:38.793115 env[1201]: time="2024-12-13T14:18:38.793046250Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:18:38.793115 env[1201]: time="2024-12-13T14:18:38.793061218Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:18:38.793115 env[1201]: time="2024-12-13T14:18:38.793086355Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:18:38.793180 env[1201]: time="2024-12-13T14:18:38.793133173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:18:38.793422 env[1201]: time="2024-12-13T14:18:38.793370609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:18:38.794333 env[1201]: time="2024-12-13T14:18:38.793433266Z" level=info msg="Connect containerd service" Dec 13 14:18:38.794333 env[1201]: time="2024-12-13T14:18:38.793501895Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:18:38.794333 env[1201]: time="2024-12-13T14:18:38.794168455Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:18:38.794570 env[1201]: time="2024-12-13T14:18:38.794525975Z" level=info msg="Start subscribing containerd event" Dec 13 14:18:38.794626 env[1201]: time="2024-12-13T14:18:38.794591198Z" level=info msg="Start recovering state" Dec 13 14:18:38.794673 env[1201]: time="2024-12-13T14:18:38.794657502Z" level=info msg="Start event monitor" Dec 13 14:18:38.794721 env[1201]: time="2024-12-13T14:18:38.794676618Z" level=info msg="Start snapshots syncer" Dec 13 14:18:38.794721 env[1201]: time="2024-12-13T14:18:38.794688139Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:18:38.794721 env[1201]: time="2024-12-13T14:18:38.794712575Z" level=info msg="Start streaming server" Dec 13 14:18:38.795021 env[1201]: time="2024-12-13T14:18:38.794998972Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 14:18:38.795066 env[1201]: time="2024-12-13T14:18:38.795045610Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:18:38.795317 env[1201]: time="2024-12-13T14:18:38.795187055Z" level=info msg="containerd successfully booted in 0.264090s" Dec 13 14:18:38.795253 systemd[1]: Started containerd.service. Dec 13 14:18:39.148258 tar[1199]: linux-amd64/LICENSE Dec 13 14:18:39.148258 tar[1199]: linux-amd64/README.md Dec 13 14:18:39.152582 systemd[1]: Finished prepare-helm.service. Dec 13 14:18:39.224435 systemd-networkd[1026]: eth0: Gained IPv6LL Dec 13 14:18:39.226244 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:18:39.227658 systemd[1]: Reached target network-online.target. Dec 13 14:18:39.230044 systemd[1]: Starting kubelet.service... Dec 13 14:18:39.414142 sshd_keygen[1198]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:18:39.432817 systemd[1]: Finished sshd-keygen.service. Dec 13 14:18:39.435393 systemd[1]: Starting issuegen.service... Dec 13 14:18:39.441156 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:18:39.441319 systemd[1]: Finished issuegen.service. Dec 13 14:18:39.443612 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:18:39.448853 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:18:39.451321 systemd[1]: Started getty@tty1.service. Dec 13 14:18:39.453418 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:18:39.454625 systemd[1]: Reached target getty.target. Dec 13 14:18:39.806749 systemd[1]: Started kubelet.service. Dec 13 14:18:39.808199 systemd[1]: Reached target multi-user.target. Dec 13 14:18:39.810401 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:18:39.817092 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:18:39.817257 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:18:39.818702 systemd[1]: Startup finished in 854ms (kernel) + 5.461s (initrd) + 7.399s (userspace) = 13.715s. Dec 13 14:18:40.414989 kubelet[1257]: E1213 14:18:40.414904 1257 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:40.417699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:40.417842 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:40.418105 systemd[1]: kubelet.service: Consumed 1.070s CPU time. Dec 13 14:18:47.955442 systemd[1]: Created slice system-sshd.slice. Dec 13 14:18:47.956532 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:51886.service. Dec 13 14:18:47.998041 sshd[1268]: Accepted publickey for core from 10.0.0.1 port 51886 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:47.999908 sshd[1268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.009825 systemd-logind[1192]: New session 1 of user core. Dec 13 14:18:48.010970 systemd[1]: Created slice user-500.slice. Dec 13 14:18:48.012330 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:18:48.021125 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:18:48.022763 systemd[1]: Starting user@500.service... 
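A few records up, the kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist; on a node like this the file only appears once a bootstrapper such as kubeadm writes it, so the unit keeps failing until then. A minimal sketch of that precondition; the placeholder config below is hypothetical, a real bootstrapper generates a much fuller file.

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    # Hypothetical minimal KubeletConfiguration, for illustration only.
    # cgroupDriver matches the SystemdCgroup:true in the containerd config above.
    PLACEHOLDER = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
    )

    if not CONFIG.exists():
        # The state this log shows: "open /var/lib/kubelet/config.yaml:
        # no such file or directory", after which systemd schedules a restart.
        print(f"{CONFIG} missing; kubelet exits 1 until a bootstrapper writes it")
    else:
        print(CONFIG.read_text())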
Dec 13 14:18:48.025778 (systemd)[1271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.101492 systemd[1271]: Queued start job for default target default.target. Dec 13 14:18:48.102017 systemd[1271]: Reached target paths.target. Dec 13 14:18:48.102051 systemd[1271]: Reached target sockets.target. Dec 13 14:18:48.102069 systemd[1271]: Reached target timers.target. Dec 13 14:18:48.102083 systemd[1271]: Reached target basic.target. Dec 13 14:18:48.102131 systemd[1271]: Reached target default.target. Dec 13 14:18:48.102163 systemd[1271]: Startup finished in 70ms. Dec 13 14:18:48.102315 systemd[1]: Started user@500.service. Dec 13 14:18:48.103489 systemd[1]: Started session-1.scope. Dec 13 14:18:48.158194 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:51898.service. Dec 13 14:18:48.198164 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 51898 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:48.200132 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.205052 systemd-logind[1192]: New session 2 of user core. Dec 13 14:18:48.206415 systemd[1]: Started session-2.scope. Dec 13 14:18:48.263289 sshd[1280]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:48.267374 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:51906.service. Dec 13 14:18:48.269264 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:51898.service: Deactivated successfully. Dec 13 14:18:48.269965 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:18:48.270475 systemd-logind[1192]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:18:48.271299 systemd-logind[1192]: Removed session 2. Dec 13 14:18:48.303416 sshd[1285]: Accepted publickey for core from 10.0.0.1 port 51906 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:48.304523 sshd[1285]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.307823 systemd-logind[1192]: New session 3 of user core. Dec 13 14:18:48.308546 systemd[1]: Started session-3.scope. Dec 13 14:18:48.357503 sshd[1285]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:48.361380 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:51914.service. Dec 13 14:18:48.361839 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:51906.service: Deactivated successfully. Dec 13 14:18:48.362371 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:18:48.362848 systemd-logind[1192]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:18:48.363770 systemd-logind[1192]: Removed session 3. Dec 13 14:18:48.397515 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 51914 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:48.398696 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.401837 systemd-logind[1192]: New session 4 of user core. Dec 13 14:18:48.402521 systemd[1]: Started session-4.scope. Dec 13 14:18:48.456242 sshd[1291]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:48.458984 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:51914.service: Deactivated successfully. Dec 13 14:18:48.459479 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:18:48.459942 systemd-logind[1192]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:18:48.460910 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:51928.service. Dec 13 14:18:48.461543 systemd-logind[1192]: Removed session 4. 
Dec 13 14:18:48.497875 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 51928 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:18:48.499325 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:48.503237 systemd-logind[1192]: New session 5 of user core. Dec 13 14:18:48.503987 systemd[1]: Started session-5.scope. Dec 13 14:18:48.563475 sudo[1301]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:18:48.563712 sudo[1301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:48.621125 systemd[1]: Starting docker.service... Dec 13 14:18:48.689203 env[1313]: time="2024-12-13T14:18:48.689135671Z" level=info msg="Starting up" Dec 13 14:18:48.690778 env[1313]: time="2024-12-13T14:18:48.690731354Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:48.690778 env[1313]: time="2024-12-13T14:18:48.690763693Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:48.690891 env[1313]: time="2024-12-13T14:18:48.690785359Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:48.690891 env[1313]: time="2024-12-13T14:18:48.690798466Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:48.697013 env[1313]: time="2024-12-13T14:18:48.696968331Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:48.697013 env[1313]: time="2024-12-13T14:18:48.697003144Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:48.697112 env[1313]: time="2024-12-13T14:18:48.697031188Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:48.697112 env[1313]: time="2024-12-13T14:18:48.697045260Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:49.585363 env[1313]: time="2024-12-13T14:18:49.585316976Z" level=info msg="Loading containers: start." Dec 13 14:18:50.318903 kernel: Initializing XFRM netlink socket Dec 13 14:18:50.349901 env[1313]: time="2024-12-13T14:18:50.349826516Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:18:50.400535 systemd-networkd[1026]: docker0: Link UP Dec 13 14:18:50.477535 env[1313]: time="2024-12-13T14:18:50.477464548Z" level=info msg="Loading containers: done." Dec 13 14:18:50.491012 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:18:50.491196 systemd[1]: Stopped kubelet.service. Dec 13 14:18:50.491247 systemd[1]: kubelet.service: Consumed 1.070s CPU time. Dec 13 14:18:50.492634 systemd[1]: Starting kubelet.service... Dec 13 14:18:50.495707 env[1313]: time="2024-12-13T14:18:50.495646294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:18:50.495979 env[1313]: time="2024-12-13T14:18:50.495877411Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:18:50.496055 env[1313]: time="2024-12-13T14:18:50.495988016Z" level=info msg="Daemon has completed initialization" Dec 13 14:18:50.586081 systemd[1]: Started kubelet.service. 
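dockerd notes above that the default bridge docker0 is assigned 172.17.0.0/16 and that --bip can change it. A short sketch with the ipaddress module of what that pool provides; taking the first host address as the bridge/gateway address is the usual convention and is stated here as an assumption.

    import ipaddress

    bridge = ipaddress.ip_network("172.17.0.0/16")
    print("network:", bridge)                           # 172.17.0.0/16
    print("host addresses:", bridge.num_addresses - 2)  # 65534
    print("likely gateway:", next(bridge.hosts()))      # 172.17.0.1, by convention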
Dec 13 14:18:50.762799 systemd[1]: Started docker.service. Dec 13 14:18:50.767564 env[1313]: time="2024-12-13T14:18:50.767494224Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:18:50.803018 kubelet[1420]: E1213 14:18:50.802922 1420 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:50.806034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:50.806151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:51.860612 env[1201]: time="2024-12-13T14:18:51.860554635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:18:52.540110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478526408.mount: Deactivated successfully. Dec 13 14:18:55.168317 env[1201]: time="2024-12-13T14:18:55.168252950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.224187 env[1201]: time="2024-12-13T14:18:55.224141642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.254809 env[1201]: time="2024-12-13T14:18:55.254760291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.302525 env[1201]: time="2024-12-13T14:18:55.302469188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.303340 env[1201]: time="2024-12-13T14:18:55.303292931Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:18:55.318297 env[1201]: time="2024-12-13T14:18:55.318254640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:18:59.077380 env[1201]: time="2024-12-13T14:18:59.077284090Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:59.121794 env[1201]: time="2024-12-13T14:18:59.121726004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:59.135347 env[1201]: time="2024-12-13T14:18:59.135254616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:59.146336 env[1201]: time="2024-12-13T14:18:59.146271113Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:59.147106 env[1201]: time="2024-12-13T14:18:59.147048744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:18:59.175432 env[1201]: time="2024-12-13T14:18:59.175356621Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:19:01.056941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:19:01.057117 systemd[1]: Stopped kubelet.service. Dec 13 14:19:01.058548 systemd[1]: Starting kubelet.service... Dec 13 14:19:01.143651 systemd[1]: Started kubelet.service. Dec 13 14:19:01.201907 kubelet[1480]: E1213 14:19:01.201829 1480 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:19:01.203529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:19:01.203647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:19:01.831130 env[1201]: time="2024-12-13T14:19:01.831034351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.875414 env[1201]: time="2024-12-13T14:19:01.875320396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.899568 env[1201]: time="2024-12-13T14:19:01.899472817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.938873 env[1201]: time="2024-12-13T14:19:01.938768316Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.939917 env[1201]: time="2024-12-13T14:19:01.939840450Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:19:01.949688 env[1201]: time="2024-12-13T14:19:01.949650579Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:19:04.669133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182845222.mount: Deactivated successfully. 
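The mount unit names above, e.g. var-lib-containerd-tmpmounts-containerd\x2dmount1182845222.mount, show systemd's unit-name escaping: "/" separators become "-", and a literal "-" inside a path component becomes "\x2d". A sketch of that encoding for the common cases; not a full reimplementation of systemd-escape (leading dots, empty paths, and similar edge cases are ignored).

    def systemd_escape_path(path: str) -> str:
        """Escape a filesystem path into a systemd unit name (common cases only)."""
        escaped_parts = []
        for part in path.strip("/").split("/"):
            out = []
            for ch in part:
                if ch.isalnum() or ch in ":_.":
                    out.append(ch)
                else:  # anything else, notably '-', becomes \xNN per byte
                    out.extend(f"\\x{b:02x}" for b in ch.encode())
            escaped_parts.append("".join(out))
        return "-".join(escaped_parts)

    print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount1182845222") + ".mount")
    # -> var-lib-containerd-tmpmounts-containerd\x2dmount1182845222.mount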
Dec 13 14:19:05.700297 env[1201]: time="2024-12-13T14:19:05.700194188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:05.835326 env[1201]: time="2024-12-13T14:19:05.835259360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:05.889413 env[1201]: time="2024-12-13T14:19:05.889332491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:05.989611 env[1201]: time="2024-12-13T14:19:05.989419178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:05.990075 env[1201]: time="2024-12-13T14:19:05.990006040Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:19:06.336557 env[1201]: time="2024-12-13T14:19:06.336421596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:19:07.019687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4033905576.mount: Deactivated successfully. Dec 13 14:19:08.336455 env[1201]: time="2024-12-13T14:19:08.336380065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.338666 env[1201]: time="2024-12-13T14:19:08.338641680Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.341376 env[1201]: time="2024-12-13T14:19:08.341313077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.343091 env[1201]: time="2024-12-13T14:19:08.343064976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.344155 env[1201]: time="2024-12-13T14:19:08.344116539Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:19:08.358341 env[1201]: time="2024-12-13T14:19:08.358306557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:19:08.969248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533939421.mount: Deactivated successfully. 
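Each image pull above ends with the same triple: the requested tag, the repository digest, and the local image ID that "returns image reference" reports. Using the coredns pull that just completed as data (the dataclass is mine, only the values come from the log):

    from dataclasses import dataclass

    @dataclass
    class PulledImage:
        tag: str          # mutable name that was requested
        repo_digest: str  # content-addressed manifest digest, stable across registries
        image_id: str     # local config hash; what "returns image reference" prints

    coredns = PulledImage(
        tag="registry.k8s.io/coredns/coredns:v1.11.1",
        repo_digest="registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
        image_id="sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
    )
    # Pinning workloads by repo_digest rather than by tag makes pulls reproducible.
    print(coredns.repo_digest)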
Dec 13 14:19:08.975025 env[1201]: time="2024-12-13T14:19:08.974980158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.976928 env[1201]: time="2024-12-13T14:19:08.976876669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.978376 env[1201]: time="2024-12-13T14:19:08.978346420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.979629 env[1201]: time="2024-12-13T14:19:08.979577418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:08.980041 env[1201]: time="2024-12-13T14:19:08.980001742Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:19:09.005297 env[1201]: time="2024-12-13T14:19:09.005259367Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:19:09.689675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688953635.mount: Deactivated successfully. Dec 13 14:19:11.454704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:19:11.454994 systemd[1]: Stopped kubelet.service. Dec 13 14:19:11.456760 systemd[1]: Starting kubelet.service... Dec 13 14:19:11.542029 systemd[1]: Started kubelet.service. Dec 13 14:19:11.643029 kubelet[1520]: E1213 14:19:11.642962 1520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:19:11.644913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:19:11.645056 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
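By this point the kubelet has failed three times and systemd reschedules it roughly every ten seconds (14:18:40 -> 14:18:50 -> 14:19:01 -> 14:19:11), consistent with a Restart=always unit and a ~10 s RestartSec; the exact unit settings are an assumption inferred from these timestamps. A sketch of the resulting schedule:

    from datetime import datetime, timedelta

    RESTART_SEC = timedelta(seconds=10)  # assumed from the ~10 s gaps in the log

    last_failure = datetime(2024, 12, 13, 14, 18, 40)
    for counter in range(1, 4):
        start = last_failure + RESTART_SEC  # systemd waits RestartSec, then restarts
        print(f"restart counter {counter}: start ~{start.time()}")
        last_failure = start + timedelta(seconds=1)  # each attempt dies ~1 s in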
Dec 13 14:19:14.608869 env[1201]: time="2024-12-13T14:19:14.608795215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:14.611203 env[1201]: time="2024-12-13T14:19:14.611139016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:14.613502 env[1201]: time="2024-12-13T14:19:14.613456253Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:14.615825 env[1201]: time="2024-12-13T14:19:14.615781034Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:14.616664 env[1201]: time="2024-12-13T14:19:14.616616145Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:19:17.560777 systemd[1]: Stopped kubelet.service. Dec 13 14:19:17.563158 systemd[1]: Starting kubelet.service... Dec 13 14:19:17.579633 systemd[1]: Reloading. Dec 13 14:19:17.642652 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-12-13T14:19:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:19:17.642694 /usr/lib/systemd/system-generators/torcx-generator[1629]: time="2024-12-13T14:19:17Z" level=info msg="torcx already run" Dec 13 14:19:18.043970 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:19:18.043987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:19:18.060764 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:19:18.136332 systemd[1]: Started kubelet.service. Dec 13 14:19:18.138052 systemd[1]: Stopping kubelet.service... Dec 13 14:19:18.138304 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:19:18.138478 systemd[1]: Stopped kubelet.service. Dec 13 14:19:18.139955 systemd[1]: Starting kubelet.service... Dec 13 14:19:18.213731 systemd[1]: Started kubelet.service. Dec 13 14:19:18.262008 kubelet[1676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:18.262008 kubelet[1676]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:19:18.262008 kubelet[1676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:18.262457 kubelet[1676]: I1213 14:19:18.262039 1676 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:19:18.549671 kubelet[1676]: I1213 14:19:18.549616 1676 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:19:18.549671 kubelet[1676]: I1213 14:19:18.549648 1676 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:19:18.549932 kubelet[1676]: I1213 14:19:18.549876 1676 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:19:18.578680 kubelet[1676]: E1213 14:19:18.578626 1676 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.587074 kubelet[1676]: I1213 14:19:18.587028 1676 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:18.614323 kubelet[1676]: I1213 14:19:18.614285 1676 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:19:18.614536 kubelet[1676]: I1213 14:19:18.614511 1676 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:19:18.614695 kubelet[1676]: I1213 14:19:18.614672 1676 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:19:18.614786 kubelet[1676]: I1213 14:19:18.614697 1676 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:19:18.614786 kubelet[1676]: I1213 14:19:18.614705 1676 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
14:19:18.614839 kubelet[1676]: I1213 14:19:18.614818 1676 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:18.614946 kubelet[1676]: I1213 14:19:18.614929 1676 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:19:18.614946 kubelet[1676]: I1213 14:19:18.614950 1676 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:19:18.615029 kubelet[1676]: I1213 14:19:18.614970 1676 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:19:18.615029 kubelet[1676]: I1213 14:19:18.614986 1676 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:19:18.615999 kubelet[1676]: W1213 14:19:18.615925 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.615999 kubelet[1676]: E1213 14:19:18.615985 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.615999 kubelet[1676]: W1213 14:19:18.615960 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.616366 kubelet[1676]: E1213 14:19:18.616028 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.617268 kubelet[1676]: I1213 14:19:18.617248 1676 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:19:18.626669 kubelet[1676]: I1213 14:19:18.626645 1676 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:19:18.626733 kubelet[1676]: W1213 14:19:18.626700 1676 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:19:18.627174 kubelet[1676]: I1213 14:19:18.627157 1676 server.go:1256] "Started kubelet" Dec 13 14:19:18.627398 kubelet[1676]: I1213 14:19:18.627380 1676 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:19:18.627451 kubelet[1676]: I1213 14:19:18.627400 1676 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:19:18.627875 kubelet[1676]: I1213 14:19:18.627630 1676 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:19:18.628263 kubelet[1676]: I1213 14:19:18.628138 1676 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:19:18.642267 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
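The container manager dump above carries the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%. A small sketch of how a quantity- or percentage-style threshold is checked; the sample node state is made up for illustration.

    GiB = 1024 ** 3
    Mi = 1024 ** 2

    # (signal, threshold) pairs as logged: an absolute quantity in bytes,
    # or a fraction of the resource's capacity.
    HARD_EVICTION = [
        ("memory.available",  {"quantity": 100 * Mi}),
        ("nodefs.available",  {"percentage": 0.10}),
        ("nodefs.inodesFree", {"percentage": 0.05}),
        ("imagefs.available", {"percentage": 0.15}),
    ]

    def breached(threshold: dict, available: float, capacity: float) -> bool:
        limit = threshold.get("quantity", threshold.get("percentage", 0.0) * capacity)
        return available < limit

    # Hypothetical node state: (available, capacity) per signal.
    state = {
        "memory.available":  (0.5 * GiB, 4 * GiB),
        "nodefs.available":  (0.6 * GiB, 7 * GiB),   # ~8.6% free -> breach
        "nodefs.inodesFree": (100_000, 1_000_000),
        "imagefs.available": (2 * GiB, 7 * GiB),
    }
    for signal, threshold in HARD_EVICTION:
        available, capacity = state[signal]
        print(signal, "EVICT" if breached(threshold, available, capacity) else "ok")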
Dec 13 14:19:18.643424 kubelet[1676]: I1213 14:19:18.643397 1676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:19:18.646547 kubelet[1676]: E1213 14:19:18.646521 1676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:19:18.646618 kubelet[1676]: I1213 14:19:18.646571 1676 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:19:18.646709 kubelet[1676]: I1213 14:19:18.646689 1676 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:19:18.646783 kubelet[1676]: I1213 14:19:18.646777 1676 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:19:18.649052 kubelet[1676]: E1213 14:19:18.648831 1676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Dec 13 14:19:18.649465 kubelet[1676]: I1213 14:19:18.649429 1676 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:19:18.649742 kubelet[1676]: W1213 14:19:18.649709 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.649802 kubelet[1676]: E1213 14:19:18.649746 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.649941 kubelet[1676]: E1213 14:19:18.649907 1676 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c25c6521e36a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:19:18.627140458 +0000 UTC m=+0.409941149,LastTimestamp:2024-12-13 14:19:18.627140458 +0000 UTC m=+0.409941149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:19:18.650677 kubelet[1676]: I1213 14:19:18.650663 1676 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:19:18.650677 kubelet[1676]: I1213 14:19:18.650675 1676 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:19:18.650869 kubelet[1676]: E1213 14:19:18.650814 1676 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:19:18.657813 kubelet[1676]: I1213 14:19:18.657772 1676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:19:18.658779 kubelet[1676]: I1213 14:19:18.658747 1676 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:19:18.658898 kubelet[1676]: I1213 14:19:18.658883 1676 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:19:18.659017 kubelet[1676]: I1213 14:19:18.658974 1676 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:19:18.659116 kubelet[1676]: E1213 14:19:18.659049 1676 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:19:18.659670 kubelet[1676]: W1213 14:19:18.659615 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.659726 kubelet[1676]: E1213 14:19:18.659687 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:18.662472 kubelet[1676]: I1213 14:19:18.662431 1676 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:19:18.662472 kubelet[1676]: I1213 14:19:18.662460 1676 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:19:18.662577 kubelet[1676]: I1213 14:19:18.662479 1676 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:18.748022 kubelet[1676]: I1213 14:19:18.747976 1676 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:18.748368 kubelet[1676]: E1213 14:19:18.748349 1676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 13 14:19:18.759633 kubelet[1676]: E1213 14:19:18.759592 1676 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:19:18.849634 kubelet[1676]: E1213 14:19:18.849516 1676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" Dec 13 14:19:18.950089 kubelet[1676]: I1213 14:19:18.950049 1676 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:18.950554 kubelet[1676]: E1213 14:19:18.950535 1676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 13 14:19:18.960651 kubelet[1676]: E1213 14:19:18.960618 1676 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:19:19.050022 kubelet[1676]: I1213 14:19:19.049963 1676 policy_none.go:49] "None policy: Start" Dec 13 14:19:19.051082 kubelet[1676]: I1213 14:19:19.051048 1676 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:19:19.051082 kubelet[1676]: I1213 14:19:19.051076 1676 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:19:19.060435 systemd[1]: Created slice kubepods.slice. Dec 13 14:19:19.064691 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:19:19.067229 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:19:19.080257 kubelet[1676]: I1213 14:19:19.080188 1676 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:19:19.080987 kubelet[1676]: I1213 14:19:19.080747 1676 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:19:19.082485 kubelet[1676]: E1213 14:19:19.082464 1676 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:19:19.250038 kubelet[1676]: E1213 14:19:19.250006 1676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" Dec 13 14:19:19.353319 kubelet[1676]: I1213 14:19:19.353256 1676 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:19.353740 kubelet[1676]: E1213 14:19:19.353720 1676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 13 14:19:19.361047 kubelet[1676]: I1213 14:19:19.360985 1676 topology_manager.go:215] "Topology Admit Handler" podUID="cef33b089f7fa50c6210cefc08362a97" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:19:19.362347 kubelet[1676]: I1213 14:19:19.362324 1676 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:19:19.363331 kubelet[1676]: I1213 14:19:19.363302 1676 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:19:19.368215 systemd[1]: Created slice kubepods-burstable-podcef33b089f7fa50c6210cefc08362a97.slice. Dec 13 14:19:19.375768 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. Dec 13 14:19:19.384124 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. 
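The "Failed to ensure lease exists, will retry" errors double their retry interval each time: 200ms, 400ms, then 800ms here, and 1.6s further down. A sketch of that doubling; the cap is an assumption, since the log only shows the first steps.

    def lease_retry_intervals(base_ms: int = 200, cap_ms: int = 7000):
        """Doubling backoff as observed (200, 400, 800, 1600 ms, ...).
        The 7 s cap is an assumption, not taken from this log."""
        interval = base_ms
        while True:
            yield interval
            interval = min(interval * 2, cap_ms)

    gen = lease_retry_intervals()
    print([next(gen) for _ in range(6)])  # [200, 400, 800, 1600, 3200, 6400]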
Dec 13 14:19:19.451087 kubelet[1676]: I1213 14:19:19.451011 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:19.451087 kubelet[1676]: I1213 14:19:19.451070 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:19.451087 kubelet[1676]: I1213 14:19:19.451094 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:19.451367 kubelet[1676]: I1213 14:19:19.451133 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:19.451367 kubelet[1676]: I1213 14:19:19.451150 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:19.451367 kubelet[1676]: I1213 14:19:19.451168 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:19:19.451367 kubelet[1676]: I1213 14:19:19.451184 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:19.451367 kubelet[1676]: I1213 14:19:19.451200 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:19.451532 kubelet[1676]: I1213 14:19:19.451220 1676 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:19.746738 kubelet[1676]: E1213 14:19:19.746686 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:19.746738 kubelet[1676]: E1213 14:19:19.746714 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:19.747063 kubelet[1676]: E1213 14:19:19.746711 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:19.747164 kubelet[1676]: W1213 14:19:19.747135 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:19.747216 kubelet[1676]: E1213 14:19:19.747173 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:19.747725 env[1201]: time="2024-12-13T14:19:19.747664842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:19.748039 env[1201]: time="2024-12-13T14:19:19.747664822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:19.748039 env[1201]: time="2024-12-13T14:19:19.747664842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cef33b089f7fa50c6210cefc08362a97,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:19.838976 kubelet[1676]: W1213 14:19:19.838871 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:19.838976 kubelet[1676]: E1213 14:19:19.838960 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.051298 kubelet[1676]: E1213 14:19:20.051211 1676 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s" Dec 13 14:19:20.139246 kubelet[1676]: W1213 14:19:20.139184 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.139246 kubelet[1676]: E1213 14:19:20.139248 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to 
list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.155440 kubelet[1676]: I1213 14:19:20.155410 1676 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:20.155683 kubelet[1676]: E1213 14:19:20.155658 1676 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Dec 13 14:19:20.210034 kubelet[1676]: W1213 14:19:20.210012 1676 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.210112 kubelet[1676]: E1213 14:19:20.210039 1676 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.258948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739305627.mount: Deactivated successfully. Dec 13 14:19:20.262924 env[1201]: time="2024-12-13T14:19:20.262883127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.266374 env[1201]: time="2024-12-13T14:19:20.266308728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.267815 env[1201]: time="2024-12-13T14:19:20.267770671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.268906 env[1201]: time="2024-12-13T14:19:20.268872916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.271883 env[1201]: time="2024-12-13T14:19:20.271821794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.273801 env[1201]: time="2024-12-13T14:19:20.273715381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.275657 env[1201]: time="2024-12-13T14:19:20.275588758Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.277806 env[1201]: time="2024-12-13T14:19:20.277748358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.279221 env[1201]: time="2024-12-13T14:19:20.279177637Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.280069 env[1201]: time="2024-12-13T14:19:20.280028418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.281589 env[1201]: time="2024-12-13T14:19:20.281537586Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.284103 env[1201]: time="2024-12-13T14:19:20.284052467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:20.313230 env[1201]: time="2024-12-13T14:19:20.313003991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:20.313230 env[1201]: time="2024-12-13T14:19:20.313113761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:20.313230 env[1201]: time="2024-12-13T14:19:20.313143261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:20.313663 env[1201]: time="2024-12-13T14:19:20.313556927Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ee5b395bf18fb98e8024e23d44ae768c07fed56d872756c9d8aa4e1504e747e pid=1716 runtime=io.containerd.runc.v2 Dec 13 14:19:20.322321 env[1201]: time="2024-12-13T14:19:20.322238281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:20.322321 env[1201]: time="2024-12-13T14:19:20.322284383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:20.322321 env[1201]: time="2024-12-13T14:19:20.322299353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:20.322495 env[1201]: time="2024-12-13T14:19:20.322438842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5e1ebb56cd9990c30fd03b8b9e7f050466ed5ee41a75eabcc8c1a18fee7a08a pid=1734 runtime=io.containerd.runc.v2 Dec 13 14:19:20.331164 env[1201]: time="2024-12-13T14:19:20.331060095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:20.331164 env[1201]: time="2024-12-13T14:19:20.331108061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:20.331164 env[1201]: time="2024-12-13T14:19:20.331121528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:20.331703 env[1201]: time="2024-12-13T14:19:20.331607520Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6d35eb9633556bfa8a56a7202f605624dabd23154de15b7d1a842f3cef6e24c pid=1758 runtime=io.containerd.runc.v2 Dec 13 14:19:20.356040 systemd[1]: Started cri-containerd-5ee5b395bf18fb98e8024e23d44ae768c07fed56d872756c9d8aa4e1504e747e.scope. Dec 13 14:19:20.368158 systemd[1]: Started cri-containerd-c5e1ebb56cd9990c30fd03b8b9e7f050466ed5ee41a75eabcc8c1a18fee7a08a.scope. Dec 13 14:19:20.394473 systemd[1]: Started cri-containerd-f6d35eb9633556bfa8a56a7202f605624dabd23154de15b7d1a842f3cef6e24c.scope. Dec 13 14:19:20.494104 env[1201]: time="2024-12-13T14:19:20.494064837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5e1ebb56cd9990c30fd03b8b9e7f050466ed5ee41a75eabcc8c1a18fee7a08a\"" Dec 13 14:19:20.496780 kubelet[1676]: E1213 14:19:20.496756 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:20.500426 env[1201]: time="2024-12-13T14:19:20.500348835Z" level=info msg="CreateContainer within sandbox \"c5e1ebb56cd9990c30fd03b8b9e7f050466ed5ee41a75eabcc8c1a18fee7a08a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:19:20.505164 env[1201]: time="2024-12-13T14:19:20.505101591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cef33b089f7fa50c6210cefc08362a97,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ee5b395bf18fb98e8024e23d44ae768c07fed56d872756c9d8aa4e1504e747e\"" Dec 13 14:19:20.505908 kubelet[1676]: E1213 14:19:20.505883 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:20.509040 env[1201]: time="2024-12-13T14:19:20.508996019Z" level=info msg="CreateContainer within sandbox \"5ee5b395bf18fb98e8024e23d44ae768c07fed56d872756c9d8aa4e1504e747e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:19:20.518421 env[1201]: time="2024-12-13T14:19:20.518352693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6d35eb9633556bfa8a56a7202f605624dabd23154de15b7d1a842f3cef6e24c\"" Dec 13 14:19:20.519123 kubelet[1676]: E1213 14:19:20.519093 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:20.520803 env[1201]: time="2024-12-13T14:19:20.520761591Z" level=info msg="CreateContainer within sandbox \"f6d35eb9633556bfa8a56a7202f605624dabd23154de15b7d1a842f3cef6e24c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:19:20.538086 env[1201]: time="2024-12-13T14:19:20.538029068Z" level=info msg="CreateContainer within sandbox \"c5e1ebb56cd9990c30fd03b8b9e7f050466ed5ee41a75eabcc8c1a18fee7a08a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cd68dae7cb04672b973d4259032fa3311f70ef2ae300b4f446fe30b427d889f6\"" Dec 13 14:19:20.538762 env[1201]: 
time="2024-12-13T14:19:20.538736813Z" level=info msg="StartContainer for \"cd68dae7cb04672b973d4259032fa3311f70ef2ae300b4f446fe30b427d889f6\"" Dec 13 14:19:20.543059 env[1201]: time="2024-12-13T14:19:20.543020952Z" level=info msg="CreateContainer within sandbox \"5ee5b395bf18fb98e8024e23d44ae768c07fed56d872756c9d8aa4e1504e747e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a865c64ad930af57583c8559c6ba4349229fbd4721128ccab3805849434710a\"" Dec 13 14:19:20.544014 env[1201]: time="2024-12-13T14:19:20.543970932Z" level=info msg="StartContainer for \"3a865c64ad930af57583c8559c6ba4349229fbd4721128ccab3805849434710a\"" Dec 13 14:19:20.549802 env[1201]: time="2024-12-13T14:19:20.549739810Z" level=info msg="CreateContainer within sandbox \"f6d35eb9633556bfa8a56a7202f605624dabd23154de15b7d1a842f3cef6e24c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"252537664e051749b0947e01c414768fe866b3d92f175d9ec29450f0fb6e874d\"" Dec 13 14:19:20.551608 env[1201]: time="2024-12-13T14:19:20.551565511Z" level=info msg="StartContainer for \"252537664e051749b0947e01c414768fe866b3d92f175d9ec29450f0fb6e874d\"" Dec 13 14:19:20.556462 systemd[1]: Started cri-containerd-cd68dae7cb04672b973d4259032fa3311f70ef2ae300b4f446fe30b427d889f6.scope. Dec 13 14:19:20.572274 systemd[1]: Started cri-containerd-3a865c64ad930af57583c8559c6ba4349229fbd4721128ccab3805849434710a.scope. Dec 13 14:19:20.578141 systemd[1]: Started cri-containerd-252537664e051749b0947e01c414768fe866b3d92f175d9ec29450f0fb6e874d.scope. Dec 13 14:19:20.658906 env[1201]: time="2024-12-13T14:19:20.657736832Z" level=info msg="StartContainer for \"cd68dae7cb04672b973d4259032fa3311f70ef2ae300b4f446fe30b427d889f6\" returns successfully" Dec 13 14:19:20.672924 kubelet[1676]: E1213 14:19:20.672877 1676 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.27:6443: connect: connection refused Dec 13 14:19:20.685510 env[1201]: time="2024-12-13T14:19:20.685455692Z" level=info msg="StartContainer for \"252537664e051749b0947e01c414768fe866b3d92f175d9ec29450f0fb6e874d\" returns successfully" Dec 13 14:19:20.703930 env[1201]: time="2024-12-13T14:19:20.703830323Z" level=info msg="StartContainer for \"3a865c64ad930af57583c8559c6ba4349229fbd4721128ccab3805849434710a\" returns successfully" Dec 13 14:19:20.756145 kubelet[1676]: E1213 14:19:20.756107 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:20.757539 kubelet[1676]: E1213 14:19:20.757507 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:20.763061 kubelet[1676]: E1213 14:19:20.763033 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:21.757475 kubelet[1676]: I1213 14:19:21.757433 1676 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:21.764842 kubelet[1676]: E1213 14:19:21.764798 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:22.086892 kubelet[1676]: E1213 14:19:22.086761 1676 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:19:22.245801 kubelet[1676]: I1213 14:19:22.245738 1676 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:19:22.310008 kubelet[1676]: E1213 14:19:22.309957 1676 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 14:19:22.617073 kubelet[1676]: I1213 14:19:22.617036 1676 apiserver.go:52] "Watching apiserver" Dec 13 14:19:22.647703 kubelet[1676]: I1213 14:19:22.647631 1676 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:19:22.770150 kubelet[1676]: E1213 14:19:22.770121 1676 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:22.770636 kubelet[1676]: E1213 14:19:22.770588 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:24.106929 update_engine[1194]: I1213 14:19:24.106860 1194 update_attempter.cc:509] Updating boot flags... Dec 13 14:19:24.900896 systemd[1]: Reloading. Dec 13 14:19:24.963259 kubelet[1676]: E1213 14:19:24.963227 1676 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:24.977042 /usr/lib/systemd/system-generators/torcx-generator[1987]: time="2024-12-13T14:19:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:19:24.977075 /usr/lib/systemd/system-generators/torcx-generator[1987]: time="2024-12-13T14:19:24Z" level=info msg="torcx already run" Dec 13 14:19:25.034646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:19:25.034667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:19:25.052244 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:19:25.143164 kubelet[1676]: I1213 14:19:25.143102 1676 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:25.143156 systemd[1]: Stopping kubelet.service... Dec 13 14:19:25.161472 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:19:25.161708 systemd[1]: Stopped kubelet.service. Dec 13 14:19:25.161766 systemd[1]: kubelet.service: Consumed 1.035s CPU time. Dec 13 14:19:25.163639 systemd[1]: Starting kubelet.service... Dec 13 14:19:25.240664 systemd[1]: Started kubelet.service. 
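The earlier mirror-pod failure ("no PriorityClass with name system-node-critical was found") is a bootstrap-ordering artifact: static pods start before the API server has installed its built-in priority classes, so the kubelet's first attempt to publish mirror pods is rejected and retried later. A sketch of the object the apiserver eventually provides, expressed via client-go (clientset construction omitted; the Value is the well-known constant for this built-in class):

package sketch

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createNodeCriticalClass(ctx context.Context, client kubernetes.Interface) error {
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "system-node-critical"},
		Value:         2000001000, // well-known value for this built-in class
		GlobalDefault: false,
		Description:   "Used for system critical pods that must not be moved from their current node.",
	}
	_, err := client.SchedulingV1().PriorityClasses().Create(ctx, pc, metav1.CreateOptions{})
	return err
}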
Dec 13 14:19:25.292422 kubelet[2033]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:25.292894 kubelet[2033]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:19:25.292971 kubelet[2033]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:25.293157 kubelet[2033]: I1213 14:19:25.293110 2033 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:19:25.299896 kubelet[2033]: I1213 14:19:25.299835 2033 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:19:25.299896 kubelet[2033]: I1213 14:19:25.299887 2033 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:19:25.300148 kubelet[2033]: I1213 14:19:25.300129 2033 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:19:25.301771 kubelet[2033]: I1213 14:19:25.301747 2033 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:19:25.303716 kubelet[2033]: I1213 14:19:25.303695 2033 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:25.311426 kubelet[2033]: I1213 14:19:25.311379 2033 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:19:25.319163 kubelet[2033]: I1213 14:19:25.319121 2033 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:19:25.319493 kubelet[2033]: I1213 14:19:25.319475 2033 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:19:25.319601 kubelet[2033]: I1213 14:19:25.319507 2033 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:19:25.319601 kubelet[2033]: I1213 14:19:25.319519 2033 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:19:25.319601 kubelet[2033]: I1213 14:19:25.319556 2033 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:25.319691 kubelet[2033]: I1213 14:19:25.319674 2033 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:19:25.319738 kubelet[2033]: I1213 14:19:25.319697 2033 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:19:25.319802 kubelet[2033]: I1213 14:19:25.319741 2033 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:19:25.319802 kubelet[2033]: I1213 14:19:25.319777 2033 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:19:25.320774 kubelet[2033]: I1213 14:19:25.320723 2033 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:19:25.321038 kubelet[2033]: I1213 14:19:25.321005 2033 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:19:25.321493 kubelet[2033]: I1213 14:19:25.321478 2033 server.go:1256] "Started kubelet" Dec 13 14:19:25.325958 kubelet[2033]: I1213 14:19:25.325920 2033 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:19:25.327191 kubelet[2033]: I1213 14:19:25.327162 2033 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:19:25.327505 kubelet[2033]: I1213 14:19:25.327479 2033 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:19:25.333249 kubelet[2033]: 
I1213 14:19:25.332532 2033 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:19:25.333518 kubelet[2033]: E1213 14:19:25.333329 2033 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:19:25.335283 kubelet[2033]: I1213 14:19:25.335261 2033 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:19:25.337215 kubelet[2033]: I1213 14:19:25.336711 2033 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:19:25.339331 kubelet[2033]: I1213 14:19:25.339300 2033 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:19:25.339464 kubelet[2033]: I1213 14:19:25.339419 2033 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:19:25.339589 kubelet[2033]: I1213 14:19:25.339573 2033 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:19:25.339682 kubelet[2033]: I1213 14:19:25.339668 2033 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:19:25.341119 kubelet[2033]: I1213 14:19:25.341093 2033 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:19:25.350217 kubelet[2033]: I1213 14:19:25.350178 2033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:19:25.351126 kubelet[2033]: I1213 14:19:25.351097 2033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:19:25.351126 kubelet[2033]: I1213 14:19:25.351127 2033 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:19:25.351230 kubelet[2033]: I1213 14:19:25.351149 2033 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:19:25.351230 kubelet[2033]: E1213 14:19:25.351203 2033 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:19:25.380583 kubelet[2033]: I1213 14:19:25.380554 2033 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:19:25.380787 kubelet[2033]: I1213 14:19:25.380773 2033 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:19:25.380890 kubelet[2033]: I1213 14:19:25.380876 2033 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:25.381208 kubelet[2033]: I1213 14:19:25.381194 2033 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:19:25.381357 kubelet[2033]: I1213 14:19:25.381342 2033 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:19:25.381456 kubelet[2033]: I1213 14:19:25.381442 2033 policy_none.go:49] "None policy: Start" Dec 13 14:19:25.382227 kubelet[2033]: I1213 14:19:25.382166 2033 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:19:25.382310 kubelet[2033]: I1213 14:19:25.382246 2033 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:19:25.382444 kubelet[2033]: I1213 14:19:25.382428 2033 state_mem.go:75] "Updated machine memory state" Dec 13 14:19:25.385932 kubelet[2033]: I1213 14:19:25.385884 2033 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:19:25.386169 kubelet[2033]: I1213 14:19:25.386148 2033 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:19:25.442046 kubelet[2033]: I1213 
14:19:25.441932 2033 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:25.451985 kubelet[2033]: I1213 14:19:25.451935 2033 topology_manager.go:215] "Topology Admit Handler" podUID="cef33b089f7fa50c6210cefc08362a97" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:19:25.452111 kubelet[2033]: I1213 14:19:25.452045 2033 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:19:25.452111 kubelet[2033]: I1213 14:19:25.452076 2033 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:19:25.472867 kubelet[2033]: E1213 14:19:25.472773 2033 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.475214 kubelet[2033]: I1213 14:19:25.475185 2033 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:19:25.475388 kubelet[2033]: I1213 14:19:25.475333 2033 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:19:25.484533 sudo[2066]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:19:25.484720 sudo[2066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:19:25.641464 kubelet[2033]: I1213 14:19:25.641401 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:25.641464 kubelet[2033]: I1213 14:19:25.641467 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:25.641722 kubelet[2033]: I1213 14:19:25.641501 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cef33b089f7fa50c6210cefc08362a97-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cef33b089f7fa50c6210cefc08362a97\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:25.641722 kubelet[2033]: I1213 14:19:25.641527 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.641722 kubelet[2033]: I1213 14:19:25.641554 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.641722 kubelet[2033]: 
I1213 14:19:25.641578 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.641722 kubelet[2033]: I1213 14:19:25.641604 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.641911 kubelet[2033]: I1213 14:19:25.641630 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:25.641911 kubelet[2033]: I1213 14:19:25.641661 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:19:25.826240 kubelet[2033]: E1213 14:19:25.826205 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:25.826433 kubelet[2033]: E1213 14:19:25.826401 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:25.826657 kubelet[2033]: E1213 14:19:25.826642 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:26.084370 sudo[2066]: pam_unix(sudo:session): session closed for user root Dec 13 14:19:26.320813 kubelet[2033]: I1213 14:19:26.320751 2033 apiserver.go:52] "Watching apiserver" Dec 13 14:19:26.340217 kubelet[2033]: I1213 14:19:26.340083 2033 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:19:26.367881 kubelet[2033]: E1213 14:19:26.367816 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:26.369200 kubelet[2033]: E1213 14:19:26.369173 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:26.377876 kubelet[2033]: E1213 14:19:26.377809 2033 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:26.378373 kubelet[2033]: E1213 14:19:26.378344 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
14:19:26.392662 kubelet[2033]: I1213 14:19:26.392609 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.392560472 podStartE2EDuration="1.392560472s" podCreationTimestamp="2024-12-13 14:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:26.392133771 +0000 UTC m=+1.139756028" watchObservedRunningTime="2024-12-13 14:19:26.392560472 +0000 UTC m=+1.140182739" Dec 13 14:19:26.400378 kubelet[2033]: I1213 14:19:26.400323 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.400273234 podStartE2EDuration="1.400273234s" podCreationTimestamp="2024-12-13 14:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:26.400186023 +0000 UTC m=+1.147808290" watchObservedRunningTime="2024-12-13 14:19:26.400273234 +0000 UTC m=+1.147895511" Dec 13 14:19:26.420159 kubelet[2033]: I1213 14:19:26.420106 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.420045494 podStartE2EDuration="2.420045494s" podCreationTimestamp="2024-12-13 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:26.409360112 +0000 UTC m=+1.156982379" watchObservedRunningTime="2024-12-13 14:19:26.420045494 +0000 UTC m=+1.167667761" Dec 13 14:19:27.369368 kubelet[2033]: E1213 14:19:27.369331 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.866015 sudo[1301]: pam_unix(sudo:session): session closed for user root Dec 13 14:19:27.867768 sshd[1298]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:27.870712 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:51928.service: Deactivated successfully. Dec 13 14:19:27.871629 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:19:27.871780 systemd[1]: session-5.scope: Consumed 6.303s CPU time. Dec 13 14:19:27.872249 systemd-logind[1192]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:19:27.873027 systemd-logind[1192]: Removed session 5. 
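The pod_startup_latency_tracker entries above compute podStartE2EDuration as the observed running time minus the pod's creation timestamp; the pull timestamps are the zero time because the static pod images were already present locally. Reproducing the kube-apiserver-localhost arithmetic in Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's parser accepts the fractional seconds even though the layout
	// omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2024-12-13 14:19:25 +0000 UTC")
	running, _ := time.Parse(layout, "2024-12-13 14:19:26.392560472 +0000 UTC")
	fmt.Println(running.Sub(created)) // 1.392560472s, the logged podStartE2EDuration
}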
Dec 13 14:19:28.523231 kubelet[2033]: E1213 14:19:28.523175 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:35.713689 kubelet[2033]: E1213 14:19:35.713618 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:35.726189 kubelet[2033]: E1213 14:19:35.726141 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:36.385252 kubelet[2033]: E1213 14:19:36.385218 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:36.385252 kubelet[2033]: E1213 14:19:36.385224 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:38.527771 kubelet[2033]: E1213 14:19:38.527731 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:39.579402 kubelet[2033]: I1213 14:19:39.579361 2033 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:19:39.579833 env[1201]: time="2024-12-13T14:19:39.579752957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:19:39.580056 kubelet[2033]: I1213 14:19:39.579982 2033 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:19:40.596706 kubelet[2033]: I1213 14:19:40.596634 2033 topology_manager.go:215] "Topology Admit Handler" podUID="b7608033-6af9-4e5d-80fe-73474694dd01" podNamespace="kube-system" podName="kube-proxy-4f4qn" Dec 13 14:19:40.604213 systemd[1]: Created slice kubepods-besteffort-podb7608033_6af9_4e5d_80fe_73474694dd01.slice. Dec 13 14:19:40.633804 kubelet[2033]: I1213 14:19:40.633739 2033 topology_manager.go:215] "Topology Admit Handler" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" podNamespace="kube-system" podName="cilium-8r4fz" Dec 13 14:19:40.639790 systemd[1]: Created slice kubepods-burstable-pod7725461c_0819_4c88_8faa_37cb7f5d1189.slice. 
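The "Updating runtime config through cri with podcidr" line pairs with containerd's "No cni config template is specified" response: the kubelet pushed the node's pod CIDR (192.168.0.0/24) down over the CRI, and containerd deferred to the CNI plugin (Cilium, being scheduled just below) to drop in the actual network config. A sketch of that RPC using the CRI v1 API, with connection setup to the containerd socket omitted:

package sketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func pushPodCIDR(ctx context.Context, rt runtimeapi.RuntimeServiceClient) error {
	_, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{
				PodCidr: "192.168.0.0/24", // the CIDR logged above
			},
		},
	})
	return err
}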
Dec 13 14:19:40.757032 kubelet[2033]: I1213 14:19:40.756971 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7608033-6af9-4e5d-80fe-73474694dd01-kube-proxy\") pod \"kube-proxy-4f4qn\" (UID: \"b7608033-6af9-4e5d-80fe-73474694dd01\") " pod="kube-system/kube-proxy-4f4qn" Dec 13 14:19:40.757032 kubelet[2033]: I1213 14:19:40.757045 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7608033-6af9-4e5d-80fe-73474694dd01-xtables-lock\") pod \"kube-proxy-4f4qn\" (UID: \"b7608033-6af9-4e5d-80fe-73474694dd01\") " pod="kube-system/kube-proxy-4f4qn" Dec 13 14:19:40.757283 kubelet[2033]: I1213 14:19:40.757070 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ckg\" (UniqueName: \"kubernetes.io/projected/b7608033-6af9-4e5d-80fe-73474694dd01-kube-api-access-r2ckg\") pod \"kube-proxy-4f4qn\" (UID: \"b7608033-6af9-4e5d-80fe-73474694dd01\") " pod="kube-system/kube-proxy-4f4qn" Dec 13 14:19:40.757423 kubelet[2033]: I1213 14:19:40.757398 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-bpf-maps\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757471 kubelet[2033]: I1213 14:19:40.757456 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cni-path\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757495 kubelet[2033]: I1213 14:19:40.757487 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-lib-modules\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757524 kubelet[2033]: I1213 14:19:40.757512 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-cgroup\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757551 kubelet[2033]: I1213 14:19:40.757537 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-hubble-tls\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757593 kubelet[2033]: I1213 14:19:40.757581 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7608033-6af9-4e5d-80fe-73474694dd01-lib-modules\") pod \"kube-proxy-4f4qn\" (UID: \"b7608033-6af9-4e5d-80fe-73474694dd01\") " pod="kube-system/kube-proxy-4f4qn" Dec 13 14:19:40.757619 kubelet[2033]: I1213 14:19:40.757605 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-hostproc\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757671 kubelet[2033]: I1213 14:19:40.757645 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-config-path\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757716 kubelet[2033]: I1213 14:19:40.757692 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-kernel\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.757761 kubelet[2033]: I1213 14:19:40.757716 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-net\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.758041 kubelet[2033]: I1213 14:19:40.758015 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49nv\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-kube-api-access-v49nv\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.758098 kubelet[2033]: I1213 14:19:40.758049 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-run\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.758098 kubelet[2033]: I1213 14:19:40.758074 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-etc-cni-netd\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.758098 kubelet[2033]: I1213 14:19:40.758097 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-xtables-lock\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.758172 kubelet[2033]: I1213 14:19:40.758121 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7725461c-0819-4c88-8faa-37cb7f5d1189-clustermesh-secrets\") pod \"cilium-8r4fz\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " pod="kube-system/cilium-8r4fz" Dec 13 14:19:40.772266 kubelet[2033]: I1213 14:19:40.772218 2033 topology_manager.go:215] "Topology Admit Handler" podUID="5ce33a37-9d7c-4cec-8981-31f0a7f39212" podNamespace="kube-system" podName="cilium-operator-5cc964979-xqgzs" Dec 13 14:19:40.778421 systemd[1]: Created slice kubepods-besteffort-pod5ce33a37_9d7c_4cec_8981_31f0a7f39212.slice. 
Dec 13 14:19:40.859462 kubelet[2033]: I1213 14:19:40.859316 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce33a37-9d7c-4cec-8981-31f0a7f39212-cilium-config-path\") pod \"cilium-operator-5cc964979-xqgzs\" (UID: \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\") " pod="kube-system/cilium-operator-5cc964979-xqgzs"
Dec 13 14:19:40.860375 kubelet[2033]: I1213 14:19:40.860357 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqdww\" (UniqueName: \"kubernetes.io/projected/5ce33a37-9d7c-4cec-8981-31f0a7f39212-kube-api-access-kqdww\") pod \"cilium-operator-5cc964979-xqgzs\" (UID: \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\") " pod="kube-system/cilium-operator-5cc964979-xqgzs"
Dec 13 14:19:40.921689 kubelet[2033]: E1213 14:19:40.921631 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:40.922505 env[1201]: time="2024-12-13T14:19:40.922444600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f4qn,Uid:b7608033-6af9-4e5d-80fe-73474694dd01,Namespace:kube-system,Attempt:0,}"
Dec 13 14:19:40.942298 kubelet[2033]: E1213 14:19:40.942260 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:40.942946 env[1201]: time="2024-12-13T14:19:40.942893700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r4fz,Uid:7725461c-0819-4c88-8faa-37cb7f5d1189,Namespace:kube-system,Attempt:0,}"
Dec 13 14:19:40.979593 env[1201]: time="2024-12-13T14:19:40.979473919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:19:40.979593 env[1201]: time="2024-12-13T14:19:40.979542843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:19:40.979593 env[1201]: time="2024-12-13T14:19:40.979557360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:19:40.979962 env[1201]: time="2024-12-13T14:19:40.979790529Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffcd4b68f58375caa010ee020867d58693a890e300dd537e711262cbb27e115e pid=2131 runtime=io.containerd.runc.v2
Dec 13 14:19:40.983104 env[1201]: time="2024-12-13T14:19:40.982062736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:19:40.983104 env[1201]: time="2024-12-13T14:19:40.982113684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:19:40.983104 env[1201]: time="2024-12-13T14:19:40.982124355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:19:40.983104 env[1201]: time="2024-12-13T14:19:40.982324730Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78 pid=2134 runtime=io.containerd.runc.v2
Dec 13 14:19:40.998951 systemd[1]: Started cri-containerd-fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78.scope.
Dec 13 14:19:41.000275 systemd[1]: Started cri-containerd-ffcd4b68f58375caa010ee020867d58693a890e300dd537e711262cbb27e115e.scope.
Dec 13 14:19:41.033913 env[1201]: time="2024-12-13T14:19:41.033815393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4f4qn,Uid:b7608033-6af9-4e5d-80fe-73474694dd01,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffcd4b68f58375caa010ee020867d58693a890e300dd537e711262cbb27e115e\""
Dec 13 14:19:41.034786 kubelet[2033]: E1213 14:19:41.034754 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:41.036804 env[1201]: time="2024-12-13T14:19:41.036751543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8r4fz,Uid:7725461c-0819-4c88-8faa-37cb7f5d1189,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\""
Dec 13 14:19:41.038491 kubelet[2033]: E1213 14:19:41.037479 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:41.038560 env[1201]: time="2024-12-13T14:19:41.037897759Z" level=info msg="CreateContainer within sandbox \"ffcd4b68f58375caa010ee020867d58693a890e300dd537e711262cbb27e115e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:19:41.041917 env[1201]: time="2024-12-13T14:19:41.041832531Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:19:41.082193 kubelet[2033]: E1213 14:19:41.082131 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:41.082946 env[1201]: time="2024-12-13T14:19:41.082886616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xqgzs,Uid:5ce33a37-9d7c-4cec-8981-31f0a7f39212,Namespace:kube-system,Attempt:0,}"
Dec 13 14:19:41.355326 env[1201]: time="2024-12-13T14:19:41.355241743Z" level=info msg="CreateContainer within sandbox \"ffcd4b68f58375caa010ee020867d58693a890e300dd537e711262cbb27e115e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ecd7ee83230e81ee00071b6aa08b0a228319ff137ef48062acd629538ec96395\""
Dec 13 14:19:41.357075 env[1201]: time="2024-12-13T14:19:41.357023371Z" level=info msg="StartContainer for \"ecd7ee83230e81ee00071b6aa08b0a228319ff137ef48062acd629538ec96395\""
Dec 13 14:19:41.362572 env[1201]: time="2024-12-13T14:19:41.362480274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:19:41.362572 env[1201]: time="2024-12-13T14:19:41.362525961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:19:41.362572 env[1201]: time="2024-12-13T14:19:41.362535639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:19:41.362869 env[1201]: time="2024-12-13T14:19:41.362736937Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae pid=2210 runtime=io.containerd.runc.v2
Dec 13 14:19:41.378982 systemd[1]: Started cri-containerd-e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae.scope.
Dec 13 14:19:41.384150 systemd[1]: Started cri-containerd-ecd7ee83230e81ee00071b6aa08b0a228319ff137ef48062acd629538ec96395.scope.
Dec 13 14:19:41.425005 env[1201]: time="2024-12-13T14:19:41.424961293Z" level=info msg="StartContainer for \"ecd7ee83230e81ee00071b6aa08b0a228319ff137ef48062acd629538ec96395\" returns successfully"
Dec 13 14:19:41.434607 env[1201]: time="2024-12-13T14:19:41.434537953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-xqgzs,Uid:5ce33a37-9d7c-4cec-8981-31f0a7f39212,Namespace:kube-system,Attempt:0,} returns sandbox id \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\""
Dec 13 14:19:41.435238 kubelet[2033]: E1213 14:19:41.435203 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:42.403150 kubelet[2033]: E1213 14:19:42.402839 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:42.516750 kubelet[2033]: I1213 14:19:42.516677 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4f4qn" podStartSLOduration=2.5165836710000002 podStartE2EDuration="2.516583671s" podCreationTimestamp="2024-12-13 14:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:42.516357206 +0000 UTC m=+17.263979473" watchObservedRunningTime="2024-12-13 14:19:42.516583671 +0000 UTC m=+17.264205938"
Dec 13 14:19:43.406103 kubelet[2033]: E1213 14:19:43.406016 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:51.129950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1060506816.mount: Deactivated successfully.
Dec 13 14:19:57.928312 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:44466.service.
Dec 13 14:19:58.047902 env[1201]: time="2024-12-13T14:19:58.047813260Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:19:58.048419 sshd[2402]: Accepted publickey for core from 10.0.0.1 port 44466 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:19:58.050027 sshd[2402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:19:58.050900 env[1201]: time="2024-12-13T14:19:58.050837968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:19:58.054643 env[1201]: time="2024-12-13T14:19:58.054594893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:19:58.054951 env[1201]: time="2024-12-13T14:19:58.054776469Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:19:58.056300 env[1201]: time="2024-12-13T14:19:58.056249347Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:19:58.057379 systemd[1]: Started session-6.scope.
Dec 13 14:19:58.057676 env[1201]: time="2024-12-13T14:19:58.057379924Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:19:58.058039 systemd-logind[1192]: New session 6 of user core.
Dec 13 14:19:58.094205 env[1201]: time="2024-12-13T14:19:58.094133540Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\""
Dec 13 14:19:58.095049 env[1201]: time="2024-12-13T14:19:58.095009801Z" level=info msg="StartContainer for \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\""
Dec 13 14:19:58.119273 systemd[1]: Started cri-containerd-3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6.scope.
Dec 13 14:19:58.183137 systemd[1]: cri-containerd-3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6.scope: Deactivated successfully.
Dec 13 14:19:58.209879 sshd[2402]: pam_unix(sshd:session): session closed for user core
Dec 13 14:19:58.210669 env[1201]: time="2024-12-13T14:19:58.210606703Z" level=info msg="StartContainer for \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\" returns successfully"
Dec 13 14:19:58.213374 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:44466.service: Deactivated successfully.
Dec 13 14:19:58.214247 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:19:58.215193 systemd-logind[1192]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:19:58.216170 systemd-logind[1192]: Removed session 6.
Dec 13 14:19:58.436811 kubelet[2033]: E1213 14:19:58.436706 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:59.084116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6-rootfs.mount: Deactivated successfully.
Dec 13 14:19:59.092923 env[1201]: time="2024-12-13T14:19:59.092831789Z" level=info msg="shim disconnected" id=3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6
Dec 13 14:19:59.092923 env[1201]: time="2024-12-13T14:19:59.092919747Z" level=warning msg="cleaning up after shim disconnected" id=3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6 namespace=k8s.io
Dec 13 14:19:59.092923 env[1201]: time="2024-12-13T14:19:59.092932972Z" level=info msg="cleaning up dead shim"
Dec 13 14:19:59.099636 env[1201]: time="2024-12-13T14:19:59.099483620Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2466 runtime=io.containerd.runc.v2\n"
Dec 13 14:19:59.440304 kubelet[2033]: E1213 14:19:59.440169 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:19:59.443380 env[1201]: time="2024-12-13T14:19:59.443325949Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:19:59.902834 env[1201]: time="2024-12-13T14:19:59.902774234Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\""
Dec 13 14:19:59.903356 env[1201]: time="2024-12-13T14:19:59.903311558Z" level=info msg="StartContainer for \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\""
Dec 13 14:19:59.922034 systemd[1]: Started cri-containerd-bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f.scope.
Dec 13 14:19:59.949595 env[1201]: time="2024-12-13T14:19:59.948255027Z" level=info msg="StartContainer for \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\" returns successfully"
Dec 13 14:19:59.958417 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:19:59.958697 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:19:59.958963 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:19:59.960596 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:19:59.962685 systemd[1]: cri-containerd-bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f.scope: Deactivated successfully.
Dec 13 14:19:59.970604 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:19:59.985826 env[1201]: time="2024-12-13T14:19:59.985766207Z" level=info msg="shim disconnected" id=bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f
Dec 13 14:19:59.985826 env[1201]: time="2024-12-13T14:19:59.985814128Z" level=warning msg="cleaning up after shim disconnected" id=bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f namespace=k8s.io
Dec 13 14:19:59.985826 env[1201]: time="2024-12-13T14:19:59.985823617Z" level=info msg="cleaning up dead shim"
Dec 13 14:19:59.993635 env[1201]: time="2024-12-13T14:19:59.993559694Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2528 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:19:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Dec 13 14:20:00.084539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f-rootfs.mount: Deactivated successfully.
Dec 13 14:20:00.444033 kubelet[2033]: E1213 14:20:00.443994 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:00.446320 env[1201]: time="2024-12-13T14:20:00.446275726Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:20:00.563157 env[1201]: time="2024-12-13T14:20:00.563094405Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\""
Dec 13 14:20:00.563870 env[1201]: time="2024-12-13T14:20:00.563638753Z" level=info msg="StartContainer for \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\""
Dec 13 14:20:00.580924 systemd[1]: Started cri-containerd-d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238.scope.
Dec 13 14:20:00.607485 env[1201]: time="2024-12-13T14:20:00.607420634Z" level=info msg="StartContainer for \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\" returns successfully"
Dec 13 14:20:00.607959 systemd[1]: cri-containerd-d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238.scope: Deactivated successfully.
Dec 13 14:20:00.635042 env[1201]: time="2024-12-13T14:20:00.634983685Z" level=info msg="shim disconnected" id=d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238
Dec 13 14:20:00.635042 env[1201]: time="2024-12-13T14:20:00.635037889Z" level=warning msg="cleaning up after shim disconnected" id=d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238 namespace=k8s.io
Dec 13 14:20:00.635042 env[1201]: time="2024-12-13T14:20:00.635049230Z" level=info msg="cleaning up dead shim"
Dec 13 14:20:00.641167 env[1201]: time="2024-12-13T14:20:00.641104380Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:20:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2585 runtime=io.containerd.runc.v2\n"
Dec 13 14:20:01.083947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238-rootfs.mount: Deactivated successfully.
Dec 13 14:20:01.447187 kubelet[2033]: E1213 14:20:01.446958 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:01.449621 env[1201]: time="2024-12-13T14:20:01.449574768Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:20:01.466065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239280008.mount: Deactivated successfully.
Dec 13 14:20:01.467629 env[1201]: time="2024-12-13T14:20:01.467585334Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\""
Dec 13 14:20:01.468093 env[1201]: time="2024-12-13T14:20:01.468064979Z" level=info msg="StartContainer for \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\""
Dec 13 14:20:01.483004 systemd[1]: Started cri-containerd-004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37.scope.
Dec 13 14:20:01.502885 systemd[1]: cri-containerd-004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37.scope: Deactivated successfully.
Dec 13 14:20:01.504177 env[1201]: time="2024-12-13T14:20:01.504125136Z" level=info msg="StartContainer for \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\" returns successfully"
Dec 13 14:20:01.523411 env[1201]: time="2024-12-13T14:20:01.523358793Z" level=info msg="shim disconnected" id=004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37
Dec 13 14:20:01.523611 env[1201]: time="2024-12-13T14:20:01.523410943Z" level=warning msg="cleaning up after shim disconnected" id=004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37 namespace=k8s.io
Dec 13 14:20:01.523611 env[1201]: time="2024-12-13T14:20:01.523425891Z" level=info msg="cleaning up dead shim"
Dec 13 14:20:01.529861 env[1201]: time="2024-12-13T14:20:01.529806158Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:20:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2640 runtime=io.containerd.runc.v2\n"
Dec 13 14:20:02.084389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37-rootfs.mount: Deactivated successfully.
Dec 13 14:20:02.451703 kubelet[2033]: E1213 14:20:02.450797 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:02.455454 env[1201]: time="2024-12-13T14:20:02.455400831Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:20:02.582752 env[1201]: time="2024-12-13T14:20:02.582665639Z" level=info msg="CreateContainer within sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\""
Dec 13 14:20:02.583411 env[1201]: time="2024-12-13T14:20:02.583358018Z" level=info msg="StartContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\""
Dec 13 14:20:02.608468 systemd[1]: Started cri-containerd-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9.scope.
Dec 13 14:20:02.654519 env[1201]: time="2024-12-13T14:20:02.654435945Z" level=info msg="StartContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" returns successfully"
Dec 13 14:20:02.724549 env[1201]: time="2024-12-13T14:20:02.724100210Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:02.727828 env[1201]: time="2024-12-13T14:20:02.727790609Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:02.729890 env[1201]: time="2024-12-13T14:20:02.729483615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:20:02.730089 env[1201]: time="2024-12-13T14:20:02.729946406Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:20:02.731559 env[1201]: time="2024-12-13T14:20:02.731523290Z" level=info msg="CreateContainer within sandbox \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:20:02.746256 env[1201]: time="2024-12-13T14:20:02.746203615Z" level=info msg="CreateContainer within sandbox \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\""
Dec 13 14:20:02.747652 env[1201]: time="2024-12-13T14:20:02.747572242Z" level=info msg="StartContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\""
Dec 13 14:20:02.755880 kubelet[2033]: I1213 14:20:02.755517 2033 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 14:20:02.780306 systemd[1]: Started cri-containerd-6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa.scope.
Dec 13 14:20:02.784333 kubelet[2033]: I1213 14:20:02.780804 2033 topology_manager.go:215] "Topology Admit Handler" podUID="559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41" podNamespace="kube-system" podName="coredns-76f75df574-n4t67"
Dec 13 14:20:02.784333 kubelet[2033]: I1213 14:20:02.782650 2033 topology_manager.go:215] "Topology Admit Handler" podUID="eef376bf-7369-47b8-8057-a7b5ff7a71fc" podNamespace="kube-system" podName="coredns-76f75df574-l2slc"
Dec 13 14:20:02.792449 systemd[1]: Created slice kubepods-burstable-pod559b439b_9e1f_4b91_9f8f_e8d3ae8f3f41.slice.
Dec 13 14:20:02.805662 systemd[1]: Created slice kubepods-burstable-podeef376bf_7369_47b8_8057_a7b5ff7a71fc.slice.
Dec 13 14:20:02.825485 env[1201]: time="2024-12-13T14:20:02.825427019Z" level=info msg="StartContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" returns successfully"
Dec 13 14:20:02.827975 kubelet[2033]: I1213 14:20:02.827698 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41-config-volume\") pod \"coredns-76f75df574-n4t67\" (UID: \"559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41\") " pod="kube-system/coredns-76f75df574-n4t67"
Dec 13 14:20:02.827975 kubelet[2033]: I1213 14:20:02.827755 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknm7\" (UniqueName: \"kubernetes.io/projected/559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41-kube-api-access-wknm7\") pod \"coredns-76f75df574-n4t67\" (UID: \"559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41\") " pod="kube-system/coredns-76f75df574-n4t67"
Dec 13 14:20:02.827975 kubelet[2033]: I1213 14:20:02.827780 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8m6f\" (UniqueName: \"kubernetes.io/projected/eef376bf-7369-47b8-8057-a7b5ff7a71fc-kube-api-access-s8m6f\") pod \"coredns-76f75df574-l2slc\" (UID: \"eef376bf-7369-47b8-8057-a7b5ff7a71fc\") " pod="kube-system/coredns-76f75df574-l2slc"
Dec 13 14:20:02.827975 kubelet[2033]: I1213 14:20:02.827799 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eef376bf-7369-47b8-8057-a7b5ff7a71fc-config-volume\") pod \"coredns-76f75df574-l2slc\" (UID: \"eef376bf-7369-47b8-8057-a7b5ff7a71fc\") " pod="kube-system/coredns-76f75df574-l2slc"
Dec 13 14:20:03.088428 systemd[1]: run-containerd-runc-k8s.io-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9-runc.eBMF3i.mount: Deactivated successfully.
Dec 13 14:20:03.103284 kubelet[2033]: E1213 14:20:03.103240 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:03.104221 env[1201]: time="2024-12-13T14:20:03.104161130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4t67,Uid:559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41,Namespace:kube-system,Attempt:0,}"
Dec 13 14:20:03.110158 kubelet[2033]: E1213 14:20:03.110111 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:03.110945 env[1201]: time="2024-12-13T14:20:03.110904412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2slc,Uid:eef376bf-7369-47b8-8057-a7b5ff7a71fc,Namespace:kube-system,Attempt:0,}"
Dec 13 14:20:03.215676 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:35526.service.
Dec 13 14:20:03.283222 sshd[2840]: Accepted publickey for core from 10.0.0.1 port 35526 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:03.284424 sshd[2840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:03.290584 systemd-logind[1192]: New session 7 of user core.
Dec 13 14:20:03.291335 systemd[1]: Started session-7.scope.
Dec 13 14:20:03.457194 kubelet[2033]: E1213 14:20:03.457051 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:03.457863 kubelet[2033]: E1213 14:20:03.457788 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:03.503329 kubelet[2033]: I1213 14:20:03.502174 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8r4fz" podStartSLOduration=6.486143083 podStartE2EDuration="23.502137523s" podCreationTimestamp="2024-12-13 14:19:40 +0000 UTC" firstStartedPulling="2024-12-13 14:19:41.039289377 +0000 UTC m=+15.786911644" lastFinishedPulling="2024-12-13 14:19:58.055283827 +0000 UTC m=+32.802906084" observedRunningTime="2024-12-13 14:20:03.501584319 +0000 UTC m=+38.249206586" watchObservedRunningTime="2024-12-13 14:20:03.502137523 +0000 UTC m=+38.249759790"
Dec 13 14:20:03.509734 sshd[2840]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:03.512899 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:35526.service: Deactivated successfully.
Dec 13 14:20:03.514049 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:20:03.514894 systemd-logind[1192]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:20:03.515876 systemd-logind[1192]: Removed session 7.
Dec 13 14:20:04.460234 kubelet[2033]: E1213 14:20:04.460194 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:04.460690 kubelet[2033]: E1213 14:20:04.460659 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:05.462450 kubelet[2033]: E1213 14:20:05.462415 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:06.702525 systemd-networkd[1026]: cilium_host: Link UP
Dec 13 14:20:06.702639 systemd-networkd[1026]: cilium_net: Link UP
Dec 13 14:20:06.719286 systemd-networkd[1026]: cilium_net: Gained carrier
Dec 13 14:20:06.720454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:20:06.720510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:20:06.720590 systemd-networkd[1026]: cilium_host: Gained carrier
Dec 13 14:20:06.789533 systemd-networkd[1026]: cilium_vxlan: Link UP
Dec 13 14:20:06.789539 systemd-networkd[1026]: cilium_vxlan: Gained carrier
Dec 13 14:20:06.815977 systemd-networkd[1026]: cilium_host: Gained IPv6LL
Dec 13 14:20:06.992880 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:20:07.622516 systemd-networkd[1026]: lxc_health: Link UP
Dec 13 14:20:07.649883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:20:07.656334 systemd-networkd[1026]: lxc_health: Gained carrier
Dec 13 14:20:07.672056 systemd-networkd[1026]: cilium_net: Gained IPv6LL
Dec 13 14:20:07.884770 systemd-networkd[1026]: lxc7aed4ab23b5d: Link UP
Dec 13 14:20:07.925895 kernel: eth0: renamed from tmpa18b2
Dec 13 14:20:07.940424 systemd-networkd[1026]: lxcb293e389a467: Link UP
Dec 13 14:20:07.946900 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:20:07.946982 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7aed4ab23b5d: link becomes ready
Dec 13 14:20:07.947336 systemd-networkd[1026]: lxc7aed4ab23b5d: Gained carrier
Dec 13 14:20:07.948893 kernel: eth0: renamed from tmpd7b79
Dec 13 14:20:07.957954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb293e389a467: link becomes ready
Dec 13 14:20:07.958042 systemd-networkd[1026]: lxcb293e389a467: Gained carrier
Dec 13 14:20:08.089781 kubelet[2033]: E1213 14:20:08.089715 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:08.517353 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:38186.service.
Dec 13 14:20:08.569181 systemd-networkd[1026]: cilium_vxlan: Gained IPv6LL
Dec 13 14:20:08.615862 sshd[3250]: Accepted publickey for core from 10.0.0.1 port 38186 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:08.619116 sshd[3250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:08.625831 systemd-logind[1192]: New session 8 of user core.
Dec 13 14:20:08.626912 systemd[1]: Started session-8.scope.
Dec 13 14:20:08.821877 sshd[3250]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:08.825075 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:38186.service: Deactivated successfully.
Dec 13 14:20:08.825773 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 14:20:08.826443 systemd-logind[1192]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:20:08.827285 systemd-logind[1192]: Removed session 8.
Dec 13 14:20:08.945914 kubelet[2033]: E1213 14:20:08.945866 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:08.978640 kubelet[2033]: I1213 14:20:08.978597 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-xqgzs" podStartSLOduration=7.685109662 podStartE2EDuration="28.978550404s" podCreationTimestamp="2024-12-13 14:19:40 +0000 UTC" firstStartedPulling="2024-12-13 14:19:41.436714722 +0000 UTC m=+16.184336989" lastFinishedPulling="2024-12-13 14:20:02.730155464 +0000 UTC m=+37.477777731" observedRunningTime="2024-12-13 14:20:03.513713539 +0000 UTC m=+38.261335806" watchObservedRunningTime="2024-12-13 14:20:08.978550404 +0000 UTC m=+43.726172681"
Dec 13 14:20:09.080115 systemd-networkd[1026]: lxc_health: Gained IPv6LL
Dec 13 14:20:09.471281 kubelet[2033]: E1213 14:20:09.471135 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:09.720125 systemd-networkd[1026]: lxcb293e389a467: Gained IPv6LL
Dec 13 14:20:09.912033 systemd-networkd[1026]: lxc7aed4ab23b5d: Gained IPv6LL
Dec 13 14:20:11.938650 env[1201]: time="2024-12-13T14:20:11.938570225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:20:11.938650 env[1201]: time="2024-12-13T14:20:11.938616993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:20:11.938650 env[1201]: time="2024-12-13T14:20:11.938630389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:20:11.939250 env[1201]: time="2024-12-13T14:20:11.938802396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18b29812ba9a345442ca03c5c31bd5a0733aa8651490e805af480b66d581909 pid=3295 runtime=io.containerd.runc.v2
Dec 13 14:20:11.942963 env[1201]: time="2024-12-13T14:20:11.941399636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:20:11.942963 env[1201]: time="2024-12-13T14:20:11.941465311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:20:11.942963 env[1201]: time="2024-12-13T14:20:11.941479518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:20:11.942963 env[1201]: time="2024-12-13T14:20:11.941736566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00 pid=3305 runtime=io.containerd.runc.v2
Dec 13 14:20:11.961864 systemd[1]: run-containerd-runc-k8s.io-d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00-runc.Zo11j8.mount: Deactivated successfully.
Dec 13 14:20:11.965464 systemd[1]: Started cri-containerd-a18b29812ba9a345442ca03c5c31bd5a0733aa8651490e805af480b66d581909.scope.
Dec 13 14:20:11.966515 systemd[1]: Started cri-containerd-d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00.scope.
Dec 13 14:20:11.979665 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:20:11.981039 systemd-resolved[1138]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 14:20:12.007842 env[1201]: time="2024-12-13T14:20:12.007734659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-n4t67,Uid:559b439b-9e1f-4b91-9f8f-e8d3ae8f3f41,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00\""
Dec 13 14:20:12.010162 env[1201]: time="2024-12-13T14:20:12.010102492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2slc,Uid:eef376bf-7369-47b8-8057-a7b5ff7a71fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a18b29812ba9a345442ca03c5c31bd5a0733aa8651490e805af480b66d581909\""
Dec 13 14:20:12.011735 kubelet[2033]: E1213 14:20:12.010920 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:12.014209 kubelet[2033]: E1213 14:20:12.012942 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:12.014291 env[1201]: time="2024-12-13T14:20:12.014062102Z" level=info msg="CreateContainer within sandbox \"d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:20:12.016153 env[1201]: time="2024-12-13T14:20:12.016124153Z" level=info msg="CreateContainer within sandbox \"a18b29812ba9a345442ca03c5c31bd5a0733aa8651490e805af480b66d581909\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 14:20:12.054592 env[1201]: time="2024-12-13T14:20:12.054531121Z" level=info msg="CreateContainer within sandbox \"d7b798e578b8b5fe88e39e7b6f777989d3ec3cefc4244e73f937cd4fe61c2b00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25a73adc596b8fc5b1085d77a03dc712e471c865935683dc1d76c0ee41059697\""
Dec 13 14:20:12.055132 env[1201]: time="2024-12-13T14:20:12.055098720Z" level=info msg="StartContainer for \"25a73adc596b8fc5b1085d77a03dc712e471c865935683dc1d76c0ee41059697\""
Dec 13 14:20:12.057012 env[1201]: time="2024-12-13T14:20:12.056963798Z" level=info msg="CreateContainer within sandbox \"a18b29812ba9a345442ca03c5c31bd5a0733aa8651490e805af480b66d581909\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9afbf0ecfbe3312cb4e0cec0bcb41db5b15cd86566e87fc7d3ef0f8d544e7f16\""
Dec 13 14:20:12.057670 env[1201]: time="2024-12-13T14:20:12.057510788Z" level=info msg="StartContainer for \"9afbf0ecfbe3312cb4e0cec0bcb41db5b15cd86566e87fc7d3ef0f8d544e7f16\""
Dec 13 14:20:12.074825 systemd[1]: Started cri-containerd-25a73adc596b8fc5b1085d77a03dc712e471c865935683dc1d76c0ee41059697.scope.
Dec 13 14:20:12.080313 systemd[1]: Started cri-containerd-9afbf0ecfbe3312cb4e0cec0bcb41db5b15cd86566e87fc7d3ef0f8d544e7f16.scope.
Dec 13 14:20:12.146544 env[1201]: time="2024-12-13T14:20:12.141231298Z" level=info msg="StartContainer for \"9afbf0ecfbe3312cb4e0cec0bcb41db5b15cd86566e87fc7d3ef0f8d544e7f16\" returns successfully"
Dec 13 14:20:12.152667 env[1201]: time="2024-12-13T14:20:12.152585730Z" level=info msg="StartContainer for \"25a73adc596b8fc5b1085d77a03dc712e471c865935683dc1d76c0ee41059697\" returns successfully"
Dec 13 14:20:12.477911 kubelet[2033]: E1213 14:20:12.477875 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:12.478902 kubelet[2033]: E1213 14:20:12.478881 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:12.626174 kubelet[2033]: I1213 14:20:12.626120 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l2slc" podStartSLOduration=32.626064762 podStartE2EDuration="32.626064762s" podCreationTimestamp="2024-12-13 14:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:12.54391622 +0000 UTC m=+47.291538488" watchObservedRunningTime="2024-12-13 14:20:12.626064762 +0000 UTC m=+47.373687029"
Dec 13 14:20:12.842191 kubelet[2033]: I1213 14:20:12.842151 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-n4t67" podStartSLOduration=32.842096908 podStartE2EDuration="32.842096908s" podCreationTimestamp="2024-12-13 14:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:12.626603657 +0000 UTC m=+47.374225924" watchObservedRunningTime="2024-12-13 14:20:12.842096908 +0000 UTC m=+47.589719175"
Dec 13 14:20:13.480762 kubelet[2033]: E1213 14:20:13.480734 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:13.481334 kubelet[2033]: E1213 14:20:13.480803 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:13.826503 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:38202.service.
Dec 13 14:20:13.868099 sshd[3456]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:13.869540 sshd[3456]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:13.873471 systemd-logind[1192]: New session 9 of user core.
Dec 13 14:20:13.874471 systemd[1]: Started session-9.scope.
Dec 13 14:20:14.012908 sshd[3456]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:14.015752 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:38202.service: Deactivated successfully.
Dec 13 14:20:14.016646 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:20:14.017355 systemd-logind[1192]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:20:14.018245 systemd-logind[1192]: Removed session 9.
Dec 13 14:20:14.482380 kubelet[2033]: E1213 14:20:14.482346 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:14.482380 kubelet[2033]: E1213 14:20:14.482392 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:19.017580 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:35340.service.
Dec 13 14:20:19.058742 sshd[3471]: Accepted publickey for core from 10.0.0.1 port 35340 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:19.060262 sshd[3471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:19.065520 systemd-logind[1192]: New session 10 of user core.
Dec 13 14:20:19.066562 systemd[1]: Started session-10.scope.
Dec 13 14:20:19.193552 sshd[3471]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:19.196676 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:35340.service: Deactivated successfully.
Dec 13 14:20:19.197459 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:20:19.198340 systemd-logind[1192]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:20:19.199123 systemd-logind[1192]: Removed session 10.
Dec 13 14:20:24.197719 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:35356.service.
Dec 13 14:20:24.237901 sshd[3485]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:24.239281 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:24.243227 systemd-logind[1192]: New session 11 of user core.
Dec 13 14:20:24.244134 systemd[1]: Started session-11.scope.
Dec 13 14:20:24.357762 sshd[3485]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:24.361435 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:35356.service: Deactivated successfully.
Dec 13 14:20:24.362134 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:20:24.362753 systemd-logind[1192]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:20:24.364244 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:35360.service.
Dec 13 14:20:24.365167 systemd-logind[1192]: Removed session 11.
Dec 13 14:20:24.401184 sshd[3499]: Accepted publickey for core from 10.0.0.1 port 35360 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:24.402252 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:24.405471 systemd-logind[1192]: New session 12 of user core.
Dec 13 14:20:24.406296 systemd[1]: Started session-12.scope.
Dec 13 14:20:24.568025 sshd[3499]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:24.570393 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:35360.service: Deactivated successfully.
Dec 13 14:20:24.570878 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:20:24.573617 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:35362.service.
Dec 13 14:20:24.574812 systemd-logind[1192]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:20:24.576231 systemd-logind[1192]: Removed session 12.
Dec 13 14:20:24.614919 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 35362 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:24.616234 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:24.620797 systemd-logind[1192]: New session 13 of user core.
Dec 13 14:20:24.621737 systemd[1]: Started session-13.scope.
Dec 13 14:20:24.736665 sshd[3510]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:24.739443 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:35362.service: Deactivated successfully.
Dec 13 14:20:24.740118 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:20:24.741018 systemd-logind[1192]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:20:24.741813 systemd-logind[1192]: Removed session 13.
Dec 13 14:20:29.741099 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:52216.service.
Dec 13 14:20:29.779454 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 52216 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:29.780923 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:29.784645 systemd-logind[1192]: New session 14 of user core.
Dec 13 14:20:29.785436 systemd[1]: Started session-14.scope.
Dec 13 14:20:29.901959 sshd[3525]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:29.904127 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:52216.service: Deactivated successfully.
Dec 13 14:20:29.904809 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:20:29.905427 systemd-logind[1192]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:20:29.906181 systemd-logind[1192]: Removed session 14.
Dec 13 14:20:34.907131 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:52254.service.
Dec 13 14:20:34.947584 sshd[3538]: Accepted publickey for core from 10.0.0.1 port 52254 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:34.949015 sshd[3538]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:34.953453 systemd-logind[1192]: New session 15 of user core.
Dec 13 14:20:34.954290 systemd[1]: Started session-15.scope.
Dec 13 14:20:35.073894 sshd[3538]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:35.075936 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:52254.service: Deactivated successfully.
Dec 13 14:20:35.076793 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:20:35.077441 systemd-logind[1192]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:20:35.078265 systemd-logind[1192]: Removed session 15.
Dec 13 14:20:40.077624 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:46048.service.
Dec 13 14:20:40.115980 sshd[3551]: Accepted publickey for core from 10.0.0.1 port 46048 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:40.117276 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:40.121140 systemd-logind[1192]: New session 16 of user core.
Dec 13 14:20:40.122183 systemd[1]: Started session-16.scope.
Dec 13 14:20:40.246790 sshd[3551]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:40.250781 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:46054.service.
Dec 13 14:20:40.251491 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:46048.service: Deactivated successfully.
Dec 13 14:20:40.252491 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:20:40.261506 systemd-logind[1192]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:20:40.265241 systemd-logind[1192]: Removed session 16.
Dec 13 14:20:40.295304 sshd[3563]: Accepted publickey for core from 10.0.0.1 port 46054 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:40.296556 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:40.300068 systemd-logind[1192]: New session 17 of user core.
Dec 13 14:20:40.300811 systemd[1]: Started session-17.scope.
Dec 13 14:20:40.611421 sshd[3563]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:40.614553 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:46054.service: Deactivated successfully.
Dec 13 14:20:40.615115 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:20:40.615639 systemd-logind[1192]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:20:40.616707 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:46060.service.
Dec 13 14:20:40.617807 systemd-logind[1192]: Removed session 17.
Dec 13 14:20:40.659625 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 46060 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:40.661172 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:40.665286 systemd-logind[1192]: New session 18 of user core.
Dec 13 14:20:40.666100 systemd[1]: Started session-18.scope.
Dec 13 14:20:41.352591 kubelet[2033]: E1213 14:20:41.352546 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:42.119779 sshd[3575]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:42.122795 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:46060.service: Deactivated successfully.
Dec 13 14:20:42.123424 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:20:42.124100 systemd-logind[1192]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:20:42.125071 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:46086.service.
Dec 13 14:20:42.126262 systemd-logind[1192]: Removed session 18.
Dec 13 14:20:42.169590 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 46086 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:42.171081 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:42.177617 systemd[1]: Started session-19.scope.
Dec 13 14:20:42.177898 systemd-logind[1192]: New session 19 of user core.
Dec 13 14:20:42.430321 sshd[3596]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:42.433783 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:46086.service: Deactivated successfully.
Dec 13 14:20:42.434456 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:20:42.435859 systemd-logind[1192]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:20:42.437009 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:46094.service.
Dec 13 14:20:42.438615 systemd-logind[1192]: Removed session 19.
Dec 13 14:20:42.473275 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 46094 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:42.474739 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:42.478735 systemd-logind[1192]: New session 20 of user core.
Dec 13 14:20:42.479916 systemd[1]: Started session-20.scope.
Dec 13 14:20:42.596458 sshd[3609]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:42.599307 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:46094.service: Deactivated successfully.
Dec 13 14:20:42.600206 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:20:42.600951 systemd-logind[1192]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:20:42.601807 systemd-logind[1192]: Removed session 20.
Dec 13 14:20:47.601552 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:46118.service.
Dec 13 14:20:47.638982 sshd[3622]: Accepted publickey for core from 10.0.0.1 port 46118 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:47.640345 sshd[3622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:47.643952 systemd-logind[1192]: New session 21 of user core.
Dec 13 14:20:47.644711 systemd[1]: Started session-21.scope.
Dec 13 14:20:47.757466 sshd[3622]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:47.760172 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:46118.service: Deactivated successfully.
Dec 13 14:20:47.760921 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:20:47.761435 systemd-logind[1192]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:20:47.762278 systemd-logind[1192]: Removed session 21.
Dec 13 14:20:50.352707 kubelet[2033]: E1213 14:20:50.352660 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:51.352443 kubelet[2033]: E1213 14:20:51.352398 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:20:52.762526 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:56070.service.
Dec 13 14:20:52.801066 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 56070 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:52.802495 sshd[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:52.806156 systemd-logind[1192]: New session 22 of user core.
Dec 13 14:20:52.807343 systemd[1]: Started session-22.scope.
Dec 13 14:20:52.929885 sshd[3639]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:52.932206 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:56070.service: Deactivated successfully.
Dec 13 14:20:52.933163 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:20:52.933701 systemd-logind[1192]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:20:52.934533 systemd-logind[1192]: Removed session 22.
Dec 13 14:20:57.935591 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:56094.service.
Dec 13 14:20:57.971681 sshd[3652]: Accepted publickey for core from 10.0.0.1 port 56094 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:20:57.972595 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:20:57.976466 systemd-logind[1192]: New session 23 of user core.
Dec 13 14:20:57.977582 systemd[1]: Started session-23.scope.
Dec 13 14:20:58.092280 sshd[3652]: pam_unix(sshd:session): session closed for user core
Dec 13 14:20:58.095547 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:56094.service: Deactivated successfully.
Dec 13 14:20:58.096284 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:20:58.097072 systemd-logind[1192]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:20:58.097904 systemd-logind[1192]: Removed session 23.
Dec 13 14:21:03.098291 systemd[1]: Started sshd@23-10.0.0.27:22-10.0.0.1:46746.service.
Dec 13 14:21:03.136497 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:21:03.137726 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:21:03.141408 systemd-logind[1192]: New session 24 of user core.
Dec 13 14:21:03.142446 systemd[1]: Started session-24.scope.
Dec 13 14:21:03.248269 sshd[3665]: pam_unix(sshd:session): session closed for user core
Dec 13 14:21:03.252176 systemd[1]: sshd@23-10.0.0.27:22-10.0.0.1:46746.service: Deactivated successfully.
Dec 13 14:21:03.252828 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:21:03.253523 systemd-logind[1192]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:21:03.254735 systemd[1]: Started sshd@24-10.0.0.27:22-10.0.0.1:46754.service.
Dec 13 14:21:03.255873 systemd-logind[1192]: Removed session 24.
Dec 13 14:21:03.294845 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 46754 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:21:03.296169 sshd[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:21:03.300295 systemd-logind[1192]: New session 25 of user core.
Dec 13 14:21:03.301332 systemd[1]: Started session-25.scope.
Dec 13 14:21:04.789980 env[1201]: time="2024-12-13T14:21:04.789837034Z" level=info msg="StopContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" with timeout 30 (s)"
Dec 13 14:21:04.790769 env[1201]: time="2024-12-13T14:21:04.790667891Z" level=info msg="Stop container \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" with signal terminated"
Dec 13 14:21:04.804583 systemd[1]: run-containerd-runc-k8s.io-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9-runc.Unt3mS.mount: Deactivated successfully.
Dec 13 14:21:04.814178 systemd[1]: cri-containerd-6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa.scope: Deactivated successfully.
Dec 13 14:21:04.824813 env[1201]: time="2024-12-13T14:21:04.824702368Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:21:04.832207 env[1201]: time="2024-12-13T14:21:04.832144366Z" level=info msg="StopContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" with timeout 2 (s)"
Dec 13 14:21:04.832554 env[1201]: time="2024-12-13T14:21:04.832444144Z" level=info msg="Stop container \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" with signal terminated"
Dec 13 14:21:04.837683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa-rootfs.mount: Deactivated successfully.
Dec 13 14:21:04.842035 systemd-networkd[1026]: lxc_health: Link DOWN
Dec 13 14:21:04.842040 systemd-networkd[1026]: lxc_health: Lost carrier
Dec 13 14:21:04.848938 env[1201]: time="2024-12-13T14:21:04.848819871Z" level=info msg="shim disconnected" id=6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa
Dec 13 14:21:04.848938 env[1201]: time="2024-12-13T14:21:04.848927415Z" level=warning msg="cleaning up after shim disconnected" id=6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa namespace=k8s.io
Dec 13 14:21:04.849121 env[1201]: time="2024-12-13T14:21:04.848947072Z" level=info msg="cleaning up dead shim"
Dec 13 14:21:04.859797 env[1201]: time="2024-12-13T14:21:04.859636170Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3731 runtime=io.containerd.runc.v2\n"
Dec 13 14:21:04.865954 env[1201]: time="2024-12-13T14:21:04.865908580Z" level=info msg="StopContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" returns successfully"
Dec 13 14:21:04.866740 env[1201]: time="2024-12-13T14:21:04.866698207Z" level=info msg="StopPodSandbox for \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\""
Dec 13 14:21:04.866822 env[1201]: time="2024-12-13T14:21:04.866788990Z" level=info msg="Container to stop \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:21:04.868681 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae-shm.mount: Deactivated successfully.
Dec 13 14:21:04.876898 systemd[1]: cri-containerd-e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae.scope: Deactivated successfully.
Dec 13 14:21:04.889153 systemd[1]: cri-containerd-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9.scope: Deactivated successfully.
Dec 13 14:21:04.889422 systemd[1]: cri-containerd-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9.scope: Consumed 7.022s CPU time.
Dec 13 14:21:04.897147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae-rootfs.mount: Deactivated successfully.
Dec 13 14:21:04.980757 env[1201]: time="2024-12-13T14:21:04.980688739Z" level=info msg="shim disconnected" id=7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9 Dec 13 14:21:04.980757 env[1201]: time="2024-12-13T14:21:04.980747380Z" level=warning msg="cleaning up after shim disconnected" id=7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9 namespace=k8s.io Dec 13 14:21:04.980757 env[1201]: time="2024-12-13T14:21:04.980759552Z" level=info msg="cleaning up dead shim" Dec 13 14:21:04.980757 env[1201]: time="2024-12-13T14:21:04.980749574Z" level=info msg="shim disconnected" id=e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae Dec 13 14:21:04.981104 env[1201]: time="2024-12-13T14:21:04.980777096Z" level=warning msg="cleaning up after shim disconnected" id=e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae namespace=k8s.io Dec 13 14:21:04.981104 env[1201]: time="2024-12-13T14:21:04.980786043Z" level=info msg="cleaning up dead shim" Dec 13 14:21:04.987599 env[1201]: time="2024-12-13T14:21:04.987544424Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3778 runtime=io.containerd.runc.v2\n" Dec 13 14:21:04.987940 env[1201]: time="2024-12-13T14:21:04.987907393Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3779 runtime=io.containerd.runc.v2\n" Dec 13 14:21:04.988231 env[1201]: time="2024-12-13T14:21:04.988198876Z" level=info msg="TearDown network for sandbox \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\" successfully" Dec 13 14:21:04.988290 env[1201]: time="2024-12-13T14:21:04.988230806Z" level=info msg="StopPodSandbox for \"e41bac58b109bdb838f921e09c1f1b0627902be565b65e360ae46d8346c62cae\" returns successfully" Dec 13 14:21:05.012043 env[1201]: time="2024-12-13T14:21:05.011997805Z" level=info msg="StopContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" returns successfully" Dec 13 14:21:05.012452 env[1201]: time="2024-12-13T14:21:05.012425276Z" level=info msg="StopPodSandbox for \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\"" Dec 13 14:21:05.012532 env[1201]: time="2024-12-13T14:21:05.012493325Z" level=info msg="Container to stop \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:05.012532 env[1201]: time="2024-12-13T14:21:05.012517320Z" level=info msg="Container to stop \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:05.012591 env[1201]: time="2024-12-13T14:21:05.012531377Z" level=info msg="Container to stop \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:05.012591 env[1201]: time="2024-12-13T14:21:05.012544643Z" level=info msg="Container to stop \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:05.012591 env[1201]: time="2024-12-13T14:21:05.012556485Z" level=info msg="Container to stop \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:05.018115 systemd[1]: 
cri-containerd-fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78.scope: Deactivated successfully. Dec 13 14:21:05.079005 env[1201]: time="2024-12-13T14:21:05.078870069Z" level=info msg="shim disconnected" id=fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78 Dec 13 14:21:05.079005 env[1201]: time="2024-12-13T14:21:05.078929933Z" level=warning msg="cleaning up after shim disconnected" id=fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78 namespace=k8s.io Dec 13 14:21:05.079005 env[1201]: time="2024-12-13T14:21:05.078939030Z" level=info msg="cleaning up dead shim" Dec 13 14:21:05.085125 env[1201]: time="2024-12-13T14:21:05.085091011Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3820 runtime=io.containerd.runc.v2\n" Dec 13 14:21:05.085395 env[1201]: time="2024-12-13T14:21:05.085371343Z" level=info msg="TearDown network for sandbox \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" successfully" Dec 13 14:21:05.085430 env[1201]: time="2024-12-13T14:21:05.085397072Z" level=info msg="StopPodSandbox for \"fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78\" returns successfully" Dec 13 14:21:05.225171 kubelet[2033]: I1213 14:21:05.225101 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-hostproc\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225171 kubelet[2033]: I1213 14:21:05.225160 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce33a37-9d7c-4cec-8981-31f0a7f39212-cilium-config-path\") pod \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\" (UID: \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\") " Dec 13 14:21:05.225171 kubelet[2033]: I1213 14:21:05.225182 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-run\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225734 kubelet[2033]: I1213 14:21:05.225203 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-xtables-lock\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225734 kubelet[2033]: I1213 14:21:05.225243 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.225734 kubelet[2033]: I1213 14:21:05.225244 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-hostproc" (OuterVolumeSpecName: "hostproc") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.225734 kubelet[2033]: I1213 14:21:05.225308 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-bpf-maps\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225734 kubelet[2033]: I1213 14:21:05.225328 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v49nv\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-kube-api-access-v49nv\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225947 kubelet[2033]: I1213 14:21:05.225355 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.225947 kubelet[2033]: I1213 14:21:05.225361 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.225947 kubelet[2033]: I1213 14:21:05.225383 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7725461c-0819-4c88-8faa-37cb7f5d1189-clustermesh-secrets\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225947 kubelet[2033]: I1213 14:21:05.225398 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-lib-modules\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.225947 kubelet[2033]: I1213 14:21:05.225702 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-config-path\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225732 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-net\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225749 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-etc-cni-netd\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225763 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cni-path\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225782 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-cgroup\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225799 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-hubble-tls\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226138 kubelet[2033]: I1213 14:21:05.225813 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-kernel\") pod \"7725461c-0819-4c88-8faa-37cb7f5d1189\" (UID: \"7725461c-0819-4c88-8faa-37cb7f5d1189\") " Dec 13 14:21:05.226350 kubelet[2033]: I1213 14:21:05.225833 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqdww\" (UniqueName: \"kubernetes.io/projected/5ce33a37-9d7c-4cec-8981-31f0a7f39212-kube-api-access-kqdww\") pod \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\" (UID: \"5ce33a37-9d7c-4cec-8981-31f0a7f39212\") " Dec 13 14:21:05.226350 kubelet[2033]: I1213 14:21:05.225892 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.226350 kubelet[2033]: I1213 14:21:05.225906 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.226350 kubelet[2033]: I1213 14:21:05.225914 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.226350 kubelet[2033]: I1213 14:21:05.225923 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.227338 kubelet[2033]: I1213 14:21:05.227281 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce33a37-9d7c-4cec-8981-31f0a7f39212-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ce33a37-9d7c-4cec-8981-31f0a7f39212" (UID: "5ce33a37-9d7c-4cec-8981-31f0a7f39212"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:21:05.227450 kubelet[2033]: I1213 14:21:05.227355 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.227450 kubelet[2033]: I1213 14:21:05.227379 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.228436 kubelet[2033]: I1213 14:21:05.228394 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7725461c-0819-4c88-8faa-37cb7f5d1189-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:21:05.228505 kubelet[2033]: I1213 14:21:05.228442 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.228505 kubelet[2033]: I1213 14:21:05.228458 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cni-path" (OuterVolumeSpecName: "cni-path") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.228505 kubelet[2033]: I1213 14:21:05.228470 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.228505 kubelet[2033]: I1213 14:21:05.228466 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-kube-api-access-v49nv" (OuterVolumeSpecName: "kube-api-access-v49nv") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "kube-api-access-v49nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:21:05.228643 kubelet[2033]: I1213 14:21:05.228510 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:05.229178 kubelet[2033]: I1213 14:21:05.229154 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:21:05.229360 kubelet[2033]: I1213 14:21:05.229336 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce33a37-9d7c-4cec-8981-31f0a7f39212-kube-api-access-kqdww" (OuterVolumeSpecName: "kube-api-access-kqdww") pod "5ce33a37-9d7c-4cec-8981-31f0a7f39212" (UID: "5ce33a37-9d7c-4cec-8981-31f0a7f39212"). InnerVolumeSpecName "kube-api-access-kqdww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:21:05.230097 kubelet[2033]: I1213 14:21:05.230065 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7725461c-0819-4c88-8faa-37cb7f5d1189" (UID: "7725461c-0819-4c88-8faa-37cb7f5d1189"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:21:05.326272 kubelet[2033]: I1213 14:21:05.326220 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326272 kubelet[2033]: I1213 14:21:05.326261 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326272 kubelet[2033]: I1213 14:21:05.326274 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-kqdww\" (UniqueName: \"kubernetes.io/projected/5ce33a37-9d7c-4cec-8981-31f0a7f39212-kube-api-access-kqdww\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326272 kubelet[2033]: I1213 14:21:05.326285 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ce33a37-9d7c-4cec-8981-31f0a7f39212-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326294 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-v49nv\" (UniqueName: \"kubernetes.io/projected/7725461c-0819-4c88-8faa-37cb7f5d1189-kube-api-access-v49nv\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326303 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7725461c-0819-4c88-8faa-37cb7f5d1189-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326311 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326319 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326329 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326337 2033 reconciler_common.go:300] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326346 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.326521 kubelet[2033]: I1213 14:21:05.326356 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7725461c-0819-4c88-8faa-37cb7f5d1189-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:05.352347 kubelet[2033]: E1213 14:21:05.352214 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:05.358414 systemd[1]: Removed slice kubepods-besteffort-pod5ce33a37_9d7c_4cec_8981_31f0a7f39212.slice. Dec 13 14:21:05.359603 systemd[1]: Removed slice kubepods-burstable-pod7725461c_0819_4c88_8faa_37cb7f5d1189.slice. Dec 13 14:21:05.359703 systemd[1]: kubepods-burstable-pod7725461c_0819_4c88_8faa_37cb7f5d1189.slice: Consumed 7.111s CPU time. Dec 13 14:21:05.410339 kubelet[2033]: E1213 14:21:05.410289 2033 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:21:05.583587 kubelet[2033]: I1213 14:21:05.583555 2033 scope.go:117] "RemoveContainer" containerID="6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa" Dec 13 14:21:05.585715 env[1201]: time="2024-12-13T14:21:05.585094835Z" level=info msg="RemoveContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\"" Dec 13 14:21:05.589399 env[1201]: time="2024-12-13T14:21:05.589349597Z" level=info msg="RemoveContainer for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" returns successfully" Dec 13 14:21:05.590285 kubelet[2033]: I1213 14:21:05.590254 2033 scope.go:117] "RemoveContainer" containerID="6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa" Dec 13 14:21:05.590743 env[1201]: time="2024-12-13T14:21:05.590613465Z" level=error msg="ContainerStatus for \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\": not found" Dec 13 14:21:05.590940 kubelet[2033]: E1213 14:21:05.590835 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\": not found" containerID="6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa" Dec 13 14:21:05.590940 kubelet[2033]: I1213 14:21:05.590918 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa"} err="failed to get container status \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f46ea1bde7aeae0090a6ace4595eeffb57c73b09298244726d3e4e5c4fac0aa\": not found" Dec 13 14:21:05.591837 kubelet[2033]: I1213 14:21:05.591817 2033 
scope.go:117] "RemoveContainer" containerID="7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9" Dec 13 14:21:05.593105 env[1201]: time="2024-12-13T14:21:05.593052041Z" level=info msg="RemoveContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\"" Dec 13 14:21:05.597163 env[1201]: time="2024-12-13T14:21:05.597126351Z" level=info msg="RemoveContainer for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" returns successfully" Dec 13 14:21:05.597394 kubelet[2033]: I1213 14:21:05.597357 2033 scope.go:117] "RemoveContainer" containerID="004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37" Dec 13 14:21:05.598360 env[1201]: time="2024-12-13T14:21:05.598332048Z" level=info msg="RemoveContainer for \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\"" Dec 13 14:21:05.602274 env[1201]: time="2024-12-13T14:21:05.602230504Z" level=info msg="RemoveContainer for \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\" returns successfully" Dec 13 14:21:05.602497 kubelet[2033]: I1213 14:21:05.602438 2033 scope.go:117] "RemoveContainer" containerID="d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238" Dec 13 14:21:05.603809 env[1201]: time="2024-12-13T14:21:05.603783450Z" level=info msg="RemoveContainer for \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\"" Dec 13 14:21:05.607800 env[1201]: time="2024-12-13T14:21:05.607744905Z" level=info msg="RemoveContainer for \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\" returns successfully" Dec 13 14:21:05.608046 kubelet[2033]: I1213 14:21:05.607990 2033 scope.go:117] "RemoveContainer" containerID="bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f" Dec 13 14:21:05.609249 env[1201]: time="2024-12-13T14:21:05.609202270Z" level=info msg="RemoveContainer for \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\"" Dec 13 14:21:05.612655 env[1201]: time="2024-12-13T14:21:05.612614814Z" level=info msg="RemoveContainer for \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\" returns successfully" Dec 13 14:21:05.612829 kubelet[2033]: I1213 14:21:05.612797 2033 scope.go:117] "RemoveContainer" containerID="3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6" Dec 13 14:21:05.616421 env[1201]: time="2024-12-13T14:21:05.616369357Z" level=info msg="RemoveContainer for \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\"" Dec 13 14:21:05.619575 env[1201]: time="2024-12-13T14:21:05.619549620Z" level=info msg="RemoveContainer for \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\" returns successfully" Dec 13 14:21:05.619730 kubelet[2033]: I1213 14:21:05.619698 2033 scope.go:117] "RemoveContainer" containerID="7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9" Dec 13 14:21:05.619924 env[1201]: time="2024-12-13T14:21:05.619829802Z" level=error msg="ContainerStatus for \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\": not found" Dec 13 14:21:05.620003 kubelet[2033]: E1213 14:21:05.619988 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\": not found" 
containerID="7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9" Dec 13 14:21:05.620053 kubelet[2033]: I1213 14:21:05.620027 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9"} err="failed to get container status \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9\": not found" Dec 13 14:21:05.620053 kubelet[2033]: I1213 14:21:05.620040 2033 scope.go:117] "RemoveContainer" containerID="004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37" Dec 13 14:21:05.620217 env[1201]: time="2024-12-13T14:21:05.620177982Z" level=error msg="ContainerStatus for \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\": not found" Dec 13 14:21:05.620307 kubelet[2033]: E1213 14:21:05.620290 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\": not found" containerID="004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37" Dec 13 14:21:05.620360 kubelet[2033]: I1213 14:21:05.620313 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37"} err="failed to get container status \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\": rpc error: code = NotFound desc = an error occurred when try to find container \"004f3fe30c4dde24b3fdbe80ccfd74b2c073ab96912f9a306dbe70fea4ed9e37\": not found" Dec 13 14:21:05.620360 kubelet[2033]: I1213 14:21:05.620324 2033 scope.go:117] "RemoveContainer" containerID="d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238" Dec 13 14:21:05.620494 env[1201]: time="2024-12-13T14:21:05.620447494Z" level=error msg="ContainerStatus for \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\": not found" Dec 13 14:21:05.620613 kubelet[2033]: E1213 14:21:05.620580 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\": not found" containerID="d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238" Dec 13 14:21:05.620613 kubelet[2033]: I1213 14:21:05.620607 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238"} err="failed to get container status \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0110563cd7245bf32019f9267820d56f2fa69f84926428a4b02edbd82085238\": not found" Dec 13 14:21:05.620613 kubelet[2033]: I1213 14:21:05.620616 2033 scope.go:117] "RemoveContainer" 
containerID="bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f" Dec 13 14:21:05.620829 env[1201]: time="2024-12-13T14:21:05.620789231Z" level=error msg="ContainerStatus for \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\": not found" Dec 13 14:21:05.621043 kubelet[2033]: E1213 14:21:05.621031 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\": not found" containerID="bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f" Dec 13 14:21:05.621115 kubelet[2033]: I1213 14:21:05.621055 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f"} err="failed to get container status \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd510f273a1ce5381a326fc6579cd74e27eb7803291e8a1a8d7e52eb7182018f\": not found" Dec 13 14:21:05.621115 kubelet[2033]: I1213 14:21:05.621063 2033 scope.go:117] "RemoveContainer" containerID="3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6" Dec 13 14:21:05.621277 env[1201]: time="2024-12-13T14:21:05.621212776Z" level=error msg="ContainerStatus for \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\": not found" Dec 13 14:21:05.621382 kubelet[2033]: E1213 14:21:05.621364 2033 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\": not found" containerID="3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6" Dec 13 14:21:05.621447 kubelet[2033]: I1213 14:21:05.621408 2033 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6"} err="failed to get container status \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"3100208caa5fb2ab05a5db3ea5c40fc4f4994adbef7008b75c647cf72dbfb8c6\": not found" Dec 13 14:21:05.800648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8b6dcacd2d937abb29c682cadc4046d3d939f682703b8599654a93fb9183f9-rootfs.mount: Deactivated successfully. Dec 13 14:21:05.800776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78-rootfs.mount: Deactivated successfully. Dec 13 14:21:05.800870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd14df2ebf4ca98a3c3d16bc78d2907f14226e5da93340dda0fcdd2cd3f75c78-shm.mount: Deactivated successfully. Dec 13 14:21:05.800953 systemd[1]: var-lib-kubelet-pods-5ce33a37\x2d9d7c\x2d4cec\x2d8981\x2d31f0a7f39212-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqdww.mount: Deactivated successfully. 
Dec 13 14:21:05.801041 systemd[1]: var-lib-kubelet-pods-7725461c\x2d0819\x2d4c88\x2d8faa\x2d37cb7f5d1189-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv49nv.mount: Deactivated successfully. Dec 13 14:21:05.801117 systemd[1]: var-lib-kubelet-pods-7725461c\x2d0819\x2d4c88\x2d8faa\x2d37cb7f5d1189-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:21:05.801186 systemd[1]: var-lib-kubelet-pods-7725461c\x2d0819\x2d4c88\x2d8faa\x2d37cb7f5d1189-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:21:06.742336 sshd[3678]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.746591 systemd[1]: Started sshd@25-10.0.0.27:22-10.0.0.1:46764.service. Dec 13 14:21:06.747152 systemd[1]: sshd@24-10.0.0.27:22-10.0.0.1:46754.service: Deactivated successfully. Dec 13 14:21:06.747738 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:21:06.748357 systemd-logind[1192]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:21:06.749297 systemd-logind[1192]: Removed session 25. Dec 13 14:21:06.785919 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:21:06.787100 sshd[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.790822 systemd-logind[1192]: New session 26 of user core. Dec 13 14:21:06.791560 systemd[1]: Started session-26.scope. Dec 13 14:21:07.196303 sshd[3837]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:07.199779 systemd[1]: sshd@25-10.0.0.27:22-10.0.0.1:46764.service: Deactivated successfully. Dec 13 14:21:07.200459 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:21:07.202651 systemd[1]: Started sshd@26-10.0.0.27:22-10.0.0.1:46768.service. Dec 13 14:21:07.203177 systemd-logind[1192]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:21:07.204679 systemd-logind[1192]: Removed session 26. 
Dec 13 14:21:07.226497 kubelet[2033]: I1213 14:21:07.226447 2033 topology_manager.go:215] "Topology Admit Handler" podUID="e23f4d76-8fb8-4000-9e3e-f3947c2935d5" podNamespace="kube-system" podName="cilium-4shlw" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226517 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="mount-cgroup" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226526 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="apply-sysctl-overwrites" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226532 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="cilium-agent" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226539 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="mount-bpf-fs" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226546 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="clean-cilium-state" Dec 13 14:21:07.227054 kubelet[2033]: E1213 14:21:07.226552 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5ce33a37-9d7c-4cec-8981-31f0a7f39212" containerName="cilium-operator" Dec 13 14:21:07.227054 kubelet[2033]: I1213 14:21:07.226585 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" containerName="cilium-agent" Dec 13 14:21:07.227054 kubelet[2033]: I1213 14:21:07.226591 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce33a37-9d7c-4cec-8981-31f0a7f39212" containerName="cilium-operator" Dec 13 14:21:07.232687 systemd[1]: Created slice kubepods-burstable-pode23f4d76_8fb8_4000_9e3e_f3947c2935d5.slice. Dec 13 14:21:07.246555 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 46768 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:21:07.248294 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:07.254021 systemd-logind[1192]: New session 27 of user core. Dec 13 14:21:07.255110 systemd[1]: Started session-27.scope. 
Dec 13 14:21:07.338103 kubelet[2033]: I1213 14:21:07.338042 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-net\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338103 kubelet[2033]: I1213 14:21:07.338097 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-kernel\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338334 kubelet[2033]: I1213 14:21:07.338171 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-run\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338334 kubelet[2033]: I1213 14:21:07.338202 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-clustermesh-secrets\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338334 kubelet[2033]: I1213 14:21:07.338290 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-cgroup\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338334 kubelet[2033]: I1213 14:21:07.338318 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-etc-cni-netd\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338334 kubelet[2033]: I1213 14:21:07.338339 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-bpf-maps\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338479 kubelet[2033]: I1213 14:21:07.338382 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-lib-modules\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338540 kubelet[2033]: I1213 14:21:07.338490 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hostproc\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338556 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-ipsec-secrets\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338588 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hubble-tls\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338621 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkmz\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-kube-api-access-gdkmz\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338646 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cni-path\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338682 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-xtables-lock\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.338711 kubelet[2033]: I1213 14:21:07.338705 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-config-path\") pod \"cilium-4shlw\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " pod="kube-system/cilium-4shlw" Dec 13 14:21:07.355034 kubelet[2033]: I1213 14:21:07.354985 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5ce33a37-9d7c-4cec-8981-31f0a7f39212" path="/var/lib/kubelet/pods/5ce33a37-9d7c-4cec-8981-31f0a7f39212/volumes" Dec 13 14:21:07.355451 kubelet[2033]: I1213 14:21:07.355424 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7725461c-0819-4c88-8faa-37cb7f5d1189" path="/var/lib/kubelet/pods/7725461c-0819-4c88-8faa-37cb7f5d1189/volumes" Dec 13 14:21:07.445900 sshd[3850]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:07.450533 systemd[1]: sshd@26-10.0.0.27:22-10.0.0.1:46768.service: Deactivated successfully. Dec 13 14:21:07.451301 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:21:07.454406 systemd-logind[1192]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:21:07.455570 systemd[1]: Started sshd@27-10.0.0.27:22-10.0.0.1:46772.service. Dec 13 14:21:07.456589 systemd-logind[1192]: Removed session 27. Dec 13 14:21:07.494618 sshd[3866]: Accepted publickey for core from 10.0.0.1 port 46772 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:21:07.495933 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:07.503535 systemd-logind[1192]: New session 28 of user core. Dec 13 14:21:07.504381 systemd[1]: Started session-28.scope. 
Dec 13 14:21:07.506723 kubelet[2033]: E1213 14:21:07.506691 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:07.508001 env[1201]: time="2024-12-13T14:21:07.507333683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4shlw,Uid:e23f4d76-8fb8-4000-9e3e-f3947c2935d5,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:07.525418 env[1201]: time="2024-12-13T14:21:07.525330095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:07.525748 env[1201]: time="2024-12-13T14:21:07.525688916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:07.525914 env[1201]: time="2024-12-13T14:21:07.525889266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:07.526417 env[1201]: time="2024-12-13T14:21:07.526372684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58 pid=3877 runtime=io.containerd.runc.v2 Dec 13 14:21:07.539245 systemd[1]: Started cri-containerd-16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58.scope. Dec 13 14:21:07.561739 env[1201]: time="2024-12-13T14:21:07.561448899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4shlw,Uid:e23f4d76-8fb8-4000-9e3e-f3947c2935d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\"" Dec 13 14:21:07.562141 kubelet[2033]: E1213 14:21:07.562121 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:07.565031 env[1201]: time="2024-12-13T14:21:07.564174609Z" level=info msg="CreateContainer within sandbox \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:21:07.584168 env[1201]: time="2024-12-13T14:21:07.584095332Z" level=info msg="CreateContainer within sandbox \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\"" Dec 13 14:21:07.584936 env[1201]: time="2024-12-13T14:21:07.584913845Z" level=info msg="StartContainer for \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\"" Dec 13 14:21:07.597508 systemd[1]: Started cri-containerd-c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7.scope. Dec 13 14:21:07.605650 systemd[1]: cri-containerd-c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7.scope: Deactivated successfully. Dec 13 14:21:07.605881 systemd[1]: Stopped cri-containerd-c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7.scope. 
Dec 13 14:21:07.625214 env[1201]: time="2024-12-13T14:21:07.625144359Z" level=info msg="shim disconnected" id=c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7 Dec 13 14:21:07.625214 env[1201]: time="2024-12-13T14:21:07.625212948Z" level=warning msg="cleaning up after shim disconnected" id=c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7 namespace=k8s.io Dec 13 14:21:07.625214 env[1201]: time="2024-12-13T14:21:07.625225152Z" level=info msg="cleaning up dead shim" Dec 13 14:21:07.633353 env[1201]: time="2024-12-13T14:21:07.633301895Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3944 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:21:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:21:07.633645 env[1201]: time="2024-12-13T14:21:07.633535358Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Dec 13 14:21:07.633831 env[1201]: time="2024-12-13T14:21:07.633778669Z" level=error msg="Failed to pipe stdout of container \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\"" error="reading from a closed fifo" Dec 13 14:21:07.633949 env[1201]: time="2024-12-13T14:21:07.633910168Z" level=error msg="Failed to pipe stderr of container \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\"" error="reading from a closed fifo" Dec 13 14:21:07.639128 env[1201]: time="2024-12-13T14:21:07.639055971Z" level=error msg="StartContainer for \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:21:07.639524 kubelet[2033]: E1213 14:21:07.639504 2033 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7" Dec 13 14:21:07.641002 kubelet[2033]: E1213 14:21:07.640880 2033 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:21:07.641002 kubelet[2033]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:21:07.641002 kubelet[2033]: rm /hostbin/cilium-mount Dec 13 14:21:07.641130 kubelet[2033]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gdkmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4shlw_kube-system(e23f4d76-8fb8-4000-9e3e-f3947c2935d5): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:21:07.641130 kubelet[2033]: E1213 14:21:07.640930 2033 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4shlw" podUID="e23f4d76-8fb8-4000-9e3e-f3947c2935d5" Dec 13 14:21:07.763548 kubelet[2033]: I1213 14:21:07.763502 2033 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:21:07Z","lastTransitionTime":"2024-12-13T14:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:21:08.603790 env[1201]: time="2024-12-13T14:21:08.603743123Z" level=info msg="StopPodSandbox for \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\"" Dec 13 14:21:08.603790 env[1201]: time="2024-12-13T14:21:08.603796073Z" level=info msg="Container to stop \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:21:08.605949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58-shm.mount: Deactivated successfully. Dec 13 14:21:08.612235 systemd[1]: cri-containerd-16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58.scope: Deactivated successfully. 
Dec 13 14:21:08.632191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58-rootfs.mount: Deactivated successfully. Dec 13 14:21:08.639793 env[1201]: time="2024-12-13T14:21:08.639742018Z" level=info msg="shim disconnected" id=16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58 Dec 13 14:21:08.639953 env[1201]: time="2024-12-13T14:21:08.639796562Z" level=warning msg="cleaning up after shim disconnected" id=16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58 namespace=k8s.io Dec 13 14:21:08.639953 env[1201]: time="2024-12-13T14:21:08.639806872Z" level=info msg="cleaning up dead shim" Dec 13 14:21:08.646563 env[1201]: time="2024-12-13T14:21:08.646519898Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3974 runtime=io.containerd.runc.v2\n" Dec 13 14:21:08.646895 env[1201]: time="2024-12-13T14:21:08.646860133Z" level=info msg="TearDown network for sandbox \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\" successfully" Dec 13 14:21:08.646938 env[1201]: time="2024-12-13T14:21:08.646893035Z" level=info msg="StopPodSandbox for \"16fd1cf75cca4fa6673c86c4681e2b692b88031e68a17ff18d1ee14b6113bc58\" returns successfully" Dec 13 14:21:08.847875 kubelet[2033]: I1213 14:21:08.847818 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-ipsec-secrets\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.847875 kubelet[2033]: I1213 14:21:08.847883 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-run\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.847911 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-clustermesh-secrets\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.847942 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-kernel\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.847980 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkmz\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-kube-api-access-gdkmz\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.847985 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848009 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-config-path\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848031 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-net\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848055 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-cgroup\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848079 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hostproc\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848103 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-xtables-lock\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848125 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cni-path\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848154 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-bpf-maps\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848178 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-etc-cni-netd\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848200 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-lib-modules\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 kubelet[2033]: I1213 14:21:08.848225 2033 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hubble-tls\") pod \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\" (UID: \"e23f4d76-8fb8-4000-9e3e-f3947c2935d5\") " Dec 13 14:21:08.848379 
kubelet[2033]: I1213 14:21:08.848256 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848487 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848622 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848668 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848693 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848718 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.848967 kubelet[2033]: I1213 14:21:08.848739 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.849298 kubelet[2033]: I1213 14:21:08.848028 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.849298 kubelet[2033]: I1213 14:21:08.849239 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.849298 kubelet[2033]: I1213 14:21:08.849267 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:21:08.853693 kubelet[2033]: I1213 14:21:08.851244 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:21:08.851645 systemd[1]: var-lib-kubelet-pods-e23f4d76\x2d8fb8\x2d4000\x2d9e3e\x2df3947c2935d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdkmz.mount: Deactivated successfully. Dec 13 14:21:08.851742 systemd[1]: var-lib-kubelet-pods-e23f4d76\x2d8fb8\x2d4000\x2d9e3e\x2df3947c2935d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:21:08.853749 systemd[1]: var-lib-kubelet-pods-e23f4d76\x2d8fb8\x2d4000\x2d9e3e\x2df3947c2935d5-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:21:08.854551 kubelet[2033]: I1213 14:21:08.854508 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:21:08.855146 kubelet[2033]: I1213 14:21:08.855125 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-kube-api-access-gdkmz" (OuterVolumeSpecName: "kube-api-access-gdkmz") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "kube-api-access-gdkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:21:08.855235 kubelet[2033]: I1213 14:21:08.855178 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:21:08.855305 kubelet[2033]: I1213 14:21:08.855268 2033 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e23f4d76-8fb8-4000-9e3e-f3947c2935d5" (UID: "e23f4d76-8fb8-4000-9e3e-f3947c2935d5"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:21:08.948612 kubelet[2033]: I1213 14:21:08.948567 2033 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948612 kubelet[2033]: I1213 14:21:08.948603 2033 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948612 kubelet[2033]: I1213 14:21:08.948616 2033 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948612 kubelet[2033]: I1213 14:21:08.948632 2033 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948643 2033 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948659 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948671 2033 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948683 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948697 2033 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gdkmz\" (UniqueName: \"kubernetes.io/projected/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-kube-api-access-gdkmz\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948708 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948720 2033 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 
14:21:08.948733 2033 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948744 2033 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:08.948885 kubelet[2033]: I1213 14:21:08.948755 2033 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e23f4d76-8fb8-4000-9e3e-f3947c2935d5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 14:21:09.357532 systemd[1]: Removed slice kubepods-burstable-pode23f4d76_8fb8_4000_9e3e_f3947c2935d5.slice. Dec 13 14:21:09.444078 systemd[1]: var-lib-kubelet-pods-e23f4d76\x2d8fb8\x2d4000\x2d9e3e\x2df3947c2935d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:21:09.607364 kubelet[2033]: I1213 14:21:09.607327 2033 scope.go:117] "RemoveContainer" containerID="c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7" Dec 13 14:21:09.608993 env[1201]: time="2024-12-13T14:21:09.608889939Z" level=info msg="RemoveContainer for \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\"" Dec 13 14:21:09.613286 env[1201]: time="2024-12-13T14:21:09.613247987Z" level=info msg="RemoveContainer for \"c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7\" returns successfully" Dec 13 14:21:09.642396 kubelet[2033]: I1213 14:21:09.642359 2033 topology_manager.go:215] "Topology Admit Handler" podUID="26a7423c-f82c-471d-a9a5-c6b248039cba" podNamespace="kube-system" podName="cilium-8qcrb" Dec 13 14:21:09.642630 kubelet[2033]: E1213 14:21:09.642614 2033 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e23f4d76-8fb8-4000-9e3e-f3947c2935d5" containerName="mount-cgroup" Dec 13 14:21:09.642776 kubelet[2033]: I1213 14:21:09.642760 2033 memory_manager.go:354] "RemoveStaleState removing state" podUID="e23f4d76-8fb8-4000-9e3e-f3947c2935d5" containerName="mount-cgroup" Dec 13 14:21:09.647777 systemd[1]: Created slice kubepods-burstable-pod26a7423c_f82c_471d_a9a5_c6b248039cba.slice. 
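
The mount units being deactivated above encode kubelet volume paths with systemd's unit-name escaping: "/" becomes "-", while literal "-" and "~" are hex-escaped as \x2d and \x7e. A small decoder sketch (the helper name unescapeUnit is hypothetical) that recovers the volume path from one of the unit names in this log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func unescapeUnit(name string) string {
        name = strings.TrimSuffix(name, ".mount")
        var b strings.Builder
        for i := 0; i < len(name); i++ {
            switch {
            case name[i] == '-':
                // systemd maps path separators to dashes.
                b.WriteByte('/')
            case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
                // \xNN escapes cover literal '-' (\x2d), '~' (\x7e), etc.
                if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                    b.WriteByte(byte(v))
                    i += 3
                    continue
                }
                b.WriteByte(name[i])
            default:
                b.WriteByte(name[i])
            }
        }
        return "/" + b.String()
    }

    func main() {
        fmt.Println(unescapeUnit(`var-lib-kubelet-pods-e23f4d76\x2d8fb8\x2d4000\x2d9e3e\x2df3947c2935d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount`))
        // -> /var/lib/kubelet/pods/e23f4d76-8fb8-4000-9e3e-f3947c2935d5/volumes/kubernetes.io~projected/hubble-tls
    }
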
Dec 13 14:21:09.753331 kubelet[2033]: I1213 14:21:09.753253 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-xtables-lock\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753331 kubelet[2033]: I1213 14:21:09.753311 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlb9w\" (UniqueName: \"kubernetes.io/projected/26a7423c-f82c-471d-a9a5-c6b248039cba-kube-api-access-wlb9w\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753331 kubelet[2033]: I1213 14:21:09.753335 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26a7423c-f82c-471d-a9a5-c6b248039cba-clustermesh-secrets\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753629 kubelet[2033]: I1213 14:21:09.753363 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-hostproc\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753629 kubelet[2033]: I1213 14:21:09.753453 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-cilium-cgroup\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753629 kubelet[2033]: I1213 14:21:09.753521 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-cni-path\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753629 kubelet[2033]: I1213 14:21:09.753582 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-bpf-maps\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753629 kubelet[2033]: I1213 14:21:09.753602 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-cilium-run\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753788 kubelet[2033]: I1213 14:21:09.753663 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-etc-cni-netd\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753788 kubelet[2033]: I1213 14:21:09.753744 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-lib-modules\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.753938 kubelet[2033]: I1213 14:21:09.753911 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-host-proc-sys-kernel\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.754020 kubelet[2033]: I1213 14:21:09.753980 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26a7423c-f82c-471d-a9a5-c6b248039cba-hubble-tls\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.754063 kubelet[2033]: I1213 14:21:09.754036 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26a7423c-f82c-471d-a9a5-c6b248039cba-cilium-config-path\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.754063 kubelet[2033]: I1213 14:21:09.754057 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/26a7423c-f82c-471d-a9a5-c6b248039cba-cilium-ipsec-secrets\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.754063 kubelet[2033]: I1213 14:21:09.754078 2033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26a7423c-f82c-471d-a9a5-c6b248039cba-host-proc-sys-net\") pod \"cilium-8qcrb\" (UID: \"26a7423c-f82c-471d-a9a5-c6b248039cba\") " pod="kube-system/cilium-8qcrb" Dec 13 14:21:09.950697 kubelet[2033]: E1213 14:21:09.950564 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:09.951662 env[1201]: time="2024-12-13T14:21:09.951533309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qcrb,Uid:26a7423c-f82c-471d-a9a5-c6b248039cba,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:09.965489 env[1201]: time="2024-12-13T14:21:09.965418602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:09.965489 env[1201]: time="2024-12-13T14:21:09.965464810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:09.965489 env[1201]: time="2024-12-13T14:21:09.965475410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:09.965746 env[1201]: time="2024-12-13T14:21:09.965625806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d pid=4001 runtime=io.containerd.runc.v2 Dec 13 14:21:09.975642 systemd[1]: Started cri-containerd-18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d.scope. Dec 13 14:21:09.993528 env[1201]: time="2024-12-13T14:21:09.993456245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qcrb,Uid:26a7423c-f82c-471d-a9a5-c6b248039cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\"" Dec 13 14:21:09.994365 kubelet[2033]: E1213 14:21:09.994326 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:09.996196 env[1201]: time="2024-12-13T14:21:09.996163372Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:21:10.010123 env[1201]: time="2024-12-13T14:21:10.010049235Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb\"" Dec 13 14:21:10.010867 env[1201]: time="2024-12-13T14:21:10.010813966Z" level=info msg="StartContainer for \"e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb\"" Dec 13 14:21:10.024438 systemd[1]: Started cri-containerd-e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb.scope. Dec 13 14:21:10.052052 env[1201]: time="2024-12-13T14:21:10.051993132Z" level=info msg="StartContainer for \"e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb\" returns successfully" Dec 13 14:21:10.059473 systemd[1]: cri-containerd-e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb.scope: Deactivated successfully. 
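
The sandbox and container lines above follow the standard CRI sequence: one RunPodSandbox call for cilium-8qcrb, then a CreateContainer/StartContainer pair per init container inside that sandbox, each run by a containerd runc v2 shim whose "starting signal loop" message appears when it spawns. A minimal sketch of those calls against containerd's CRI socket using the k8s.io/cri-api client; error handling, and the Image/Mounts/SecurityContext fields a real request needs, are elided:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI endpoint (the default socket path).
        conn, _ := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // "RunPodSandbox for &PodSandboxMetadata{Name:cilium-8qcrb,...}"
        sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-8qcrb",
                    Namespace: "kube-system",
                    Uid:       "26a7423c-f82c-471d-a9a5-c6b248039cba",
                },
            },
        })

        // "CreateContainer within sandbox ... for &ContainerMetadata{Name:mount-cgroup,...}"
        c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
            },
        })

        // "StartContainer for ..." — the runc v2 shim then runs the task.
        rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId})
    }
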
Dec 13 14:21:10.086906 env[1201]: time="2024-12-13T14:21:10.086836025Z" level=info msg="shim disconnected" id=e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb Dec 13 14:21:10.086906 env[1201]: time="2024-12-13T14:21:10.086900588Z" level=warning msg="cleaning up after shim disconnected" id=e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb namespace=k8s.io Dec 13 14:21:10.086906 env[1201]: time="2024-12-13T14:21:10.086909595Z" level=info msg="cleaning up dead shim" Dec 13 14:21:10.094276 env[1201]: time="2024-12-13T14:21:10.094218792Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4084 runtime=io.containerd.runc.v2\n" Dec 13 14:21:10.412173 kubelet[2033]: E1213 14:21:10.412125 2033 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:21:10.611713 kubelet[2033]: E1213 14:21:10.611686 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:10.613253 env[1201]: time="2024-12-13T14:21:10.613187965Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:21:10.726741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192144359.mount: Deactivated successfully. Dec 13 14:21:10.728642 env[1201]: time="2024-12-13T14:21:10.728573234Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027\"" Dec 13 14:21:10.729217 env[1201]: time="2024-12-13T14:21:10.729179906Z" level=info msg="StartContainer for \"2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027\"" Dec 13 14:21:10.733202 kubelet[2033]: W1213 14:21:10.732171 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode23f4d76_8fb8_4000_9e3e_f3947c2935d5.slice/cri-containerd-c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7.scope WatchSource:0}: container "c4d947f63464a40a4e1bcff661e54cc3ba44bd07825acead687b91464a7936c7" in namespace "k8s.io": not found Dec 13 14:21:10.747904 systemd[1]: Started cri-containerd-2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027.scope. Dec 13 14:21:10.788762 systemd[1]: cri-containerd-2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027.scope: Deactivated successfully. 
Dec 13 14:21:10.818254 env[1201]: time="2024-12-13T14:21:10.818128780Z" level=info msg="StartContainer for \"2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027\" returns successfully" Dec 13 14:21:10.885673 env[1201]: time="2024-12-13T14:21:10.885596000Z" level=info msg="shim disconnected" id=2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027 Dec 13 14:21:10.885943 env[1201]: time="2024-12-13T14:21:10.885697883Z" level=warning msg="cleaning up after shim disconnected" id=2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027 namespace=k8s.io Dec 13 14:21:10.885943 env[1201]: time="2024-12-13T14:21:10.885734993Z" level=info msg="cleaning up dead shim" Dec 13 14:21:10.894991 env[1201]: time="2024-12-13T14:21:10.894927042Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4146 runtime=io.containerd.runc.v2\n" Dec 13 14:21:11.355121 kubelet[2033]: I1213 14:21:11.355057 2033 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e23f4d76-8fb8-4000-9e3e-f3947c2935d5" path="/var/lib/kubelet/pods/e23f4d76-8fb8-4000-9e3e-f3947c2935d5/volumes" Dec 13 14:21:11.444290 systemd[1]: run-containerd-runc-k8s.io-2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027-runc.AYtwAA.mount: Deactivated successfully. Dec 13 14:21:11.444397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027-rootfs.mount: Deactivated successfully. Dec 13 14:21:11.616198 kubelet[2033]: E1213 14:21:11.615716 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:11.617552 env[1201]: time="2024-12-13T14:21:11.617495686Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:21:11.633811 env[1201]: time="2024-12-13T14:21:11.633730988Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e\"" Dec 13 14:21:11.634442 env[1201]: time="2024-12-13T14:21:11.634384868Z" level=info msg="StartContainer for \"00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e\"" Dec 13 14:21:11.653196 systemd[1]: Started cri-containerd-00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e.scope. Dec 13 14:21:11.680711 systemd[1]: cri-containerd-00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e.scope: Deactivated successfully. 
Dec 13 14:21:11.683789 env[1201]: time="2024-12-13T14:21:11.683732442Z" level=info msg="StartContainer for \"00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e\" returns successfully" Dec 13 14:21:11.710237 env[1201]: time="2024-12-13T14:21:11.710180600Z" level=info msg="shim disconnected" id=00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e Dec 13 14:21:11.710237 env[1201]: time="2024-12-13T14:21:11.710233040Z" level=warning msg="cleaning up after shim disconnected" id=00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e namespace=k8s.io Dec 13 14:21:11.710237 env[1201]: time="2024-12-13T14:21:11.710241666Z" level=info msg="cleaning up dead shim" Dec 13 14:21:11.716270 env[1201]: time="2024-12-13T14:21:11.716227823Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\n" Dec 13 14:21:12.444427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e-rootfs.mount: Deactivated successfully. Dec 13 14:21:12.620690 kubelet[2033]: E1213 14:21:12.620653 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:12.622978 env[1201]: time="2024-12-13T14:21:12.622920503Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:21:12.636574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039462327.mount: Deactivated successfully. Dec 13 14:21:12.639379 env[1201]: time="2024-12-13T14:21:12.639312864Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415\"" Dec 13 14:21:12.640048 env[1201]: time="2024-12-13T14:21:12.640003834Z" level=info msg="StartContainer for \"5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415\"" Dec 13 14:21:12.658921 systemd[1]: Started cri-containerd-5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415.scope. Dec 13 14:21:12.687978 systemd[1]: cri-containerd-5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415.scope: Deactivated successfully. 
Dec 13 14:21:12.688890 env[1201]: time="2024-12-13T14:21:12.688837544Z" level=info msg="StartContainer for \"5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415\" returns successfully" Dec 13 14:21:12.725895 env[1201]: time="2024-12-13T14:21:12.725735648Z" level=info msg="shim disconnected" id=5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415 Dec 13 14:21:12.725895 env[1201]: time="2024-12-13T14:21:12.725797987Z" level=warning msg="cleaning up after shim disconnected" id=5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415 namespace=k8s.io Dec 13 14:21:12.725895 env[1201]: time="2024-12-13T14:21:12.725810751Z" level=info msg="cleaning up dead shim" Dec 13 14:21:12.732972 env[1201]: time="2024-12-13T14:21:12.732910521Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4259 runtime=io.containerd.runc.v2\n" Dec 13 14:21:13.444642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415-rootfs.mount: Deactivated successfully. Dec 13 14:21:13.625245 kubelet[2033]: E1213 14:21:13.625199 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:13.629258 env[1201]: time="2024-12-13T14:21:13.629170643Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:21:13.648180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931263009.mount: Deactivated successfully. Dec 13 14:21:13.652698 env[1201]: time="2024-12-13T14:21:13.652649708Z" level=info msg="CreateContainer within sandbox \"18d0c1901064108f5f8a0d23dc09dc21409edfbcf4dc4822a61fd47a258fca9d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510\"" Dec 13 14:21:13.653334 env[1201]: time="2024-12-13T14:21:13.653294922Z" level=info msg="StartContainer for \"bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510\"" Dec 13 14:21:13.672792 systemd[1]: Started cri-containerd-bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510.scope. Dec 13 14:21:13.701301 env[1201]: time="2024-12-13T14:21:13.701122504Z" level=info msg="StartContainer for \"bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510\" returns successfully" Dec 13 14:21:13.880604 kubelet[2033]: W1213 14:21:13.880553 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a7423c_f82c_471d_a9a5_c6b248039cba.slice/cri-containerd-e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb.scope WatchSource:0}: task e4e3dc8144415b79bda3a1c5387dc10753ef9e306c54ac2797ffa757bb6772bb not found: not found Dec 13 14:21:14.006082 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:21:14.444797 systemd[1]: run-containerd-runc-k8s.io-bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510-runc.RgKRK8.mount: Deactivated successfully. 
Dec 13 14:21:14.629682 kubelet[2033]: E1213 14:21:14.629648 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:15.951791 kubelet[2033]: E1213 14:21:15.951754 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:16.561809 systemd-networkd[1026]: lxc_health: Link UP Dec 13 14:21:16.571108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:21:16.570897 systemd-networkd[1026]: lxc_health: Gained carrier Dec 13 14:21:16.988661 kubelet[2033]: W1213 14:21:16.988625 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a7423c_f82c_471d_a9a5_c6b248039cba.slice/cri-containerd-2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027.scope WatchSource:0}: task 2a0f861ca71f547205f8d43a9b29a90a8c69855c476b9b8418efd35431082027 not found: not found Dec 13 14:21:17.752014 systemd-networkd[1026]: lxc_health: Gained IPv6LL Dec 13 14:21:17.865188 systemd[1]: run-containerd-runc-k8s.io-bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510-runc.Q2O1lj.mount: Deactivated successfully. Dec 13 14:21:17.953117 kubelet[2033]: E1213 14:21:17.953069 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:17.968354 kubelet[2033]: I1213 14:21:17.968306 2033 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8qcrb" podStartSLOduration=8.96826033 podStartE2EDuration="8.96826033s" podCreationTimestamp="2024-12-13 14:21:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:21:14.643372895 +0000 UTC m=+109.390995162" watchObservedRunningTime="2024-12-13 14:21:17.96826033 +0000 UTC m=+112.715882587" Dec 13 14:21:18.352775 kubelet[2033]: E1213 14:21:18.352714 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:18.636526 kubelet[2033]: E1213 14:21:18.636403 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:19.638661 kubelet[2033]: E1213 14:21:19.638609 2033 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:20.096868 kubelet[2033]: W1213 14:21:20.096806 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a7423c_f82c_471d_a9a5_c6b248039cba.slice/cri-containerd-00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e.scope WatchSource:0}: task 00b98f87ca22044a94906cc3896a05087f71dc2539b14aee3eafcaa9de06ae6e not found: not found Dec 13 14:21:22.062838 systemd[1]: run-containerd-runc-k8s.io-bb103904406057e99f8bf370f9367c8f979bcca3aec0fef5bbf23217ba912510-runc.33QcFu.mount: Deactivated successfully. 
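
The recurring dns.go:153 events above come from kubelet capping resolv.conf at three nameservers, the glibc MAXNS limit; everything past the third entry is dropped and logged as omitted. A sketch of that truncation follows, assuming a hypothetical fourth host nameserver (the log only preserves the three survivors, 1.1.1.1, 1.0.0.1 and 8.8.8.8):

    package main

    import "fmt"

    func main() {
        // The first three entries are the ones kubelet reports as applied; the
        // fourth (a documentation-range address) is hypothetical, standing in
        // for whatever extra server the host's resolv.conf actually carried.
        nameservers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
        const maxNS = 3 // glibc resolver limit that kubelet enforces
        if len(nameservers) > maxNS {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n", nameservers[:maxNS])
        }
    }
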
Dec 13 14:21:22.113951 sshd[3866]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:22.117143 systemd[1]: sshd@27-10.0.0.27:22-10.0.0.1:46772.service: Deactivated successfully. Dec 13 14:21:22.117798 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 14:21:22.118563 systemd-logind[1192]: Session 28 logged out. Waiting for processes to exit. Dec 13 14:21:22.119324 systemd-logind[1192]: Removed session 28. Dec 13 14:21:23.204817 kubelet[2033]: W1213 14:21:23.204736 2033 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a7423c_f82c_471d_a9a5_c6b248039cba.slice/cri-containerd-5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415.scope WatchSource:0}: task 5b07e6dedabbfdb5eff5e0d98db33a7b288d7358d4f4479fc4b905f77c6ab415 not found: not found