Dec 13 01:59:47.106142 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 01:59:47.106165 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:59:47.106174 kernel: BIOS-provided physical RAM map:
Dec 13 01:59:47.106179 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:59:47.106185 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:59:47.106190 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:59:47.106197 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:59:47.106203 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:59:47.106210 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:59:47.106215 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:59:47.106221 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:59:47.106226 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:59:47.106232 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:59:47.106237 kernel: NX (Execute Disable) protection: active
Dec 13 01:59:47.106245 kernel: SMBIOS 2.8 present.
Dec 13 01:59:47.106252 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:59:47.106258 kernel: Hypervisor detected: KVM
Dec 13 01:59:47.106274 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:59:47.106281 kernel: kvm-clock: cpu 0, msr 9819b001, primary cpu clock
Dec 13 01:59:47.106288 kernel: kvm-clock: using sched offset of 4491627455 cycles
Dec 13 01:59:47.106296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:59:47.106307 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:59:47.106315 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:59:47.106326 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:59:47.106333 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:59:47.106339 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:59:47.106345 kernel: Using GB pages for direct mapping
Dec 13 01:59:47.106351 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:59:47.106357 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:59:47.106364 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106372 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106381 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106392 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:59:47.106401 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106407 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106413 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106419 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:59:47.106425 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:59:47.106432 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:59:47.106438 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:59:47.106448 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:59:47.106455 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:59:47.106461 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:59:47.106468 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:59:47.106474 kernel: No NUMA configuration found
Dec 13 01:59:47.106481 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:59:47.106489 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:59:47.106497 kernel: Zone ranges:
Dec 13 01:59:47.106505 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:59:47.106513 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:59:47.106521 kernel: Normal empty
Dec 13 01:59:47.106529 kernel: Movable zone start for each node
Dec 13 01:59:47.106537 kernel: Early memory node ranges
Dec 13 01:59:47.106545 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:59:47.106552 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:59:47.106563 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:59:47.106575 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:59:47.106584 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:59:47.106592 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:59:47.106599 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:59:47.106606 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:59:47.106612 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:59:47.106619 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:59:47.106625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:59:47.106632 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:59:47.106640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:59:47.106648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:59:47.106656 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:59:47.106667 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:59:47.106675 kernel: TSC deadline timer available
Dec 13 01:59:47.106683 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:59:47.106689 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:59:47.106696 kernel: kvm-guest: setup PV sched yield
Dec 13 01:59:47.106703 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:59:47.106776 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:59:47.106783 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:59:47.106790 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:59:47.106797 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 01:59:47.106803 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 01:59:47.106810 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:59:47.106816 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 01:59:47.106823 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Dec 13 01:59:47.106829 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:59:47.106838 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:59:47.106845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:59:47.106851 kernel: Policy zone: DMA32
Dec 13 01:59:47.106859 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:59:47.106866 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:59:47.106873 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:59:47.106879 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:59:47.106886 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:59:47.106894 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Dec 13 01:59:47.106901 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:59:47.106908 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 01:59:47.106914 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 01:59:47.106921 kernel: rcu: Hierarchical RCU implementation.
Dec 13 01:59:47.106928 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:59:47.106935 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:59:47.106942 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:59:47.106948 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:59:47.106956 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:59:47.106963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:59:47.106969 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:59:47.106976 kernel: random: crng init done
Dec 13 01:59:47.106983 kernel: Console: colour VGA+ 80x25
Dec 13 01:59:47.106989 kernel: printk: console [ttyS0] enabled
Dec 13 01:59:47.106996 kernel: ACPI: Core revision 20210730
Dec 13 01:59:47.107002 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:59:47.107009 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:59:47.107017 kernel: x2apic enabled
Dec 13 01:59:47.107024 kernel: Switched APIC routing to physical x2apic.
Dec 13 01:59:47.107030 kernel: kvm-guest: setup PV IPIs
Dec 13 01:59:47.107037 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:59:47.107043 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:59:47.107055 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:59:47.107062 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:59:47.107068 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:59:47.107075 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:59:47.107088 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:59:47.107095 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:59:47.107103 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:59:47.107110 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:59:47.107117 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:59:47.107124 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:59:47.107131 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:59:47.107138 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 01:59:47.107145 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:59:47.107154 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:59:47.107161 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:59:47.107175 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:59:47.107187 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 01:59:47.107196 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:59:47.107204 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:59:47.107211 kernel: LSM: Security Framework initializing
Dec 13 01:59:47.107220 kernel: SELinux: Initializing.
Dec 13 01:59:47.107227 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:59:47.107234 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:59:47.107241 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:59:47.107248 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:59:47.107255 kernel: ... version: 0
Dec 13 01:59:47.107273 kernel: ... bit width: 48
Dec 13 01:59:47.107281 kernel: ... generic registers: 6
Dec 13 01:59:47.107290 kernel: ... value mask: 0000ffffffffffff
Dec 13 01:59:47.107300 kernel: ... max period: 00007fffffffffff
Dec 13 01:59:47.107308 kernel: ... fixed-purpose events: 0
Dec 13 01:59:47.107315 kernel: ... event mask: 000000000000003f
Dec 13 01:59:47.107322 kernel: signal: max sigframe size: 1776
Dec 13 01:59:47.107328 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:59:47.107335 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:59:47.107342 kernel: x86: Booting SMP configuration:
Dec 13 01:59:47.107349 kernel: .... node #0, CPUs: #1
Dec 13 01:59:47.107355 kernel: kvm-clock: cpu 1, msr 9819b041, secondary cpu clock
Dec 13 01:59:47.107364 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 01:59:47.107371 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Dec 13 01:59:47.107377 kernel: #2
Dec 13 01:59:47.107384 kernel: kvm-clock: cpu 2, msr 9819b081, secondary cpu clock
Dec 13 01:59:47.107391 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 01:59:47.107398 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Dec 13 01:59:47.107405 kernel: #3
Dec 13 01:59:47.107412 kernel: kvm-clock: cpu 3, msr 9819b0c1, secondary cpu clock
Dec 13 01:59:47.107418 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 01:59:47.107429 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Dec 13 01:59:47.107437 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:59:47.107444 kernel: smpboot: Max logical packages: 1
Dec 13 01:59:47.107451 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:59:47.107458 kernel: devtmpfs: initialized
Dec 13 01:59:47.107464 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:59:47.107472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:59:47.107479 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:59:47.107485 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:59:47.107492 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:59:47.107500 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:59:47.107507 kernel: audit: type=2000 audit(1734055186.191:1): state=initialized audit_enabled=0 res=1
Dec 13 01:59:47.107514 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:59:47.107521 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:59:47.107528 kernel: cpuidle: using governor menu
Dec 13 01:59:47.107535 kernel: ACPI: bus type PCI registered
Dec 13 01:59:47.107541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:59:47.107548 kernel: dca service started, version 1.12.1
Dec 13 01:59:47.107555 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:59:47.107563 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 01:59:47.107570 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:59:47.107577 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:59:47.107584 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:59:47.107591 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:59:47.107598 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:59:47.107605 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:59:47.107612 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:59:47.107618 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:59:47.107626 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 01:59:47.107633 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 01:59:47.107640 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 01:59:47.107647 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:59:47.107654 kernel: ACPI: Interpreter enabled
Dec 13 01:59:47.107661 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:59:47.107668 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:59:47.107675 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:59:47.107682 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:59:47.107690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:59:47.107873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:59:47.107979 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:59:47.108082 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:59:47.108096 kernel: PCI host bridge to bus 0000:00
Dec 13 01:59:47.108227 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:59:47.108320 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:59:47.108393 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:59:47.108463 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:59:47.108530 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:59:47.108606 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:59:47.108676 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:59:47.108813 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:59:47.108912 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:59:47.108989 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:59:47.109064 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:59:47.109138 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:59:47.109211 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:59:47.109319 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:59:47.109395 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:59:47.109482 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:59:47.109557 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:59:47.109651 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:59:47.109741 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:59:47.109815 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:59:47.109917 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:59:47.110011 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:59:47.110112 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:59:47.110224 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:59:47.110347 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:59:47.112110 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:59:47.112236 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:59:47.112336 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:59:47.112448 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:59:47.112544 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:59:47.112630 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:59:47.112812 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:59:47.112902 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:59:47.112914 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:59:47.112923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:59:47.112932 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:59:47.112944 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:59:47.112952 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:59:47.112961 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:59:47.112971 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:59:47.112980 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:59:47.112989 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:59:47.112998 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:59:47.113007 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:59:47.113016 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:59:47.113026 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:59:47.113035 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:59:47.113044 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:59:47.113053 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:59:47.113062 kernel: iommu: Default domain type: Translated
Dec 13 01:59:47.113071 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:59:47.113246 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:59:47.113389 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:59:47.116903 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:59:47.116922 kernel: vgaarb: loaded
Dec 13 01:59:47.116931 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:59:47.116940 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 01:59:47.116950 kernel: PTP clock support registered
Dec 13 01:59:47.116958 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:59:47.116967 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:59:47.116989 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:59:47.116999 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:59:47.117012 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:59:47.117021 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:59:47.117030 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:59:47.117040 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:59:47.117049 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:59:47.117058 kernel: pnp: PnP ACPI init
Dec 13 01:59:47.117196 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:59:47.117208 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:59:47.117219 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:59:47.117227 kernel: NET: Registered PF_INET protocol family
Dec 13 01:59:47.117234 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:59:47.117241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:59:47.117249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:59:47.117256 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:59:47.117273 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 01:59:47.117281 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:59:47.117288 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:59:47.117297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:59:47.117304 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:59:47.117312 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:59:47.117388 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:59:47.117455 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:59:47.117519 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:59:47.117583 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:59:47.117648 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:59:47.117732 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:59:47.117742 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:59:47.117750 kernel: Initialise system trusted keyrings
Dec 13 01:59:47.117757 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:59:47.117764 kernel: Key type asymmetric registered
Dec 13 01:59:47.117772 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:59:47.117779 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 01:59:47.117786 kernel: io scheduler mq-deadline registered
Dec 13 01:59:47.117793 kernel: io scheduler kyber registered
Dec 13 01:59:47.117804 kernel: io scheduler bfq registered
Dec 13 01:59:47.117811 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:59:47.117819 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:59:47.117827 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:59:47.117834 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:59:47.117841 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:59:47.117849 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:59:47.117856 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:59:47.117863 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:59:47.117871 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:59:47.117971 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:59:47.117982 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:59:47.118080 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:59:47.118153 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:59:46 UTC (1734055186)
Dec 13 01:59:47.118220 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:59:47.118229 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:59:47.118236 kernel: Segment Routing with IPv6
Dec 13 01:59:47.118247 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:59:47.118254 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:59:47.118271 kernel: Key type dns_resolver registered
Dec 13 01:59:47.118278 kernel: IPI shorthand broadcast: enabled
Dec 13 01:59:47.118285 kernel: sched_clock: Marking stable (477537043, 127278850)->(673867011, -69051118)
Dec 13 01:59:47.118292 kernel: registered taskstats version 1
Dec 13 01:59:47.118300 kernel: Loading compiled-in X.509 certificates
Dec 13 01:59:47.118307 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 01:59:47.118314 kernel: Key type .fscrypt registered
Dec 13 01:59:47.118323 kernel: Key type fscrypt-provisioning registered
Dec 13 01:59:47.118330 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:59:47.118337 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:59:47.118344 kernel: ima: No architecture policies found
Dec 13 01:59:47.118351 kernel: clk: Disabling unused clocks
Dec 13 01:59:47.118358 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 01:59:47.118365 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 01:59:47.118373 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 01:59:47.118380 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 01:59:47.118388 kernel: Run /init as init process
Dec 13 01:59:47.118396 kernel: with arguments:
Dec 13 01:59:47.118403 kernel: /init
Dec 13 01:59:47.118410 kernel: with environment:
Dec 13 01:59:47.118417 kernel: HOME=/
Dec 13 01:59:47.118424 kernel: TERM=linux
Dec 13 01:59:47.118431 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:59:47.118441 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 01:59:47.118452 systemd[1]: Detected virtualization kvm.
Dec 13 01:59:47.118460 systemd[1]: Detected architecture x86-64.
Dec 13 01:59:47.118468 systemd[1]: Running in initrd.
Dec 13 01:59:47.118475 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:59:47.118483 systemd[1]: Hostname set to <localhost>.
Dec 13 01:59:47.118491 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:59:47.118498 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:59:47.118506 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 01:59:47.118515 systemd[1]: Reached target cryptsetup.target.
Dec 13 01:59:47.118523 systemd[1]: Reached target paths.target.
Dec 13 01:59:47.118537 systemd[1]: Reached target slices.target.
Dec 13 01:59:47.118546 systemd[1]: Reached target swap.target.
Dec 13 01:59:47.118554 systemd[1]: Reached target timers.target.
Dec 13 01:59:47.118562 systemd[1]: Listening on iscsid.socket.
Dec 13 01:59:47.118571 systemd[1]: Listening on iscsiuio.socket.
Dec 13 01:59:47.118579 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 01:59:47.118587 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 01:59:47.118595 systemd[1]: Listening on systemd-journald.socket.
Dec 13 01:59:47.118603 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 01:59:47.118611 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 01:59:47.118619 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 01:59:47.118627 systemd[1]: Reached target sockets.target.
Dec 13 01:59:47.118636 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 01:59:47.118644 systemd[1]: Finished network-cleanup.service.
Dec 13 01:59:47.118652 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:59:47.118660 systemd[1]: Starting systemd-journald.service...
Dec 13 01:59:47.118668 systemd[1]: Starting systemd-modules-load.service...
Dec 13 01:59:47.118675 systemd[1]: Starting systemd-resolved.service...
Dec 13 01:59:47.118683 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 01:59:47.118691 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 01:59:47.118699 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:59:47.118815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 01:59:47.118824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 01:59:47.118832 kernel: audit: type=1130 audit(1734055187.112:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.118840 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 01:59:47.118853 systemd-journald[197]: Journal started
Dec 13 01:59:47.118907 systemd-journald[197]: Runtime Journal (/run/log/journal/cadc215e0f564993a885de268629bb04) is 6.0M, max 48.5M, 42.5M free.
Dec 13 01:59:47.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.089347 systemd-modules-load[198]: Inserted module 'overlay'
Dec 13 01:59:47.123537 kernel: audit: type=1130 audit(1734055187.118:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.123557 systemd[1]: Started systemd-journald.service.
Dec 13 01:59:47.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.126786 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 01:59:47.129737 kernel: audit: type=1130 audit(1734055187.124:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.136458 systemd-resolved[199]: Positive Trust Anchors:
Dec 13 01:59:47.136827 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:59:47.136488 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:59:47.136527 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 01:59:47.139624 systemd-resolved[199]: Defaulting to hostname 'linux'.
Dec 13 01:59:47.140745 systemd[1]: Started systemd-resolved.service.
Dec 13 01:59:47.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.141288 systemd[1]: Reached target nss-lookup.target.
Dec 13 01:59:47.144838 kernel: audit: type=1130 audit(1734055187.140:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.146491 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 01:59:47.148764 systemd[1]: Starting dracut-cmdline.service...
Dec 13 01:59:47.154064 kernel: audit: type=1130 audit(1734055187.146:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.154096 kernel: Bridge firewalling registered
Dec 13 01:59:47.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.154111 systemd-modules-load[198]: Inserted module 'br_netfilter'
Dec 13 01:59:47.166281 dracut-cmdline[217]: dracut-dracut-053
Dec 13 01:59:47.169344 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 01:59:47.176750 kernel: SCSI subsystem initialized
Dec 13 01:59:47.188285 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:59:47.188333 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:59:47.188345 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 01:59:47.192313 systemd-modules-load[198]: Inserted module 'dm_multipath'
Dec 13 01:59:47.193103 systemd[1]: Finished systemd-modules-load.service.
Dec 13 01:59:47.198062 kernel: audit: type=1130 audit(1734055187.192:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.197935 systemd[1]: Starting systemd-sysctl.service...
Dec 13 01:59:47.206532 systemd[1]: Finished systemd-sysctl.service.
Dec 13 01:59:47.211046 kernel: audit: type=1130 audit(1734055187.206:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.241035 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:59:47.260741 kernel: iscsi: registered transport (tcp)
Dec 13 01:59:47.281792 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:59:47.281867 kernel: QLogic iSCSI HBA Driver
Dec 13 01:59:47.312820 systemd[1]: Finished dracut-cmdline.service.
Dec 13 01:59:47.318862 kernel: audit: type=1130 audit(1734055187.313:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.315113 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 01:59:47.371784 kernel: raid6: avx2x4 gen() 20170 MB/s
Dec 13 01:59:47.388768 kernel: raid6: avx2x4 xor() 5503 MB/s
Dec 13 01:59:47.405769 kernel: raid6: avx2x2 gen() 19650 MB/s
Dec 13 01:59:47.422765 kernel: raid6: avx2x2 xor() 12821 MB/s
Dec 13 01:59:47.439782 kernel: raid6: avx2x1 gen() 16939 MB/s
Dec 13 01:59:47.456777 kernel: raid6: avx2x1 xor() 10212 MB/s
Dec 13 01:59:47.473780 kernel: raid6: sse2x4 gen() 10294 MB/s
Dec 13 01:59:47.490767 kernel: raid6: sse2x4 xor() 6503 MB/s
Dec 13 01:59:47.507759 kernel: raid6: sse2x2 gen() 15805 MB/s
Dec 13 01:59:47.524750 kernel: raid6: sse2x2 xor() 9707 MB/s
Dec 13 01:59:47.541761 kernel: raid6: sse2x1 gen() 9729 MB/s
Dec 13 01:59:47.559210 kernel: raid6: sse2x1 xor() 7495 MB/s
Dec 13 01:59:47.559327 kernel: raid6: using algorithm avx2x4 gen() 20170 MB/s
Dec 13 01:59:47.559341 kernel: raid6: .... xor() 5503 MB/s, rmw enabled
Dec 13 01:59:47.559903 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:59:47.572746 kernel: xor: automatically using best checksumming function avx
Dec 13 01:59:47.665768 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 01:59:47.674264 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 01:59:47.679194 kernel: audit: type=1130 audit(1734055187.675:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.678000 audit: BPF prog-id=7 op=LOAD
Dec 13 01:59:47.678000 audit: BPF prog-id=8 op=LOAD
Dec 13 01:59:47.679565 systemd[1]: Starting systemd-udevd.service...
Dec 13 01:59:47.693774 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Dec 13 01:59:47.698123 systemd[1]: Started systemd-udevd.service.
Dec 13 01:59:47.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.699147 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 01:59:47.712003 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Dec 13 01:59:47.738395 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 01:59:47.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.740262 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 01:59:47.780560 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 01:59:47.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:47.811377 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:59:47.837186 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:59:47.837208 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:59:47.837218 kernel: GPT:9289727 != 19775487
Dec 13 01:59:47.837226 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:59:47.837236 kernel: GPT:9289727 != 19775487
Dec 13 01:59:47.837257 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:59:47.837265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:59:47.837274 kernel: libata version 3.00 loaded.
Dec 13 01:59:47.837287 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:59:47.837295 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:59:47.837306 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:59:47.848697 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:59:47.848727 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:59:47.848827 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:59:47.848907 kernel: scsi host0: ahci
Dec 13 01:59:47.849000 kernel: scsi host1: ahci
Dec 13 01:59:47.849101 kernel: scsi host2: ahci
Dec 13 01:59:47.849195 kernel: scsi host3: ahci
Dec 13 01:59:47.849297 kernel: scsi host4: ahci
Dec 13 01:59:47.849382 kernel: scsi host5: ahci
Dec 13 01:59:47.849466 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:59:47.849476 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:59:47.849485 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:59:47.849493 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:59:47.849505 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:59:47.849513 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:59:47.854740 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450)
Dec 13 01:59:47.991290 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 01:59:48.037533 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 01:59:48.038992 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 01:59:48.045406 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 01:59:48.049444 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 01:59:48.054254 systemd[1]: Starting disk-uuid.service...
Dec 13 01:59:48.067622 disk-uuid[534]: Primary Header is updated.
Dec 13 01:59:48.067622 disk-uuid[534]: Secondary Entries is updated.
Dec 13 01:59:48.067622 disk-uuid[534]: Secondary Header is updated.
Dec 13 01:59:48.072751 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:59:48.076746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:59:48.080769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:59:48.162732 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:59:48.162817 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:59:48.162833 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:59:48.162847 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:59:48.164813 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:59:48.168614 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:59:48.168685 kernel: ata3.00: applying bridge limits
Dec 13 01:59:48.170609 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:59:48.172025 kernel: ata3.00: configured for UDMA/100
Dec 13 01:59:48.179663 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:59:48.218777 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:59:48.240068 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:59:48.240092 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:59:49.080667 disk-uuid[535]: The operation has completed successfully.
Dec 13 01:59:49.082219 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:59:49.100642 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:59:49.100760 systemd[1]: Finished disk-uuid.service.
Dec 13 01:59:49.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.116733 systemd[1]: Starting verity-setup.service...
Dec 13 01:59:49.129753 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:59:49.152824 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 01:59:49.156133 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 01:59:49.159058 systemd[1]: Finished verity-setup.service.
Dec 13 01:59:49.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.219454 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 01:59:49.220904 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 01:59:49.219660 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 01:59:49.220930 systemd[1]: Starting ignition-setup.service...
Dec 13 01:59:49.223469 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 01:59:49.235404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:59:49.235463 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:59:49.235479 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 01:59:49.245533 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:59:49.256135 systemd[1]: Finished ignition-setup.service.
Dec 13 01:59:49.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.257940 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 01:59:49.295476 ignition[655]: Ignition 2.14.0
Dec 13 01:59:49.295490 ignition[655]: Stage: fetch-offline
Dec 13 01:59:49.295593 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:59:49.295606 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:59:49.295761 ignition[655]: parsed url from cmdline: ""
Dec 13 01:59:49.295766 ignition[655]: no config URL provided
Dec 13 01:59:49.295772 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:59:49.295781 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:59:49.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.303000 audit: BPF prog-id=9 op=LOAD
Dec 13 01:59:49.300933 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 01:59:49.296504 ignition[655]: op(1): [started] loading QEMU firmware config module
Dec 13 01:59:49.304390 systemd[1]: Starting systemd-networkd.service...
Dec 13 01:59:49.296510 ignition[655]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:59:49.307774 ignition[655]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:59:49.325646 systemd-networkd[727]: lo: Link UP
Dec 13 01:59:49.325658 systemd-networkd[727]: lo: Gained carrier
Dec 13 01:59:49.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.326149 systemd-networkd[727]: Enumeration completed
Dec 13 01:59:49.326253 systemd[1]: Started systemd-networkd.service.
Dec 13 01:59:49.326362 systemd-networkd[727]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:59:49.327906 systemd-networkd[727]: eth0: Link UP
Dec 13 01:59:49.327909 systemd-networkd[727]: eth0: Gained carrier
Dec 13 01:59:49.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.328007 systemd[1]: Reached target network.target.
Dec 13 01:59:49.331000 systemd[1]: Starting iscsiuio.service...
Dec 13 01:59:49.335318 systemd[1]: Started iscsiuio.service.
Dec 13 01:59:49.340699 iscsid[732]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 01:59:49.340699 iscsid[732]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 01:59:49.340699 iscsid[732]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 01:59:49.340699 iscsid[732]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 01:59:49.340699 iscsid[732]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 01:59:49.340699 iscsid[732]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 01:59:49.340699 iscsid[732]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 01:59:49.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.337231 systemd[1]: Starting iscsid.service...
Dec 13 01:59:49.341628 systemd[1]: Started iscsid.service.
Dec 13 01:59:49.344145 systemd[1]: Starting dracut-initqueue.service...
Dec 13 01:59:49.352980 systemd[1]: Finished dracut-initqueue.service.
Dec 13 01:59:49.354594 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 01:59:49.357150 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 01:59:49.359737 systemd[1]: Reached target remote-fs.target.
Dec 13 01:59:49.361507 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 01:59:49.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.368575 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 01:59:49.384074 ignition[655]: parsing config with SHA512: 469a1b0e6f0490257d645bad76a5d9c4e53435f55337dc51b31715c5a6a4d7920965ec3901a026ef13bf9835866055d519304050f1775fec6528cbd7d2fe92ac
Dec 13 01:59:49.392872 unknown[655]: fetched base config from "system"
Dec 13 01:59:49.392887 unknown[655]: fetched user config from "qemu"
Dec 13 01:59:49.395190 ignition[655]: fetch-offline: fetch-offline passed
Dec 13 01:59:49.395262 ignition[655]: Ignition finished successfully
Dec 13 01:59:49.396905 systemd-networkd[727]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:59:49.398566 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 01:59:49.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.399629 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:59:49.400364 systemd[1]: Starting ignition-kargs.service...
Dec 13 01:59:49.410565 ignition[748]: Ignition 2.14.0
Dec 13 01:59:49.410576 ignition[748]: Stage: kargs
Dec 13 01:59:49.410667 ignition[748]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:59:49.410678 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:59:49.412047 ignition[748]: kargs: kargs passed
Dec 13 01:59:49.412107 ignition[748]: Ignition finished successfully
Dec 13 01:59:49.415023 systemd[1]: Finished ignition-kargs.service.
Dec 13 01:59:49.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.417287 systemd[1]: Starting ignition-disks.service...
Dec 13 01:59:49.426016 ignition[754]: Ignition 2.14.0
Dec 13 01:59:49.426027 ignition[754]: Stage: disks
Dec 13 01:59:49.426135 ignition[754]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:59:49.426147 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:59:49.428646 systemd[1]: Finished ignition-disks.service.
Dec 13 01:59:49.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.427556 ignition[754]: disks: disks passed
Dec 13 01:59:49.430837 systemd[1]: Reached target initrd-root-device.target.
Dec 13 01:59:49.427600 ignition[754]: Ignition finished successfully
Dec 13 01:59:49.432294 systemd[1]: Reached target local-fs-pre.target.
Dec 13 01:59:49.433251 systemd[1]: Reached target local-fs.target.
Dec 13 01:59:49.434118 systemd[1]: Reached target sysinit.target.
Dec 13 01:59:49.435570 systemd[1]: Reached target basic.target.
Dec 13 01:59:49.438475 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 01:59:49.451988 systemd-fsck[762]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 01:59:49.457952 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 01:59:49.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.461264 systemd[1]: Mounting sysroot.mount...
Dec 13 01:59:49.467745 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 01:59:49.468340 systemd[1]: Mounted sysroot.mount.
Dec 13 01:59:49.468466 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 01:59:49.471917 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 01:59:49.473780 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 01:59:49.473831 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:59:49.473857 systemd[1]: Reached target ignition-diskful.target.
Dec 13 01:59:49.482123 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 01:59:49.483814 systemd[1]: Starting initrd-setup-root.service...
Dec 13 01:59:49.490401 initrd-setup-root[772]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:59:49.494686 initrd-setup-root[780]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:59:49.498913 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:59:49.502952 initrd-setup-root[796]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:59:49.530406 systemd[1]: Finished initrd-setup-root.service.
Dec 13 01:59:49.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.531807 systemd[1]: Starting ignition-mount.service...
Dec 13 01:59:49.534743 systemd[1]: Starting sysroot-boot.service...
Dec 13 01:59:49.538762 bash[813]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 01:59:49.547384 ignition[814]: INFO : Ignition 2.14.0
Dec 13 01:59:49.547384 ignition[814]: INFO : Stage: mount
Dec 13 01:59:49.549144 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:59:49.549144 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:59:49.549144 ignition[814]: INFO : mount: mount passed
Dec 13 01:59:49.549144 ignition[814]: INFO : Ignition finished successfully
Dec 13 01:59:49.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:49.549345 systemd[1]: Finished ignition-mount.service.
Dec 13 01:59:49.558267 systemd[1]: Finished sysroot-boot.service.
Dec 13 01:59:49.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 01:59:50.166673 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 01:59:50.172730 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (823)
Dec 13 01:59:50.174988 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:59:50.175001 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:59:50.175010 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 01:59:50.178828 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 01:59:50.180524 systemd[1]: Starting ignition-files.service...
Dec 13 01:59:50.196842 ignition[843]: INFO : Ignition 2.14.0 Dec 13 01:59:50.196842 ignition[843]: INFO : Stage: files Dec 13 01:59:50.198836 ignition[843]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:59:50.198836 ignition[843]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:59:50.203028 ignition[843]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:59:50.204797 ignition[843]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:59:50.206433 ignition[843]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:59:50.209445 ignition[843]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:59:50.211275 ignition[843]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:59:50.213444 unknown[843]: wrote ssh authorized keys file for user: core Dec 13 01:59:50.214509 ignition[843]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:59:50.216338 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:59:50.218235 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:59:50.218235 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:59:50.221993 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:59:50.265118 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:59:50.337389 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:59:50.339543 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:59:50.339543 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:59:50.675214 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 01:59:50.781328 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:59:50.781328 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:59:50.785187 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:59:50.785187 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:59:50.788722 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:59:50.788722 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:59:50.788722 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:59:50.794123 ignition[843]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:59:50.794123 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:59:50.797703 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:59:50.799525 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:59:50.801339 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:59:50.803878 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:59:50.803878 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:59:50.808556 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:59:50.847869 systemd-networkd[727]: eth0: Gained IPv6LL Dec 13 01:59:51.218917 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Dec 13 01:59:51.716220 ignition[843]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:59:51.716220 ignition[843]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 01:59:51.719808 ignition[843]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:59:51.722080 ignition[843]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:59:51.722080 ignition[843]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 01:59:51.722080 ignition[843]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 01:59:51.726735 ignition[843]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:59:51.728590 ignition[843]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:59:51.728590 ignition[843]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 01:59:51.728590 ignition[843]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Dec 13 01:59:51.732904 ignition[843]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:59:51.732904 ignition[843]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:59:51.732904 ignition[843]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Dec 13 01:59:51.732904 ignition[843]: 
INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:59:51.732904 ignition[843]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:59:51.758036 ignition[843]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:59:51.759765 ignition[843]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:59:51.759765 ignition[843]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:59:51.759765 ignition[843]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:59:51.759765 ignition[843]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:59:51.759765 ignition[843]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:59:51.759765 ignition[843]: INFO : files: files passed Dec 13 01:59:51.759765 ignition[843]: INFO : Ignition finished successfully Dec 13 01:59:51.769814 systemd[1]: Finished ignition-files.service. Dec 13 01:59:51.775126 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 13 01:59:51.775156 kernel: audit: type=1130 audit(1734055191.769:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.775187 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 01:59:51.775277 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:59:51.775869 systemd[1]: Starting ignition-quench.service... Dec 13 01:59:51.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.781651 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:59:51.790504 kernel: audit: type=1130 audit(1734055191.781:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.790528 kernel: audit: type=1131 audit(1734055191.782:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.790538 kernel: audit: type=1130 audit(1734055191.790:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:51.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.781734 systemd[1]: Finished ignition-quench.service. Dec 13 01:59:51.796091 initrd-setup-root-after-ignition[868]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 01:59:51.784086 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 01:59:51.798677 initrd-setup-root-after-ignition[870]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:59:51.790618 systemd[1]: Reached target ignition-complete.target. Dec 13 01:59:51.795291 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:59:51.806657 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:59:51.814625 kernel: audit: type=1130 audit(1734055191.807:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.814650 kernel: audit: type=1131 audit(1734055191.807:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.806752 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:59:51.807913 systemd[1]: Reached target initrd-fs.target. Dec 13 01:59:51.815432 systemd[1]: Reached target initrd.target. Dec 13 01:59:51.816976 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:59:51.817765 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:59:51.831557 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 01:59:51.836066 kernel: audit: type=1130 audit(1734055191.830:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.836128 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:59:51.845554 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:59:51.846473 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:59:51.848113 systemd[1]: Stopped target timers.target. Dec 13 01:59:51.849655 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:59:51.855068 kernel: audit: type=1131 audit(1734055191.851:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:59:51.849751 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 01:59:51.851246 systemd[1]: Stopped target initrd.target. Dec 13 01:59:51.855883 systemd[1]: Stopped target basic.target. Dec 13 01:59:51.856678 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:59:51.858085 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:59:51.860399 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:59:51.861312 systemd[1]: Stopped target remote-fs.target. Dec 13 01:59:51.863694 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:59:51.864573 systemd[1]: Stopped target sysinit.target. Dec 13 01:59:51.866794 systemd[1]: Stopped target local-fs.target. Dec 13 01:59:51.867595 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:59:51.868994 systemd[1]: Stopped target swap.target. Dec 13 01:59:51.885599 kernel: audit: type=1131 audit(1734055191.871:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.871139 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:59:51.871224 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:59:51.891901 kernel: audit: type=1131 audit(1734055191.887:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.872017 systemd[1]: Stopped target cryptsetup.target. Dec 13 01:59:51.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.886531 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:59:51.886634 systemd[1]: Stopped dracut-initqueue.service. Dec 13 01:59:51.888299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:59:51.888382 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 01:59:51.892903 systemd[1]: Stopped target paths.target. Dec 13 01:59:51.894339 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:59:51.897773 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 01:59:51.898039 systemd[1]: Stopped target slices.target. Dec 13 01:59:51.898368 systemd[1]: Stopped target sockets.target. Dec 13 01:59:51.901931 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:59:51.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.901996 systemd[1]: Closed iscsid.socket. Dec 13 01:59:51.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:51.902855 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:59:51.902913 systemd[1]: Closed iscsiuio.socket. Dec 13 01:59:51.904204 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:59:51.904293 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 01:59:51.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.905519 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:59:51.905602 systemd[1]: Stopped ignition-files.service. Dec 13 01:59:51.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.919550 ignition[883]: INFO : Ignition 2.14.0 Dec 13 01:59:51.919550 ignition[883]: INFO : Stage: umount Dec 13 01:59:51.919550 ignition[883]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:59:51.919550 ignition[883]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:59:51.919550 ignition[883]: INFO : umount: umount passed Dec 13 01:59:51.919550 ignition[883]: INFO : Ignition finished successfully Dec 13 01:59:51.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.908872 systemd[1]: Stopping ignition-mount.service... Dec 13 01:59:51.910767 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:59:51.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.910874 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:59:51.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.913677 systemd[1]: Stopping sysroot-boot.service... Dec 13 01:59:51.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.916586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:59:51.916782 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 01:59:51.918810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:59:51.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.918933 systemd[1]: Stopped dracut-pre-trigger.service. 
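The preset ops earlier in the files stage (op(13) disabling "coreos-metadata.service", op(15) enabling "prepare-helm.service") follow systemd's preset mechanism: preset files are scanned top to bottom and the first matching line wins. A minimal sketch of that first-match evaluation, with illustrative preset lines rather than ones read from this system:

    #!/usr/bin/env python3
    # Sketch: first-match-wins evaluation of systemd preset rules, the
    # mechanism behind the "setting preset to enabled/disabled" ops above.
    from fnmatch import fnmatch

    PRESETS = [
        ("enable", "prepare-helm.service"),
        ("disable", "coreos-metadata.service"),
        ("disable", "*"),  # a common catch-all last rule (illustrative)
    ]

    def preset_for(unit: str) -> str:
        for action, pattern in PRESETS:
            if fnmatch(unit, pattern):
                return action
        return "enable"  # assumed default when no rule matches; see systemd.preset(5)

    if __name__ == "__main__":
        for unit in ("prepare-helm.service", "coreos-metadata.service"):
            print(unit, "->", preset_for(unit))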
Dec 13 01:59:51.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.922006 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:59:51.922100 systemd[1]: Stopped ignition-mount.service. Dec 13 01:59:51.925421 systemd[1]: Stopped target network.target. Dec 13 01:59:51.927935 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:59:51.927974 systemd[1]: Stopped ignition-disks.service. Dec 13 01:59:51.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.929565 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:59:51.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.929608 systemd[1]: Stopped ignition-kargs.service. Dec 13 01:59:51.931511 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:59:51.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.931545 systemd[1]: Stopped ignition-setup.service. Dec 13 01:59:51.933264 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:59:51.943357 systemd[1]: Stopping systemd-resolved.service... Dec 13 01:59:51.944806 systemd-networkd[727]: eth0: DHCPv6 lease lost Dec 13 01:59:51.968000 audit: BPF prog-id=9 op=UNLOAD Dec 13 01:59:51.945266 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:59:51.945342 systemd[1]: Finished initrd-cleanup.service. Dec 13 01:59:51.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.946812 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:59:51.946893 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:59:51.952065 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:59:51.952108 systemd[1]: Closed systemd-networkd.socket. Dec 13 01:59:51.953239 systemd[1]: Stopping network-cleanup.service... Dec 13 01:59:51.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.955172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:59:51.955233 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 01:59:51.958974 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:59:51.959096 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:59:51.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.983000 audit: BPF prog-id=6 op=UNLOAD Dec 13 01:59:51.961587 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:59:51.961654 systemd[1]: Stopped systemd-modules-load.service. 
Dec 13 01:59:51.962621 systemd[1]: Stopping systemd-udevd.service... Dec 13 01:59:51.971012 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:59:51.971683 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:59:51.971814 systemd[1]: Stopped systemd-resolved.service. Dec 13 01:59:51.978302 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:59:51.978471 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:59:51.980494 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:59:51.980602 systemd[1]: Stopped network-cleanup.service. Dec 13 01:59:51.987824 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:59:51.988821 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:59:52.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.993993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:59:51.994965 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:59:51.998844 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:59:52.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.000947 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 01:59:52.004640 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:59:52.004733 systemd[1]: Stopped dracut-cmdline.service. Dec 13 01:59:52.009119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:59:52.009196 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:59:52.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.018321 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:59:52.020581 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:59:52.020652 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 01:59:52.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.025267 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:59:52.027045 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:59:52.028509 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 01:59:52.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.155688 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:59:52.155856 systemd[1]: Stopped sysroot-boot.service. 
Dec 13 01:59:52.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.158690 systemd[1]: Reached target initrd-switch-root.target. Dec 13 01:59:52.160646 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:59:52.161773 systemd[1]: Stopped initrd-setup-root.service. Dec 13 01:59:52.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:52.164894 systemd[1]: Starting initrd-switch-root.service... Dec 13 01:59:52.172849 systemd[1]: Switching root. Dec 13 01:59:52.176000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:59:52.177000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:59:52.177000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:59:52.177000 audit: BPF prog-id=8 op=UNLOAD Dec 13 01:59:52.177000 audit: BPF prog-id=7 op=UNLOAD Dec 13 01:59:52.197075 iscsid[732]: iscsid shutting down. Dec 13 01:59:52.198022 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Dec 13 01:59:52.198070 systemd-journald[197]: Journal stopped Dec 13 01:59:55.451011 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 01:59:55.451072 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 01:59:55.451084 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:59:55.451094 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:59:55.451110 kernel: SELinux: policy capability open_perms=1 Dec 13 01:59:55.451120 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:59:55.451135 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:59:55.451147 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:59:55.451160 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:59:55.451170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:59:55.451179 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:59:55.451190 systemd[1]: Successfully loaded SELinux policy in 61.084ms. Dec 13 01:59:55.451204 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.099ms. Dec 13 01:59:55.451216 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:59:55.451227 systemd[1]: Detected virtualization kvm. Dec 13 01:59:55.451238 systemd[1]: Detected architecture x86-64. Dec 13 01:59:55.451249 systemd[1]: Detected first boot. Dec 13 01:59:55.451261 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:59:55.451273 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 01:59:55.451283 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:59:55.451294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:59:55.451309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
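The locksmithd.service warnings just above flag cgroup-v1-era directives; systemd's own message names the replacements (CPUWeight=, MemoryMax=). A sketch of generating a drop-in that overrides them, where the numeric values are placeholders, not values taken from this system:

    #!/usr/bin/env python3
    # Sketch: write a drop-in clearing the deprecated directives flagged
    # above and setting their suggested replacements. Only the directive
    # names come from systemd's warning; the values are placeholders.
    from pathlib import Path

    DROPIN = Path("/etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf")

    def write_dropin() -> None:
        DROPIN.parent.mkdir(parents=True, exist_ok=True)
        DROPIN.write_text(
            "[Service]\n"
            "CPUShares=\n"       # empty assignment resets the old setting
            "MemoryLimit=\n"
            "CPUWeight=100\n"    # placeholder value
            "MemoryMax=512M\n"   # placeholder value
        )

    if __name__ == "__main__":
        write_dropin()  # then run: systemctl daemon-reload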
Dec 13 01:59:55.451320 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:59:55.451332 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:59:55.451344 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 01:59:55.451356 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 01:59:55.451366 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 01:59:55.451376 systemd[1]: Created slice system-getty.slice. Dec 13 01:59:55.451387 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:59:55.451398 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:59:55.451408 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:59:55.451419 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:59:55.451429 systemd[1]: Created slice user.slice. Dec 13 01:59:55.451444 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:59:55.451456 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:59:55.451468 systemd[1]: Set up automount boot.automount. Dec 13 01:59:55.451478 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 01:59:55.451489 systemd[1]: Reached target integritysetup.target. Dec 13 01:59:55.451499 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:59:55.451509 systemd[1]: Reached target remote-fs.target. Dec 13 01:59:55.451520 systemd[1]: Reached target slices.target. Dec 13 01:59:55.451530 systemd[1]: Reached target swap.target. Dec 13 01:59:55.451542 systemd[1]: Reached target torcx.target. Dec 13 01:59:55.451552 systemd[1]: Reached target veritysetup.target. Dec 13 01:59:55.451563 systemd[1]: Listening on systemd-coredump.socket. Dec 13 01:59:55.451574 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:59:55.451585 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:59:55.451596 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:59:55.451606 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:59:55.451616 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:59:55.451626 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:59:55.451636 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:59:55.451648 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:59:55.451659 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:59:55.451669 systemd[1]: Mounting dev-mqueue.mount... Dec 13 01:59:55.451679 systemd[1]: Mounting media.mount... Dec 13 01:59:55.451690 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:55.451700 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:59:55.451731 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:59:55.451742 systemd[1]: Mounting tmp.mount... Dec 13 01:59:55.451753 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:59:55.451765 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:59:55.451775 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:59:55.451785 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:59:55.451795 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:59:55.451806 systemd[1]: Starting modprobe@drm.service... Dec 13 01:59:55.451818 systemd[1]: Starting modprobe@efi_pstore.service... 
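The modprobe@<name>.service starts around this point are instances of one template unit, each loading the single kernel module named by the instance. Roughly what each instance amounts to, sketched in Python (the template's exact modprobe flags are not shown in this log):

    #!/usr/bin/env python3
    # Sketch: approximate effect of the modprobe@<name>.service instances
    # started here. The unit's exact ExecStart flags are not in this log.
    import subprocess

    MODULES = ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]

    for mod in MODULES:
        # Failures are tolerated; boot continues even if a module is absent.
        subprocess.run(["modprobe", mod], check=False)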
Dec 13 01:59:55.451828 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:59:55.451838 systemd[1]: Starting modprobe@loop.service... Dec 13 01:59:55.451849 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:59:55.451861 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:59:55.451871 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:59:55.451882 systemd[1]: Starting systemd-journald.service... Dec 13 01:59:55.451892 kernel: fuse: init (API version 7.34) Dec 13 01:59:55.451901 kernel: loop: module loaded Dec 13 01:59:55.451911 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:59:55.451922 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:59:55.451932 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:59:55.451943 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:59:55.451956 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:55.451966 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:59:55.451976 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:59:55.451987 systemd[1]: Mounted media.mount. Dec 13 01:59:55.451997 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:59:55.452010 systemd-journald[1028]: Journal started Dec 13 01:59:55.452057 systemd-journald[1028]: Runtime Journal (/run/log/journal/cadc215e0f564993a885de268629bb04) is 6.0M, max 48.5M, 42.5M free. Dec 13 01:59:55.365000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:59:55.365000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 01:59:55.449000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:59:55.449000 audit[1028]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc538cbe10 a2=4000 a3=7ffc538cbeac items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:55.449000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:59:55.454903 systemd[1]: Started systemd-journald.service. Dec 13 01:59:55.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.455761 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:59:55.456695 systemd[1]: Mounted tmp.mount. Dec 13 01:59:55.457908 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:59:55.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.459057 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 01:59:55.459267 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:59:55.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.460479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:59:55.460738 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:59:55.461872 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:59:55.462101 systemd[1]: Finished modprobe@drm.service. Dec 13 01:59:55.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.463237 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:59:55.463449 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:59:55.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.464656 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:59:55.464805 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:59:55.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.465944 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:59:55.466303 systemd[1]: Finished modprobe@loop.service. Dec 13 01:59:55.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:55.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.467766 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:59:55.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.469210 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 01:59:55.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.470681 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:59:55.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.472239 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:59:55.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.473728 systemd[1]: Reached target network-pre.target. Dec 13 01:59:55.475953 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:59:55.477984 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:59:55.479002 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:59:55.480532 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:59:55.482662 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:59:55.483809 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:59:55.485220 systemd[1]: Starting systemd-random-seed.service... Dec 13 01:59:55.486352 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:59:55.488836 systemd-journald[1028]: Time spent on flushing to /var/log/journal/cadc215e0f564993a885de268629bb04 is 18.022ms for 1033 entries. Dec 13 01:59:55.488836 systemd-journald[1028]: System Journal (/var/log/journal/cadc215e0f564993a885de268629bb04) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:59:55.528368 systemd-journald[1028]: Received client request to flush runtime journal. Dec 13 01:59:55.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:55.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.487327 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:59:55.491252 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:59:55.495126 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:59:55.496273 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:59:55.529513 udevadm[1071]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:59:55.498846 systemd[1]: Finished systemd-random-seed.service. Dec 13 01:59:55.499937 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:59:55.505589 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:59:55.507848 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:59:55.508912 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:59:55.510887 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:59:55.517261 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:59:55.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.529332 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:59:55.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.533830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:59:55.910503 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:59:55.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.912649 systemd[1]: Starting systemd-udevd.service... Dec 13 01:59:55.929423 systemd-udevd[1080]: Using default interface naming scheme 'v252'. Dec 13 01:59:55.944604 systemd[1]: Started systemd-udevd.service. Dec 13 01:59:55.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:55.948097 systemd[1]: Starting systemd-networkd.service... Dec 13 01:59:55.956206 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:59:56.001437 systemd[1]: Found device dev-ttyS0.device. Dec 13 01:59:56.007777 systemd[1]: Started systemd-userdbd.service. Dec 13 01:59:56.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.031748 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:59:56.032248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
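Device unit names like "dev-disk-by\x2dlabel-OEM.device" above use systemd's path escaping: "/" becomes "-" and a literal "-" becomes "\x2d". A small sketch of decoding such a unit name back into a filesystem path (Python 3.9+ for removesuffix):

    #!/usr/bin/env python3
    # Sketch: decode a systemd device unit name back into a path, e.g.
    # "dev-disk-by\x2dlabel-OEM.device" -> "/dev/disk/by-label/OEM".
    import re

    def unit_to_path(unit: str) -> str:
        name = unit.removesuffix(".device")
        path = "/" + name.replace("-", "/")
        # Undo \xNN hex escapes (e.g. \x2d for a literal '-').
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), path)

    if __name__ == "__main__":
        print(unit_to_path(r"dev-disk-by\x2dlabel-OEM.device"))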
Dec 13 01:59:56.045783 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:59:56.055000 audit[1088]: AVC avc: denied { confidentiality } for pid=1088 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:59:56.055000 audit[1088]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55862ead8e70 a1=337fc a2=7fb313157bc5 a3=5 items=110 ppid=1080 pid=1088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.055000 audit: CWD cwd="/" Dec 13 01:59:56.055000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=1 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=2 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=3 name=(null) inode=13664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=4 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=5 name=(null) inode=13665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=6 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=7 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=8 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=9 name=(null) inode=13667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=10 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=11 name=(null) inode=13668 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=12 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH 
item=13 name=(null) inode=13669 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=14 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=15 name=(null) inode=13670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=16 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=17 name=(null) inode=13671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=18 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=19 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=20 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=21 name=(null) inode=13673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=22 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=23 name=(null) inode=13674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=24 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=25 name=(null) inode=13675 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=26 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=27 name=(null) inode=13676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=28 name=(null) inode=13672 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=29 name=(null) inode=13677 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=30 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=31 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=32 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=33 name=(null) inode=13679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=34 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=35 name=(null) inode=13680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=36 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=37 name=(null) inode=13681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=38 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=39 name=(null) inode=13682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=40 name=(null) inode=13678 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=41 name=(null) inode=13683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=42 name=(null) inode=13663 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=43 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=44 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=45 name=(null) inode=13685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=46 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=47 name=(null) inode=13686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=48 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=49 name=(null) inode=13687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=50 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=51 name=(null) inode=13688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=52 name=(null) inode=13684 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=53 name=(null) inode=13689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=55 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=56 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=57 name=(null) inode=13691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=58 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=59 name=(null) inode=13692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=60 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=61 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH 
item=62 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=63 name=(null) inode=13694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=64 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=65 name=(null) inode=13695 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=66 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=67 name=(null) inode=13696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=68 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=69 name=(null) inode=13697 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=70 name=(null) inode=13693 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=71 name=(null) inode=13698 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=72 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=73 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=74 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=75 name=(null) inode=13700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=76 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=77 name=(null) inode=13701 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=78 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=79 name=(null) inode=13702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=80 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=81 name=(null) inode=13703 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=82 name=(null) inode=13699 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=83 name=(null) inode=13704 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=84 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=85 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=86 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=87 name=(null) inode=13706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=88 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=89 name=(null) inode=13707 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=90 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=91 name=(null) inode=13708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=92 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=93 name=(null) inode=13709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=94 name=(null) inode=13705 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=95 name=(null) inode=13710 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=96 name=(null) inode=13690 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=97 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=98 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=99 name=(null) inode=13712 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=100 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=101 name=(null) inode=13713 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=102 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=103 name=(null) inode=13714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=104 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=105 name=(null) inode=13715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=106 name=(null) inode=13711 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=107 name=(null) inode=13716 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PATH item=109 name=(null) inode=13717 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:56.055000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:59:56.080051 systemd-networkd[1090]: lo: Link UP Dec 13 01:59:56.080066 systemd-networkd[1090]: lo: Gained carrier Dec 13 01:59:56.080599 systemd-networkd[1090]: 
Enumeration completed Dec 13 01:59:56.080776 systemd-networkd[1090]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:59:56.080812 systemd[1]: Started systemd-networkd.service. Dec 13 01:59:56.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.083204 systemd-networkd[1090]: eth0: Link UP Dec 13 01:59:56.083217 systemd-networkd[1090]: eth0: Gained carrier Dec 13 01:59:56.092792 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:59:56.095877 systemd-networkd[1090]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:59:56.096728 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:59:56.105753 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:59:56.116055 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:59:56.116274 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:59:56.152773 kernel: kvm: Nested Virtualization enabled Dec 13 01:59:56.152964 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:59:56.152993 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:59:56.153150 kernel: SVM: Virtual GIF supported Dec 13 01:59:56.171734 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:59:56.204456 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:59:56.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.207556 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:59:56.215694 lvm[1116]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:59:56.244033 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:59:56.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.245246 systemd[1]: Reached target cryptsetup.target. Dec 13 01:59:56.247651 systemd[1]: Starting lvm2-activation.service... Dec 13 01:59:56.251176 lvm[1118]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:59:56.277177 systemd[1]: Finished lvm2-activation.service. Dec 13 01:59:56.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.278344 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:59:56.279294 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:59:56.279326 systemd[1]: Reached target local-fs.target. Dec 13 01:59:56.280164 systemd[1]: Reached target machines.target. Dec 13 01:59:56.282698 systemd[1]: Starting ldconfig.service... Dec 13 01:59:56.283815 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
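The audit: PATH record flood above comes from the udev worker populating tracefs and debugfs; each record is a flat run of key=value fields (item, inode, mode, the SELinux obj label, nametype, and the cap_* capability fields). A minimal parsing sketch using only the Python standard library, with the sample record lifted verbatim from item=21 above:

    import shlex

    def parse_audit_record(line: str) -> dict:
        # shlex keeps quoted values such as proctitle="(udev-worker)" intact
        fields = {}
        for token in shlex.split(line):
            key, sep, value = token.partition("=")
            if sep:
                fields[key] = value
        return fields

    record = ('item=21 name=(null) inode=13673 dev=00:0b mode=0100640 '
              'ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 '
              'nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0')
    parsed = parse_audit_record(record)
    assert parsed["nametype"] == "CREATE"
    assert int(parsed["mode"], 8) & 0o777 == 0o640   # 0100640: regular file, rw-r-----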
Dec 13 01:59:56.283895 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:56.285459 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:59:56.287402 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:59:56.289841 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:59:56.292316 systemd[1]: Starting systemd-sysext.service... Dec 13 01:59:56.293546 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1121 (bootctl) Dec 13 01:59:56.294797 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:59:56.301967 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:59:56.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.309296 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:59:56.314461 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:59:56.314841 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:59:56.328801 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:59:56.348790 systemd-fsck[1131]: fsck.fat 4.2 (2021-01-31) Dec 13 01:59:56.348790 systemd-fsck[1131]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 01:59:56.350483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:59:56.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.353572 systemd[1]: Mounting boot.mount... Dec 13 01:59:56.577883 systemd[1]: Mounted boot.mount. Dec 13 01:59:56.590231 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:59:56.592770 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:59:56.593367 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:59:56.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.596922 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:59:56.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.610734 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:59:56.615747 (sd-sysext)[1143]: Using extensions 'kubernetes'. Dec 13 01:59:56.616208 (sd-sysext)[1143]: Merged extensions into '/usr'. Dec 13 01:59:56.636209 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:56.638202 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:59:56.639809 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.641170 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:59:56.643455 systemd[1]: Starting modprobe@efi_pstore.service... 
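The loop0 and loop1 capacity changes above are the squashfs images that (sd-sysext) then merges into /usr as the 'kubernetes' extension. Assuming the kernel reports loop device capacity in 512-byte sectors (an assumption, not stated in the log), the image size works out to about 103 MiB:

    sectors = 211296             # "detected capacity change from 0 to 211296"
    size = sectors * 512         # assuming 512-byte sectors
    print(f"{size / 2**20:.1f} MiB")   # -> 103.2 MiB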
Dec 13 01:59:56.652812 systemd[1]: Starting modprobe@loop.service... Dec 13 01:59:56.653897 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.654057 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:56.654200 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:56.656398 ldconfig[1120]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:59:56.658063 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:59:56.659669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:59:56.659910 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:59:56.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.661647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:59:56.661856 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:59:56.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.663596 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:59:56.663821 systemd[1]: Finished modprobe@loop.service. Dec 13 01:59:56.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.665621 systemd[1]: Finished ldconfig.service. Dec 13 01:59:56.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.667349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:59:56.667457 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.668726 systemd[1]: Finished systemd-sysext.service. Dec 13 01:59:56.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:56.671679 systemd[1]: Starting ensure-sysext.service... Dec 13 01:59:56.674443 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:59:56.680648 systemd[1]: Reloading. Dec 13 01:59:56.683480 systemd-tmpfiles[1158]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 01:59:56.684754 systemd-tmpfiles[1158]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:59:56.686246 systemd-tmpfiles[1158]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:59:56.730886 /usr/lib/systemd/system-generators/torcx-generator[1177]: time="2024-12-13T01:59:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:59:56.730913 /usr/lib/systemd/system-generators/torcx-generator[1177]: time="2024-12-13T01:59:56Z" level=info msg="torcx already run" Dec 13 01:59:56.815557 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:59:56.815580 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:59:56.840900 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:59:56.912627 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:59:56.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.916336 systemd[1]: Starting audit-rules.service... Dec 13 01:59:56.917769 kernel: kauditd_printk_skb: 207 callbacks suppressed Dec 13 01:59:56.917811 kernel: audit: type=1130 audit(1734055196.913:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.919333 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:59:56.921196 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:59:56.924380 systemd[1]: Starting systemd-resolved.service... Dec 13 01:59:56.926700 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:59:56.928612 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:59:56.930099 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:59:56.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.931000 audit[1240]: SYSTEM_BOOT pid=1240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.936294 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
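The two locksmithd.service warnings above name their own fixes: CPUShares= becomes CPUWeight= and MemoryLimit= becomes MemoryMax=. A hedged sketch of a unit-file rewrite; the directive renames come from the log itself, but values are deliberately left untouched here, which is only safe for MemoryLimit/MemoryMax (CPUShares and CPUWeight use different value ranges, so a real migration would also rescale):

    import re
    from pathlib import Path

    RENAMES = {
        "CPUShares": "CPUWeight",     # different scale: 2..262144 vs 1..10000
        "MemoryLimit": "MemoryMax",   # same byte-size syntax
    }

    def rewrite_unit(path: Path) -> str:
        text = path.read_text()
        for old, new in RENAMES.items():
            text = re.sub(rf"^{old}=", f"{new}=", text, flags=re.MULTILINE)
        return text

    # e.g. rewrite_unit(Path("/usr/lib/systemd/system/locksmithd.service"))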
Dec 13 01:59:56.936520 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.939106 kernel: audit: type=1130 audit(1734055196.931:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.939158 kernel: audit: type=1127 audit(1734055196.931:131): pid=1240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.937955 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:59:56.940050 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:59:56.942177 systemd[1]: Starting modprobe@loop.service... Dec 13 01:59:56.943046 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.943195 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:56.943396 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:59:56.943503 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:56.945176 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:59:56.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.946694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:59:56.946955 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:59:56.950735 kernel: audit: type=1130 audit(1734055196.946:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.951387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:59:56.951506 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:59:56.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.954735 kernel: audit: type=1130 audit(1734055196.950:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.954765 kernel: audit: type=1131 audit(1734055196.950:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:56.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.959105 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:59:56.960067 augenrules[1256]: No rules Dec 13 01:59:56.959277 systemd[1]: Finished modprobe@loop.service. Dec 13 01:59:56.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.965095 kernel: audit: type=1130 audit(1734055196.958:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.965143 kernel: audit: type=1131 audit(1734055196.958:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.965162 kernel: audit: type=1305 audit(1734055196.958:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:59:56.958000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:59:56.967125 kernel: audit: type=1300 audit(1734055196.958:137): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe496c0c60 a2=420 a3=0 items=0 ppid=1228 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.958000 audit[1256]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe496c0c60 a2=420 a3=0 items=0 ppid=1228 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.958000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:59:56.972904 systemd[1]: Finished audit-rules.service. Dec 13 01:59:56.974172 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:59:56.974300 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.975701 systemd[1]: Starting systemd-update-done.service... Dec 13 01:59:56.976981 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:59:56.982156 systemd[1]: Finished systemd-update-done.service. Dec 13 01:59:56.984445 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:56.984624 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.985816 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:59:56.987584 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:59:56.989853 systemd[1]: Starting modprobe@loop.service... Dec 13 01:59:56.991962 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
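The PROCTITLE field in the audit record above is the command line of the auditing process, hex-encoded with NUL bytes separating argv entries. Decoding it shows exactly what was run to load the (empty) rule set that augenrules reported:

    raw = ("2F7362696E2F617564697463746C002D52"
           "002F6574632F61756469742F61756469742E72756C6573")
    argv = [a.decode() for a in bytes.fromhex(raw).split(b"\x00")]
    print(argv)   # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']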
Dec 13 01:59:56.992079 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:56.992168 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:59:56.992237 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:56.993159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:59:56.993305 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:59:56.994556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:59:56.994690 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:59:56.995966 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:59:56.996154 systemd[1]: Finished modprobe@loop.service. Dec 13 01:59:56.997277 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:59:56.997367 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:59:56.999823 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:57.000043 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:59:57.001025 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:59:57.002919 systemd[1]: Starting modprobe@drm.service... Dec 13 01:59:57.004667 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:59:57.006558 systemd[1]: Starting modprobe@loop.service... Dec 13 01:59:57.007591 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:59:57.007727 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:57.009230 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:59:57.010363 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:59:57.010454 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:59:57.011444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:59:57.011586 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:59:57.012978 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:59:57.013113 systemd[1]: Finished modprobe@drm.service. Dec 13 01:59:57.014354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:59:57.014474 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:59:57.015780 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:59:57.015966 systemd[1]: Finished modprobe@loop.service. Dec 13 01:59:57.017343 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:59:57.017423 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:59:57.018529 systemd[1]: Finished ensure-sysext.service. 
Dec 13 01:59:57.024693 systemd-resolved[1235]: Positive Trust Anchors: Dec 13 01:59:57.024727 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:59:57.024755 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:59:57.032149 systemd-resolved[1235]: Defaulting to hostname 'linux'. Dec 13 01:59:57.033443 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:59:57.034492 systemd[1]: Started systemd-resolved.service. Dec 13 01:59:57.035768 systemd-timesyncd[1236]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:59:57.035797 systemd[1]: Reached target network.target. Dec 13 01:59:57.035814 systemd-timesyncd[1236]: Initial clock synchronization to Fri 2024-12-13 01:59:57.272703 UTC. Dec 13 01:59:57.036670 systemd[1]: Reached target nss-lookup.target. Dec 13 01:59:57.037656 systemd[1]: Reached target sysinit.target. Dec 13 01:59:57.038578 systemd[1]: Started motdgen.path. Dec 13 01:59:57.039367 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:59:57.040559 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:59:57.041492 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:59:57.041516 systemd[1]: Reached target paths.target. Dec 13 01:59:57.042345 systemd[1]: Reached target time-set.target. Dec 13 01:59:57.043332 systemd[1]: Started logrotate.timer. Dec 13 01:59:57.044189 systemd[1]: Started mdadm.timer. Dec 13 01:59:57.044948 systemd[1]: Reached target timers.target. Dec 13 01:59:57.046059 systemd[1]: Listening on dbus.socket. Dec 13 01:59:57.047938 systemd[1]: Starting docker.socket... Dec 13 01:59:57.049514 systemd[1]: Listening on sshd.socket. Dec 13 01:59:57.050418 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:57.050697 systemd[1]: Listening on docker.socket. Dec 13 01:59:57.051587 systemd[1]: Reached target sockets.target. Dec 13 01:59:57.052457 systemd[1]: Reached target basic.target. Dec 13 01:59:57.053404 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:59:57.053443 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:59:57.053461 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:59:57.054261 systemd[1]: Starting containerd.service... Dec 13 01:59:57.055850 systemd[1]: Starting dbus.service... Dec 13 01:59:57.057399 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:59:57.059348 systemd[1]: Starting extend-filesystems.service... Dec 13 01:59:57.060479 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:59:57.061533 systemd[1]: Starting motdgen.service... Dec 13 01:59:57.063248 systemd[1]: Starting prepare-helm.service... 
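The negative trust anchors listed by systemd-resolved above are, aside from the special-use names (home.arpa, local, test, ...), exactly the reverse-DNS zones of the RFC 1918 private ranges; they can be regenerated from the networks themselves with the standard library:

    import ipaddress

    def reverse_zones(cidr: str):
        net = ipaddress.ip_network(cidr)
        width = 8 * ((net.prefixlen + 7) // 8)   # round prefix up to an octet boundary
        for sub in net.subnets(new_prefix=width):
            octets = str(sub.network_address).split(".")[: width // 8]
            yield ".".join(reversed(octets)) + ".in-addr.arpa"

    print(list(reverse_zones("10.0.0.0/8")))     # ['10.in-addr.arpa']
    print(list(reverse_zones("172.16.0.0/12")))  # ['16.172.in-addr.arpa' ... '31.172.in-addr.arpa']
    print(list(reverse_zones("192.168.0.0/16"))) # ['168.192.in-addr.arpa']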
Dec 13 01:59:57.064872 jq[1291]: false Dec 13 01:59:57.064953 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:59:57.066940 systemd[1]: Starting sshd-keygen.service... Dec 13 01:59:57.069764 systemd[1]: Starting systemd-logind.service... Dec 13 01:59:57.070862 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:59:57.070924 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:59:57.072026 systemd[1]: Starting update-engine.service... Dec 13 01:59:57.074106 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:59:57.076632 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:59:57.076868 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:59:57.077661 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:59:57.077906 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:59:57.078014 dbus-daemon[1290]: [system] SELinux support is enabled Dec 13 01:59:57.079333 systemd[1]: Started dbus.service. Dec 13 01:59:57.082039 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:59:57.082061 systemd[1]: Reached target system-config.target. Dec 13 01:59:57.083418 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:59:57.083439 systemd[1]: Reached target user-config.target. Dec 13 01:59:57.085794 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:59:57.086004 systemd[1]: Finished motdgen.service. Dec 13 01:59:57.087048 jq[1307]: true Dec 13 01:59:57.092159 tar[1312]: linux-amd64/helm Dec 13 01:59:57.095123 jq[1319]: true Dec 13 01:59:57.100582 extend-filesystems[1292]: Found loop1 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found sr0 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda1 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda2 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda3 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found usr Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda4 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda6 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda7 Dec 13 01:59:57.101735 extend-filesystems[1292]: Found vda9 Dec 13 01:59:57.101735 extend-filesystems[1292]: Checking size of /dev/vda9 Dec 13 01:59:57.127794 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:59:57.127892 update_engine[1306]: I1213 01:59:57.119465 1306 main.cc:92] Flatcar Update Engine starting Dec 13 01:59:57.127892 update_engine[1306]: I1213 01:59:57.122599 1306 update_check_scheduler.cc:74] Next update check in 7m17s Dec 13 01:59:57.132798 env[1321]: time="2024-12-13T01:59:57.126396827Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:59:57.132962 extend-filesystems[1292]: Resized partition /dev/vda9 Dec 13 01:59:57.122564 systemd[1]: Started update-engine.service. 
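The EXT4 resize reported above (and completed by resize2fs a few lines below) grows vda9 from 553472 to 1864699 blocks; at the 4k block size resize2fs reports, that is roughly 2.1 GiB expanding to 7.1 GiB, i.e. the root filesystem growing to fill its partition on first boot:

    BLOCK = 4096   # "(4k) blocks", per the resize2fs output below
    for blocks in (553472, 1864699):
        print(f"{blocks:>8} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    #   553472 blocks = 2.11 GiB
    #  1864699 blocks = 7.11 GiB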
Dec 13 01:59:57.134847 extend-filesystems[1344]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:59:57.125122 systemd[1]: Started locksmithd.service. Dec 13 01:59:57.142759 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:59:57.155663 env[1321]: time="2024-12-13T01:59:57.155619571Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:59:57.170449 extend-filesystems[1344]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:59:57.170449 extend-filesystems[1344]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:59:57.170449 extend-filesystems[1344]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.170348963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172459932Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172513974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172857868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172885350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172905107Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.172917841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.173017548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.173270232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:59:57.183858 env[1321]: time="2024-12-13T01:59:57.173658290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:59:57.170354 systemd-logind[1301]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:59:57.185449 bash[1352]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:59:57.185527 extend-filesystems[1292]: Resized filesystem in /dev/vda9 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.173677776Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.173784436Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.173799655Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186598166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186638131Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186650174Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186682565Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186698535Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186724694Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186736766Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186749520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186761052Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186777102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188392 env[1321]: time="2024-12-13T01:59:57.186788303Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.170369 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.186800726Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.186915381Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.186993969Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187278051Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187299532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187312596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187357521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187374472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187386665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187397285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187412774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187423324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187433423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.188772 env[1321]: time="2024-12-13T01:59:57.187443442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.171018 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187455124Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187563787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187578034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187590608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187600636Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187613721Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187623620Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187642345Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:59:57.189104 env[1321]: time="2024-12-13T01:59:57.187675577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:59:57.171255 systemd[1]: Finished extend-filesystems.service. Dec 13 01:59:57.171600 systemd-logind[1301]: New seat seat0. 
Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.187861596Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.187937127Z" level=info msg="Connect containerd service" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.187966693Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.188924018Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189118473Z" level=info msg="Start subscribing containerd event" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189155001Z" level=info msg="Start recovering state" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189209944Z" level=info msg="Start event monitor" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189221817Z" level=info msg="Start snapshots syncer" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189228970Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:59:57.189352 env[1321]: time="2024-12-13T01:59:57.189236134Z" level=info msg="Start streaming server" Dec 13 01:59:57.178533 systemd[1]: Started systemd-logind.service. 
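The CRI config dump above shows SystemdCgroup:false for the runc runtime, which lines up with the "System is tainted: cgroupsv1" message earlier. On a containerd 1.6 host this knob lives in /etc/containerd/config.toml; a sketch of reading it with the standard library (tomllib needs Python 3.11+, and the key path follows containerd's documented layout, assumed here rather than taken from this host):

    import tomllib

    with open("/etc/containerd/config.toml", "rb") as f:
        cfg = tomllib.load(f)

    runc_opts = (cfg["plugins"]["io.containerd.grpc.v1.cri"]
                    ["containerd"]["runtimes"]["runc"]["options"])
    print(runc_opts.get("SystemdCgroup", False))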
Dec 13 01:59:57.191210 env[1321]: time="2024-12-13T01:59:57.189428344Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:59:57.191210 env[1321]: time="2024-12-13T01:59:57.189476104Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:59:57.191210 env[1321]: time="2024-12-13T01:59:57.189525697Z" level=info msg="containerd successfully booted in 0.065309s" Dec 13 01:59:57.185475 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:59:57.189607 systemd[1]: Started containerd.service. Dec 13 01:59:57.218378 locksmithd[1345]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:59:57.554195 tar[1312]: linux-amd64/LICENSE Dec 13 01:59:57.554195 tar[1312]: linux-amd64/README.md Dec 13 01:59:57.560946 systemd[1]: Finished prepare-helm.service. Dec 13 01:59:57.823979 systemd-networkd[1090]: eth0: Gained IPv6LL Dec 13 01:59:57.828281 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:59:57.831705 systemd[1]: Reached target network-online.target. Dec 13 01:59:57.836805 systemd[1]: Starting kubelet.service... Dec 13 01:59:57.842173 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:59:57.881911 systemd[1]: Finished sshd-keygen.service. Dec 13 01:59:57.885145 systemd[1]: Starting issuegen.service... Dec 13 01:59:57.891986 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:59:57.892294 systemd[1]: Finished issuegen.service. Dec 13 01:59:57.894887 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:59:57.901058 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:59:57.903631 systemd[1]: Started getty@tty1.service. Dec 13 01:59:57.905917 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:59:57.907219 systemd[1]: Reached target getty.target. Dec 13 01:59:58.562043 systemd[1]: Started kubelet.service. Dec 13 01:59:58.564002 systemd[1]: Reached target multi-user.target. Dec 13 01:59:58.567603 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:59:58.581208 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:59:58.581548 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:59:58.583797 systemd[1]: Startup finished in 6.265s (kernel) + 6.287s (userspace) = 12.553s. Dec 13 01:59:59.296301 kubelet[1393]: E1213 01:59:59.296180 1393 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:59:59.298278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:59:59.298443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:00:01.408090 systemd[1]: Created slice system-sshd.slice. Dec 13 02:00:01.409962 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:52964.service. Dec 13 02:00:01.497337 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 52964 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:01.500573 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:01.516624 systemd[1]: Created slice user-500.slice. Dec 13 02:00:01.518030 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:00:01.521580 systemd-logind[1301]: New session 1 of user core. 
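Earlier in these lines containerd reported serving on /run/containerd/containerd.sock (plus the ttrpc variant) and booting in about 65 ms; every image pull and sandbox operation below rides on that socket. A stdlib sketch of the most basic liveness check — it only connects, it does not speak gRPC, and opening the socket typically requires root:

# Check that containerd's socket (reported as "serving..." above) accepts
# connections. This is a raw connect, not a CRI/gRPC health check.
import socket, sys

SOCKET_PATH = "/run/containerd/containerd.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(SOCKET_PATH)
    print(f"{SOCKET_PATH}: accepting connections")
except OSError as exc:
    sys.exit(f"{SOCKET_PATH}: {exc}")
finally:
    s.close()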
Dec 13 02:00:01.544971 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:00:01.546977 systemd[1]: Starting user@500.service... Dec 13 02:00:01.552998 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:01.657100 systemd[1409]: Queued start job for default target default.target. Dec 13 02:00:01.657412 systemd[1409]: Reached target paths.target. Dec 13 02:00:01.657434 systemd[1409]: Reached target sockets.target. Dec 13 02:00:01.657450 systemd[1409]: Reached target timers.target. Dec 13 02:00:01.657464 systemd[1409]: Reached target basic.target. Dec 13 02:00:01.657532 systemd[1409]: Reached target default.target. Dec 13 02:00:01.657566 systemd[1409]: Startup finished in 94ms. Dec 13 02:00:01.658195 systemd[1]: Started user@500.service. Dec 13 02:00:01.659611 systemd[1]: Started session-1.scope. Dec 13 02:00:01.719342 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:52970.service. Dec 13 02:00:01.764294 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 52970 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:01.765578 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:01.769907 systemd-logind[1301]: New session 2 of user core. Dec 13 02:00:01.770947 systemd[1]: Started session-2.scope. Dec 13 02:00:01.827719 sshd[1418]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:01.830533 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:52978.service. Dec 13 02:00:01.831201 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:52970.service: Deactivated successfully. Dec 13 02:00:01.832214 systemd-logind[1301]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:00:01.832284 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:00:01.833564 systemd-logind[1301]: Removed session 2. Dec 13 02:00:01.868064 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 52978 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:01.869292 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:01.872913 systemd-logind[1301]: New session 3 of user core. Dec 13 02:00:01.873752 systemd[1]: Started session-3.scope. Dec 13 02:00:01.925311 sshd[1424]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:01.927935 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:52994.service. Dec 13 02:00:01.928429 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:52978.service: Deactivated successfully. Dec 13 02:00:01.929284 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:00:01.929314 systemd-logind[1301]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:00:01.930277 systemd-logind[1301]: Removed session 3. Dec 13 02:00:01.965324 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 52994 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:01.966961 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:01.971278 systemd-logind[1301]: New session 4 of user core. Dec 13 02:00:01.972099 systemd[1]: Started session-4.scope. Dec 13 02:00:02.033108 sshd[1430]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:02.036251 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:53006.service. Dec 13 02:00:02.036786 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:52994.service: Deactivated successfully. Dec 13 02:00:02.037991 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. 
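The "SHA256:x3bGe46…" strings in the sshd lines above are OpenSSH key fingerprints: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing '=' padding stripped. A small sketch that reproduces the format from any authorized_keys-style line (the example line in the comment is illustrative, not the key from this log):

# Compute an OpenSSH-style SHA256 fingerprint from a public key line,
# e.g. a line out of ~/.ssh/authorized_keys ("ssh-ed25519 AAAA... comment").
import base64, hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Illustrative usage (not this host's key material):
# print(ssh_fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA... core@host"))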
Dec 13 02:00:02.038047 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:00:02.039345 systemd-logind[1301]: Removed session 4. Dec 13 02:00:02.080776 sshd[1437]: Accepted publickey for core from 10.0.0.1 port 53006 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:02.082092 sshd[1437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:02.086974 systemd-logind[1301]: New session 5 of user core. Dec 13 02:00:02.087984 systemd[1]: Started session-5.scope. Dec 13 02:00:02.203059 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:00:02.203381 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:00:02.245817 systemd[1]: Starting docker.service... Dec 13 02:00:02.317367 env[1455]: time="2024-12-13T02:00:02.317301301Z" level=info msg="Starting up" Dec 13 02:00:02.318841 env[1455]: time="2024-12-13T02:00:02.318808430Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:00:02.318841 env[1455]: time="2024-12-13T02:00:02.318827490Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:00:02.318841 env[1455]: time="2024-12-13T02:00:02.318853899Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:00:02.318841 env[1455]: time="2024-12-13T02:00:02.318864000Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:00:02.324168 env[1455]: time="2024-12-13T02:00:02.324115421Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:00:02.324168 env[1455]: time="2024-12-13T02:00:02.324155214Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:00:02.324250 env[1455]: time="2024-12-13T02:00:02.324177139Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:00:02.324250 env[1455]: time="2024-12-13T02:00:02.324186873Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:00:02.329523 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3316247698-merged.mount: Deactivated successfully. Dec 13 02:00:03.017792 env[1455]: time="2024-12-13T02:00:03.017695236Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 02:00:03.017792 env[1455]: time="2024-12-13T02:00:03.017768793Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 02:00:03.018021 env[1455]: time="2024-12-13T02:00:03.017972326Z" level=info msg="Loading containers: start." Dec 13 02:00:03.153735 kernel: Initializing XFRM netlink socket Dec 13 02:00:03.185349 env[1455]: time="2024-12-13T02:00:03.185302989Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:00:03.238084 systemd-networkd[1090]: docker0: Link UP Dec 13 02:00:03.256409 env[1455]: time="2024-12-13T02:00:03.256364299Z" level=info msg="Loading containers: done." 
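The grpc lines above show dockerd dialing its bundled containerd over a unix socket, and (as the next lines show) startup finishes with the daemon's own API listening on /run/docker.sock. A stdlib-only sketch of talking to that API directly, sending one HTTP request over the unix socket — GET /version is a standard Docker Engine API endpoint, and opening the socket requires root or docker-group membership:

# Query the Docker daemon's /version endpoint over its unix socket using
# nothing but the standard library.
import http.client, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())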
Dec 13 02:00:03.273268 env[1455]: time="2024-12-13T02:00:03.273129784Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:00:03.273467 env[1455]: time="2024-12-13T02:00:03.273321153Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:00:03.273467 env[1455]: time="2024-12-13T02:00:03.273409295Z" level=info msg="Daemon has completed initialization" Dec 13 02:00:03.295384 systemd[1]: Started docker.service. Dec 13 02:00:03.298986 env[1455]: time="2024-12-13T02:00:03.298935347Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:00:04.737211 env[1321]: time="2024-12-13T02:00:04.737126833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:00:05.477968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925954651.mount: Deactivated successfully. Dec 13 02:00:08.765352 env[1321]: time="2024-12-13T02:00:08.765244303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:08.767702 env[1321]: time="2024-12-13T02:00:08.767649380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:08.772775 env[1321]: time="2024-12-13T02:00:08.772702335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:08.776100 env[1321]: time="2024-12-13T02:00:08.776008701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:08.776955 env[1321]: time="2024-12-13T02:00:08.776879335Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:00:08.805786 env[1321]: time="2024-12-13T02:00:08.805734400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:00:09.549623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:00:09.549824 systemd[1]: Stopped kubelet.service. Dec 13 02:00:09.551358 systemd[1]: Starting kubelet.service... Dec 13 02:00:09.636843 systemd[1]: Started kubelet.service. Dec 13 02:00:10.091068 kubelet[1607]: E1213 02:00:10.091013 1607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:00:10.094695 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:00:10.094851 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
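The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is generated by kubeadm init/join, so these failures just mean bootstrapping has not completed. For illustration only, a sketch that writes a minimal KubeletConfiguration of the kind kubeadm would later drop in place — the field values are assumptions, not taken from this host, except staticPodPath, which matches the "/etc/kubernetes/manifests" path the kubelet logs further down:

# Illustrative only: hand-writing the config file the kubelet complained about.
# On this host, kubeadm is the component that actually generates it.
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
clusterDomain: cluster.local
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print("wrote", path)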
Dec 13 02:00:12.311645 env[1321]: time="2024-12-13T02:00:12.311443461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:12.325139 env[1321]: time="2024-12-13T02:00:12.325036413Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:12.331534 env[1321]: time="2024-12-13T02:00:12.331486602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:12.338858 env[1321]: time="2024-12-13T02:00:12.338794785Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:12.339650 env[1321]: time="2024-12-13T02:00:12.339613609Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:00:12.352866 env[1321]: time="2024-12-13T02:00:12.352818290Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:00:14.476680 env[1321]: time="2024-12-13T02:00:14.476587741Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:14.565006 env[1321]: time="2024-12-13T02:00:14.564959001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:14.631939 env[1321]: time="2024-12-13T02:00:14.631885899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:14.673408 env[1321]: time="2024-12-13T02:00:14.673362189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:14.674072 env[1321]: time="2024-12-13T02:00:14.674031664Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:00:14.683195 env[1321]: time="2024-12-13T02:00:14.683158890Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:00:18.503641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804353520.mount: Deactivated successfully. Dec 13 02:00:20.345782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:00:20.345982 systemd[1]: Stopped kubelet.service. Dec 13 02:00:20.347560 systemd[1]: Starting kubelet.service... Dec 13 02:00:20.437801 systemd[1]: Started kubelet.service. 
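kubelet.service is now on its second scheduled restart, and the cadence is systemd's Restart= logic at work: each exit is followed by a new start roughly ten seconds later. A quick sketch that recovers the spacing from the timestamps printed in this log — the ~10 s gap is consistent with a RestartSec=10 unit setting, which is an inference, not something the log states:

# Measure the exit-to-restart gaps visible in this log. The timestamps are the
# ones printed above; the unit's RestartSec value is inferred, not shown here.
from datetime import datetime

stamps = {
    "first exit":     "01:59:59.298443",
    "first restart":  "02:00:09.549623",
    "second exit":    "02:00:10.094851",
    "second restart": "02:00:20.345782",
}
t = {k: datetime.strptime(v, "%H:%M:%S.%f") for k, v in stamps.items()}
print("gap 1:", (t["first restart"] - t["first exit"]).total_seconds(), "s")
print("gap 2:", (t["second restart"] - t["second exit"]).total_seconds(), "s")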
Dec 13 02:00:20.728500 env[1321]: time="2024-12-13T02:00:20.728346869Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:20.730642 env[1321]: time="2024-12-13T02:00:20.730172720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:20.731630 env[1321]: time="2024-12-13T02:00:20.731583934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:20.733118 env[1321]: time="2024-12-13T02:00:20.733075535Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:20.733374 env[1321]: time="2024-12-13T02:00:20.733339333Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:00:20.807544 kubelet[1637]: E1213 02:00:20.807474 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:00:20.809533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:00:20.809707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:00:20.815830 env[1321]: time="2024-12-13T02:00:20.815769500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:00:22.381365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2891512898.mount: Deactivated successfully. 
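Each successful pull above reports three identifiers for the same image: the tag it was requested by, the content-addressed image ID (the sha256 the later events reference), and the repo digest that pins exactly what the registry served. Keeping them straight, with the kube-proxy values copied verbatim from the lines above — tags are mutable, digests are not:

# Three ways this log refers to one image, using the kube-proxy pull above.
image = {
    "tag":         "registry.k8s.io/kube-proxy:v1.29.12",
    "image_id":    "sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153",
    "repo_digest": "registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39",
}

# A digest-pinned reference survives tag rewrites; this is what you'd use to
# re-pull exactly the bytes recorded in this log.
print(image["repo_digest"])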
Dec 13 02:00:23.887502 env[1321]: time="2024-12-13T02:00:23.887415665Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:23.898077 env[1321]: time="2024-12-13T02:00:23.897941042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:23.901011 env[1321]: time="2024-12-13T02:00:23.900942057Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:23.903559 env[1321]: time="2024-12-13T02:00:23.903528230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:23.904774 env[1321]: time="2024-12-13T02:00:23.904731448Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:00:23.915057 env[1321]: time="2024-12-13T02:00:23.915007546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:00:24.435099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994138411.mount: Deactivated successfully. Dec 13 02:00:24.443664 env[1321]: time="2024-12-13T02:00:24.443549386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:24.446646 env[1321]: time="2024-12-13T02:00:24.445944165Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:24.448491 env[1321]: time="2024-12-13T02:00:24.448162111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:24.450116 env[1321]: time="2024-12-13T02:00:24.450011426Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:24.450750 env[1321]: time="2024-12-13T02:00:24.450680134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:00:24.466755 env[1321]: time="2024-12-13T02:00:24.466353631Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:00:25.065158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153421461.mount: Deactivated successfully. 
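The mount units above carry names like "var-lib-containerd-tmpmounts-containerd\x2dmount4153421461.mount" because systemd encodes the mount point's path into the unit name: '/' becomes '-', and a literal '-' (or other special byte) inside a path component is escaped as \xNN. A simplified sketch that undoes the encoding to recover the path (the real systemd-escape handles more corner cases):

# Recover the mount point from a systemd mount-unit name. systemd maps '/' to
# '-' and escapes literal '-' inside components as \x2d.
import re

def unit_to_path(unit: str) -> str:
    name = unit.removesuffix(".mount")
    decode = lambda s: re.sub(r"\\x([0-9a-fA-F]{2})",
                              lambda m: chr(int(m.group(1), 16)), s)
    return "/" + "/".join(decode(part) for part in name.split("-"))

print(unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount4153421461.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount4153421461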
Dec 13 02:00:27.908184 env[1321]: time="2024-12-13T02:00:27.908109899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:27.910164 env[1321]: time="2024-12-13T02:00:27.910128395Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:27.912087 env[1321]: time="2024-12-13T02:00:27.912038772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:27.914532 env[1321]: time="2024-12-13T02:00:27.914468885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:27.915416 env[1321]: time="2024-12-13T02:00:27.915374821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:00:30.176540 systemd[1]: Stopped kubelet.service. Dec 13 02:00:30.178875 systemd[1]: Starting kubelet.service... Dec 13 02:00:30.196617 systemd[1]: Reloading. Dec 13 02:00:30.270910 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2024-12-13T02:00:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:00:30.270950 /usr/lib/systemd/system-generators/torcx-generator[1769]: time="2024-12-13T02:00:30Z" level=info msg="torcx already run" Dec 13 02:00:30.525748 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:00:30.525779 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:00:30.552618 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:00:30.664951 systemd[1]: Started kubelet.service. Dec 13 02:00:30.667235 systemd[1]: Stopping kubelet.service... Dec 13 02:00:30.667645 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:00:30.667987 systemd[1]: Stopped kubelet.service. Dec 13 02:00:30.674042 systemd[1]: Starting kubelet.service... Dec 13 02:00:30.769786 systemd[1]: Started kubelet.service. Dec 13 02:00:30.821136 kubelet[1830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:00:30.821136 kubelet[1830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
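Note the version skew behind the --pod-infra-container-image deprecation warning just above: the kubelet pulled registry.k8s.io/pause:3.9 as its pod-infra image, while the CRI config dumped near the top of this log pins SandboxImage to registry.k8s.io/pause:3.6, and the sandboxes created further down do use 3.6. Since the warning says the image garbage collector will take the sandbox image from CRI, the usual cleanup is to make the two agree in containerd's config. A sketch that prints the relevant stanza — the key name matches containerd 1.6's CRI plugin, but verify it against your containerd version before merging it into /etc/containerd/config.toml:

# Print the containerd CRI stanza that would align the sandbox image with the
# pause:3.9 the kubelet pulls in this log. Merging it into config.toml and
# restarting containerd is left to the operator.
STANZA = '''\
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
'''
print(STANZA)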
Dec 13 02:00:30.821136 kubelet[1830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:00:30.821136 kubelet[1830]: I1213 02:00:30.820778 1830 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:00:31.214983 kubelet[1830]: I1213 02:00:31.214619 1830 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:00:31.214983 kubelet[1830]: I1213 02:00:31.214669 1830 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:00:31.214983 kubelet[1830]: I1213 02:00:31.214978 1830 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:00:31.234358 kubelet[1830]: E1213 02:00:31.234316 1830 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.235689 kubelet[1830]: I1213 02:00:31.235666 1830 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:00:31.243512 kubelet[1830]: I1213 02:00:31.243490 1830 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:00:31.244617 kubelet[1830]: I1213 02:00:31.244598 1830 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:00:31.244768 kubelet[1830]: I1213 02:00:31.244754 1830 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:00:31.244853 kubelet[1830]: I1213 02:00:31.244775 1830 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:00:31.244853 kubelet[1830]: I1213 02:00:31.244782 1830 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
02:00:31.244906 kubelet[1830]: I1213 02:00:31.244864 1830 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:31.244950 kubelet[1830]: I1213 02:00:31.244942 1830 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:00:31.244975 kubelet[1830]: I1213 02:00:31.244955 1830 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:00:31.244998 kubelet[1830]: I1213 02:00:31.244980 1830 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:00:31.244998 kubelet[1830]: I1213 02:00:31.244993 1830 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:00:31.245776 kubelet[1830]: W1213 02:00:31.245742 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.245866 kubelet[1830]: E1213 02:00:31.245850 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.246023 kubelet[1830]: W1213 02:00:31.245996 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.246123 kubelet[1830]: E1213 02:00:31.246106 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.246378 kubelet[1830]: I1213 02:00:31.246363 1830 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:00:31.250696 kubelet[1830]: I1213 02:00:31.250675 1830 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:00:31.251481 kubelet[1830]: W1213 02:00:31.251462 1830 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:00:31.251945 kubelet[1830]: I1213 02:00:31.251928 1830 server.go:1256] "Started kubelet" Dec 13 02:00:31.252355 kubelet[1830]: I1213 02:00:31.252291 1830 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:00:31.253195 kubelet[1830]: I1213 02:00:31.253171 1830 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:00:31.255096 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
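Every "connection refused" against 10.0.0.65:6443 above is expected during self-hosted bootstrap: the kubelet starts before the API server it talks to, and that API server will only exist once the kubelet launches it from the static-pod manifests in /etc/kubernetes/manifests. The clients simply retry — the lease controller below doubles its interval 200ms → 400ms → 800ms → 1.6s. A sketch of the same wait-for-port pattern (the backoff cap is an arbitrary choice here):

# Wait for the not-yet-running API server to accept TCP connections,
# retrying with a doubling backoff like the kubelet's lease controller.
import socket, time

def wait_for_port(host: str, port: int, initial: float = 0.2,
                  cap: float = 8.0) -> None:
    delay = initial
    while True:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                print(f"{host}:{port} is up")
                return
        except OSError:
            print(f"{host}:{port} refused, retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * 2, cap)

# wait_for_port("10.0.0.65", 6443)  # the endpoint this log keeps dialing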
Dec 13 02:00:31.258423 kubelet[1830]: I1213 02:00:31.258400 1830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:00:31.259083 kubelet[1830]: I1213 02:00:31.258738 1830 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:00:31.259083 kubelet[1830]: I1213 02:00:31.258979 1830 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:00:31.259083 kubelet[1830]: I1213 02:00:31.259032 1830 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:00:31.259452 kubelet[1830]: E1213 02:00:31.259435 1830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Dec 13 02:00:31.259649 kubelet[1830]: W1213 02:00:31.259619 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.259700 kubelet[1830]: E1213 02:00:31.259655 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.260696 kubelet[1830]: E1213 02:00:31.260636 1830 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109a0b9fad7767 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 02:00:31.251904359 +0000 UTC m=+0.476177034,LastTimestamp:2024-12-13 02:00:31.251904359 +0000 UTC m=+0.476177034,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 02:00:31.260696 kubelet[1830]: I1213 02:00:31.260637 1830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:00:31.260696 kubelet[1830]: I1213 02:00:31.260674 1830 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:00:31.260865 kubelet[1830]: I1213 02:00:31.260747 1830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:00:31.260865 kubelet[1830]: I1213 02:00:31.260842 1830 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:00:31.261608 kubelet[1830]: E1213 02:00:31.261585 1830 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:00:31.261754 kubelet[1830]: I1213 02:00:31.261739 1830 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:00:31.272341 kubelet[1830]: I1213 02:00:31.272251 1830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:00:31.273118 kubelet[1830]: I1213 02:00:31.273061 1830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:00:31.273118 kubelet[1830]: I1213 02:00:31.273082 1830 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:00:31.273118 kubelet[1830]: I1213 02:00:31.273097 1830 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:00:31.273303 kubelet[1830]: E1213 02:00:31.273136 1830 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:00:31.276904 kubelet[1830]: W1213 02:00:31.276844 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.276904 kubelet[1830]: E1213 02:00:31.276902 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:31.277928 kubelet[1830]: I1213 02:00:31.277908 1830 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:00:31.277928 kubelet[1830]: I1213 02:00:31.277926 1830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:00:31.277986 kubelet[1830]: I1213 02:00:31.277938 1830 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:31.360234 kubelet[1830]: I1213 02:00:31.360219 1830 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:31.360491 kubelet[1830]: E1213 02:00:31.360474 1830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 02:00:31.373661 kubelet[1830]: E1213 02:00:31.373639 1830 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:00:31.460085 kubelet[1830]: E1213 02:00:31.460051 1830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Dec 13 02:00:31.562175 kubelet[1830]: I1213 02:00:31.562099 1830 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:31.562593 kubelet[1830]: E1213 02:00:31.562574 1830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 02:00:31.574678 kubelet[1830]: E1213 02:00:31.574653 1830 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:00:31.597704 kubelet[1830]: I1213 02:00:31.597680 1830 
policy_none.go:49] "None policy: Start" Dec 13 02:00:31.598185 kubelet[1830]: I1213 02:00:31.598170 1830 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:00:31.598253 kubelet[1830]: I1213 02:00:31.598192 1830 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:00:31.603817 kubelet[1830]: I1213 02:00:31.603774 1830 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:00:31.603966 kubelet[1830]: I1213 02:00:31.603932 1830 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:00:31.605098 kubelet[1830]: E1213 02:00:31.605078 1830 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 02:00:31.861103 kubelet[1830]: E1213 02:00:31.860984 1830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Dec 13 02:00:31.964684 kubelet[1830]: I1213 02:00:31.964642 1830 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:31.965022 kubelet[1830]: E1213 02:00:31.964997 1830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 02:00:31.975155 kubelet[1830]: I1213 02:00:31.975126 1830 topology_manager.go:215] "Topology Admit Handler" podUID="b38cc958ea441f472fc008e2ce1474e3" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 02:00:31.975869 kubelet[1830]: I1213 02:00:31.975843 1830 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 02:00:31.976477 kubelet[1830]: I1213 02:00:31.976453 1830 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 02:00:32.063186 kubelet[1830]: I1213 02:00:32.063128 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:32.063383 kubelet[1830]: I1213 02:00:32.063252 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:32.063383 kubelet[1830]: I1213 02:00:32.063294 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:32.063383 kubelet[1830]: I1213 02:00:32.063319 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:32.063383 kubelet[1830]: I1213 02:00:32.063337 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 02:00:32.063383 kubelet[1830]: I1213 02:00:32.063355 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:32.063557 kubelet[1830]: I1213 02:00:32.063398 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:32.063557 kubelet[1830]: I1213 02:00:32.063433 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:32.063557 kubelet[1830]: I1213 02:00:32.063468 1830 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:32.280288 kubelet[1830]: E1213 02:00:32.280175 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.280878 kubelet[1830]: E1213 02:00:32.280457 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.280941 env[1321]: time="2024-12-13T02:00:32.280622211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b38cc958ea441f472fc008e2ce1474e3,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:32.281313 env[1321]: time="2024-12-13T02:00:32.281014543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:32.282391 kubelet[1830]: E1213 02:00:32.282352 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.282700 env[1321]: time="2024-12-13T02:00:32.282649926Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:32.351344 kubelet[1830]: W1213 02:00:32.351303 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.351344 kubelet[1830]: E1213 02:00:32.351343 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.356579 kubelet[1830]: W1213 02:00:32.356548 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.356579 kubelet[1830]: E1213 02:00:32.356574 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.641318 kubelet[1830]: W1213 02:00:32.641147 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.641318 kubelet[1830]: E1213 02:00:32.641223 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.661808 kubelet[1830]: E1213 02:00:32.661752 1830 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Dec 13 02:00:32.728755 kubelet[1830]: W1213 02:00:32.728638 1830 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.728755 kubelet[1830]: E1213 02:00:32.728750 1830 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Dec 13 02:00:32.769706 kubelet[1830]: I1213 02:00:32.769655 1830 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:32.770197 kubelet[1830]: E1213 02:00:32.770169 1830 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Dec 13 02:00:32.786972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565951280.mount: Deactivated successfully. 
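The "Nameserver limits exceeded" errors interleaved with the pod setup above come from glibc's hard cap of three nameservers per resolv.conf: the host lists more, so the kubelet truncates the applied line to "1.1.1.1 1.0.0.1 8.8.8.8" and warns. A quick check for the condition — the three-entry cap is glibc's compile-time MAXNS, which is background knowledge rather than something this log states:

# Flag the condition behind the kubelet's "Nameserver limits exceeded" error:
# glibc honors at most three nameserver entries; extras are silently dropped.
import pathlib

MAXNS = 3  # glibc's compile-time limit

servers = []
for line in pathlib.Path("/etc/resolv.conf").read_text().splitlines():
    parts = line.split()
    if len(parts) >= 2 and parts[0] == "nameserver":
        servers.append(parts[1])

if len(servers) > MAXNS:
    print(f"{len(servers)} nameservers configured; only {servers[:MAXNS]} apply")
else:
    print("nameserver count OK:", servers)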
Dec 13 02:00:32.791172 env[1321]: time="2024-12-13T02:00:32.791113472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.794231 env[1321]: time="2024-12-13T02:00:32.794199719Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.795188 env[1321]: time="2024-12-13T02:00:32.795155418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.796217 env[1321]: time="2024-12-13T02:00:32.796162058Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.799036 env[1321]: time="2024-12-13T02:00:32.798998553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.801321 env[1321]: time="2024-12-13T02:00:32.801287237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.802827 env[1321]: time="2024-12-13T02:00:32.802787771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.805284 env[1321]: time="2024-12-13T02:00:32.805244934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.806917 env[1321]: time="2024-12-13T02:00:32.806881169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.808406 env[1321]: time="2024-12-13T02:00:32.808340123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.809054 env[1321]: time="2024-12-13T02:00:32.809025881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.809884 env[1321]: time="2024-12-13T02:00:32.809857115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:00:32.832455 env[1321]: time="2024-12-13T02:00:32.831267861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:32.832455 env[1321]: time="2024-12-13T02:00:32.831327423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:32.832455 env[1321]: time="2024-12-13T02:00:32.831341546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:32.832455 env[1321]: time="2024-12-13T02:00:32.831674296Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2912c58bfae390b5eb1017a754ad7543451193b3347aefaaf13a4279922b1583 pid=1871 runtime=io.containerd.runc.v2 Dec 13 02:00:32.842270 env[1321]: time="2024-12-13T02:00:32.840458262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:32.842270 env[1321]: time="2024-12-13T02:00:32.840514034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:32.842270 env[1321]: time="2024-12-13T02:00:32.840540367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:32.842270 env[1321]: time="2024-12-13T02:00:32.840697120Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/caf98e86e63e1bba048f09c8344ce131cbbf66ce1d6cd4e13a09cc64b29cfb21 pid=1889 runtime=io.containerd.runc.v2 Dec 13 02:00:32.847324 env[1321]: time="2024-12-13T02:00:32.847224999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:32.847324 env[1321]: time="2024-12-13T02:00:32.847280189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:32.847324 env[1321]: time="2024-12-13T02:00:32.847296479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:32.848828 env[1321]: time="2024-12-13T02:00:32.847751599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75c8813edfeaf59211e2ea1d477a95a50874ca72d2c9ef38fab0c052eca34f86 pid=1909 runtime=io.containerd.runc.v2 Dec 13 02:00:32.895279 env[1321]: time="2024-12-13T02:00:32.894255544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b38cc958ea441f472fc008e2ce1474e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"75c8813edfeaf59211e2ea1d477a95a50874ca72d2c9ef38fab0c052eca34f86\"" Dec 13 02:00:32.896783 kubelet[1830]: E1213 02:00:32.896762 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.897400 env[1321]: time="2024-12-13T02:00:32.897366370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"caf98e86e63e1bba048f09c8344ce131cbbf66ce1d6cd4e13a09cc64b29cfb21\"" Dec 13 02:00:32.897850 kubelet[1830]: E1213 02:00:32.897807 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.900157 env[1321]: time="2024-12-13T02:00:32.900119848Z" level=info msg="CreateContainer within sandbox \"caf98e86e63e1bba048f09c8344ce131cbbf66ce1d6cd4e13a09cc64b29cfb21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:00:32.900739 env[1321]: time="2024-12-13T02:00:32.900681041Z" level=info msg="CreateContainer within sandbox \"75c8813edfeaf59211e2ea1d477a95a50874ca72d2c9ef38fab0c052eca34f86\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:00:32.907082 env[1321]: time="2024-12-13T02:00:32.907041271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2912c58bfae390b5eb1017a754ad7543451193b3347aefaaf13a4279922b1583\"" Dec 13 02:00:32.908925 kubelet[1830]: E1213 02:00:32.908798 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:32.910496 env[1321]: time="2024-12-13T02:00:32.910452019Z" level=info msg="CreateContainer within sandbox \"2912c58bfae390b5eb1017a754ad7543451193b3347aefaaf13a4279922b1583\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:00:32.925315 env[1321]: time="2024-12-13T02:00:32.925284296Z" level=info msg="CreateContainer within sandbox \"caf98e86e63e1bba048f09c8344ce131cbbf66ce1d6cd4e13a09cc64b29cfb21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ea3e209e61e26bee82f773281496e8773cc961a8682b8517341859df4f1076dd\"" Dec 13 02:00:32.925792 env[1321]: time="2024-12-13T02:00:32.925762953Z" level=info msg="StartContainer for \"ea3e209e61e26bee82f773281496e8773cc961a8682b8517341859df4f1076dd\"" Dec 13 02:00:32.926304 env[1321]: time="2024-12-13T02:00:32.926273795Z" level=info msg="CreateContainer within sandbox \"75c8813edfeaf59211e2ea1d477a95a50874ca72d2c9ef38fab0c052eca34f86\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"45c72d2c0820e940aa34b7d27d57c30d3fcdcece7a82a9e864422914f9eb9a18\"" Dec 13 02:00:32.926544 env[1321]: time="2024-12-13T02:00:32.926523077Z" level=info msg="StartContainer for \"45c72d2c0820e940aa34b7d27d57c30d3fcdcece7a82a9e864422914f9eb9a18\"" Dec 13 02:00:32.933358 env[1321]: time="2024-12-13T02:00:32.933315325Z" level=info msg="CreateContainer within sandbox \"2912c58bfae390b5eb1017a754ad7543451193b3347aefaaf13a4279922b1583\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"47b4f1bb511b22763b4574b1fe765276575f6712e20f96f2fb55ac562097821b\"" Dec 13 02:00:32.934030 env[1321]: time="2024-12-13T02:00:32.934010986Z" level=info msg="StartContainer for \"47b4f1bb511b22763b4574b1fe765276575f6712e20f96f2fb55ac562097821b\"" Dec 13 02:00:32.988213 env[1321]: time="2024-12-13T02:00:32.987807727Z" level=info msg="StartContainer for \"45c72d2c0820e940aa34b7d27d57c30d3fcdcece7a82a9e864422914f9eb9a18\" returns successfully" Dec 13 02:00:32.990738 env[1321]: time="2024-12-13T02:00:32.988426236Z" level=info msg="StartContainer for \"ea3e209e61e26bee82f773281496e8773cc961a8682b8517341859df4f1076dd\" returns successfully" Dec 13 02:00:32.994917 env[1321]: time="2024-12-13T02:00:32.994863330Z" level=info msg="StartContainer for \"47b4f1bb511b22763b4574b1fe765276575f6712e20f96f2fb55ac562097821b\" returns successfully" Dec 13 02:00:33.282480 kubelet[1830]: E1213 02:00:33.282369 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:33.285637 kubelet[1830]: E1213 02:00:33.285608 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:33.288213 kubelet[1830]: E1213 02:00:33.288187 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:34.150657 kubelet[1830]: E1213 02:00:34.150615 1830 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 02:00:34.264544 kubelet[1830]: E1213 02:00:34.264511 1830 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 02:00:34.289453 kubelet[1830]: E1213 02:00:34.289424 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:34.289581 kubelet[1830]: E1213 02:00:34.289499 1830 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:34.371846 kubelet[1830]: I1213 02:00:34.371821 1830 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:34.378115 kubelet[1830]: I1213 02:00:34.378084 1830 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 02:00:34.383258 kubelet[1830]: E1213 02:00:34.383235 1830 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 02:00:35.246825 kubelet[1830]: I1213 02:00:35.246784 1830 apiserver.go:52] "Watching apiserver" Dec 13 
02:00:35.259271 kubelet[1830]: I1213 02:00:35.259250 1830 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:00:36.375285 systemd[1]: Reloading. Dec 13 02:00:36.441496 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2024-12-13T02:00:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:00:36.441527 /usr/lib/systemd/system-generators/torcx-generator[2123]: time="2024-12-13T02:00:36Z" level=info msg="torcx already run" Dec 13 02:00:36.513442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:00:36.513462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:00:36.530571 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:00:36.613415 systemd[1]: Stopping kubelet.service... Dec 13 02:00:36.635488 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:00:36.635930 systemd[1]: Stopped kubelet.service. Dec 13 02:00:36.638180 systemd[1]: Starting kubelet.service... Dec 13 02:00:36.712258 systemd[1]: Started kubelet.service. Dec 13 02:00:36.757203 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:00:36.757203 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:00:36.757203 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:00:36.758057 kubelet[2180]: I1213 02:00:36.757284 2180 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:00:36.764356 kubelet[2180]: I1213 02:00:36.764043 2180 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:00:36.764356 kubelet[2180]: I1213 02:00:36.764088 2180 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:00:36.764356 kubelet[2180]: I1213 02:00:36.764274 2180 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:00:36.765733 kubelet[2180]: I1213 02:00:36.765685 2180 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:00:36.767537 kubelet[2180]: I1213 02:00:36.767516 2180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:00:36.778039 kubelet[2180]: I1213 02:00:36.778004 2180 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:00:36.778417 kubelet[2180]: I1213 02:00:36.778391 2180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:00:36.778660 kubelet[2180]: I1213 02:00:36.778590 2180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:00:36.778660 kubelet[2180]: I1213 02:00:36.778624 2180 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:00:36.778660 kubelet[2180]: I1213 02:00:36.778632 2180 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:00:36.778660 kubelet[2180]: I1213 02:00:36.778659 2180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:36.778879 kubelet[2180]: I1213 02:00:36.778765 2180 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:00:36.778879 kubelet[2180]: I1213 02:00:36.778777 2180 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:00:36.778879 kubelet[2180]: I1213 02:00:36.778796 2180 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:00:36.782562 kubelet[2180]: I1213 02:00:36.782534 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:00:36.784106 kubelet[2180]: I1213 02:00:36.784087 2180 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:00:36.784318 sudo[2195]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:00:36.784537 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:00:36.785243 kubelet[2180]: I1213 02:00:36.785030 2180 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:00:36.792530 kubelet[2180]: I1213 02:00:36.791705 2180 server.go:1256] "Started kubelet" Dec 13 02:00:36.793738 kubelet[2180]: I1213 02:00:36.793639 2180 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:00:36.796914 kubelet[2180]: I1213 02:00:36.796847 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 
burstTokens=10 Dec 13 02:00:36.797734 kubelet[2180]: I1213 02:00:36.797632 2180 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:00:36.798327 kubelet[2180]: I1213 02:00:36.798288 2180 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:00:36.799220 kubelet[2180]: I1213 02:00:36.799196 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:00:36.804141 kubelet[2180]: I1213 02:00:36.801339 2180 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:00:36.804141 kubelet[2180]: I1213 02:00:36.801411 2180 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:00:36.804141 kubelet[2180]: I1213 02:00:36.801512 2180 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:00:36.804141 kubelet[2180]: E1213 02:00:36.803048 2180 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:00:36.804141 kubelet[2180]: I1213 02:00:36.803283 2180 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:00:36.806805 kubelet[2180]: I1213 02:00:36.806012 2180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:00:36.807349 kubelet[2180]: I1213 02:00:36.807172 2180 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:00:36.808229 kubelet[2180]: I1213 02:00:36.807870 2180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:00:36.809135 kubelet[2180]: I1213 02:00:36.809083 2180 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:00:36.809135 kubelet[2180]: I1213 02:00:36.809101 2180 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:00:36.809135 kubelet[2180]: I1213 02:00:36.809116 2180 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:00:36.809366 kubelet[2180]: E1213 02:00:36.809155 2180 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:00:36.849519 kubelet[2180]: I1213 02:00:36.849476 2180 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:00:36.849519 kubelet[2180]: I1213 02:00:36.849511 2180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:00:36.849519 kubelet[2180]: I1213 02:00:36.849526 2180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:00:36.849794 kubelet[2180]: I1213 02:00:36.849657 2180 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:00:36.849794 kubelet[2180]: I1213 02:00:36.849675 2180 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:00:36.849794 kubelet[2180]: I1213 02:00:36.849680 2180 policy_none.go:49] "None policy: Start" Dec 13 02:00:36.850239 kubelet[2180]: I1213 02:00:36.850220 2180 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:00:36.850239 kubelet[2180]: I1213 02:00:36.850239 2180 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:00:36.850403 kubelet[2180]: I1213 02:00:36.850385 2180 state_mem.go:75] "Updated machine memory state" Dec 13 02:00:36.851829 kubelet[2180]: I1213 02:00:36.851813 2180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:00:36.852447 kubelet[2180]: I1213 02:00:36.852428 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:00:36.909813 kubelet[2180]: I1213 02:00:36.909678 2180 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 02:00:36.911311 kubelet[2180]: I1213 02:00:36.911291 2180 topology_manager.go:215] "Topology Admit Handler" podUID="b38cc958ea441f472fc008e2ce1474e3" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 02:00:36.911533 kubelet[2180]: I1213 02:00:36.911384 2180 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 02:00:36.959335 kubelet[2180]: I1213 02:00:36.959301 2180 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 02:00:36.965780 kubelet[2180]: I1213 02:00:36.965750 2180 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 02:00:36.965911 kubelet[2180]: I1213 02:00:36.965893 2180 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 02:00:37.002666 kubelet[2180]: I1213 02:00:37.002626 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:37.002666 kubelet[2180]: I1213 02:00:37.002672 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.002666 kubelet[2180]: I1213 02:00:37.002695 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 02:00:37.002955 kubelet[2180]: I1213 02:00:37.002725 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:37.002955 kubelet[2180]: I1213 02:00:37.002746 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b38cc958ea441f472fc008e2ce1474e3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b38cc958ea441f472fc008e2ce1474e3\") " pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:37.002955 kubelet[2180]: I1213 02:00:37.002778 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.002955 kubelet[2180]: I1213 02:00:37.002798 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.002955 kubelet[2180]: I1213 02:00:37.002818 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.003132 kubelet[2180]: I1213 02:00:37.002840 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.220642 kubelet[2180]: E1213 02:00:37.220505 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.223524 kubelet[2180]: E1213 02:00:37.223502 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.223695 kubelet[2180]: E1213 02:00:37.223615 
2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.329855 sudo[2195]: pam_unix(sudo:session): session closed for user root Dec 13 02:00:37.783594 kubelet[2180]: I1213 02:00:37.783542 2180 apiserver.go:52] "Watching apiserver" Dec 13 02:00:37.801978 kubelet[2180]: I1213 02:00:37.801948 2180 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:00:37.827551 kubelet[2180]: E1213 02:00:37.827520 2180 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 02:00:37.828218 kubelet[2180]: E1213 02:00:37.828198 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.828773 kubelet[2180]: E1213 02:00:37.828739 2180 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 02:00:37.829009 kubelet[2180]: E1213 02:00:37.828989 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.829074 kubelet[2180]: E1213 02:00:37.829056 2180 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 02:00:37.829440 kubelet[2180]: E1213 02:00:37.829421 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:37.845475 kubelet[2180]: I1213 02:00:37.845433 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.845389811 podStartE2EDuration="1.845389811s" podCreationTimestamp="2024-12-13 02:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:37.838292243 +0000 UTC m=+1.121962141" watchObservedRunningTime="2024-12-13 02:00:37.845389811 +0000 UTC m=+1.129059709" Dec 13 02:00:37.850874 kubelet[2180]: I1213 02:00:37.850807 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8507614879999998 podStartE2EDuration="1.850761488s" podCreationTimestamp="2024-12-13 02:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:37.845543507 +0000 UTC m=+1.129213425" watchObservedRunningTime="2024-12-13 02:00:37.850761488 +0000 UTC m=+1.134431416" Dec 13 02:00:37.857646 kubelet[2180]: I1213 02:00:37.857611 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.857587071 podStartE2EDuration="1.857587071s" podCreationTimestamp="2024-12-13 02:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:37.850952007 +0000 UTC m=+1.134621905" watchObservedRunningTime="2024-12-13 
02:00:37.857587071 +0000 UTC m=+1.141256969" Dec 13 02:00:38.824193 kubelet[2180]: E1213 02:00:38.824162 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:38.824649 kubelet[2180]: E1213 02:00:38.824269 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:38.824649 kubelet[2180]: E1213 02:00:38.824321 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:39.795262 sudo[1443]: pam_unix(sudo:session): session closed for user root Dec 13 02:00:39.796801 sshd[1437]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:39.799209 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:53006.service: Deactivated successfully. Dec 13 02:00:39.800331 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:00:39.800351 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:00:39.801529 systemd-logind[1301]: Removed session 5. Dec 13 02:00:39.826564 kubelet[2180]: E1213 02:00:39.826513 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:42.031234 update_engine[1306]: I1213 02:00:42.031095 1306 update_attempter.cc:509] Updating boot flags... Dec 13 02:00:47.242541 kubelet[2180]: E1213 02:00:47.242504 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:47.287603 kubelet[2180]: E1213 02:00:47.287573 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:49.172878 kubelet[2180]: I1213 02:00:49.172832 2180 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:00:49.173320 env[1321]: time="2024-12-13T02:00:49.173242994Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
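The pair of kubelet records just above show the node picking up its pod CIDR (192.168.0.0/24) and pushing it to the container runtime over CRI. As a minimal standalone illustration of what that value describes (plain Go stdlib, not kubelet code; the variable name podCIDR is ours), parsing the same CIDR gives the size of the per-node pod address range:

// Minimal sketch: validate the pod CIDR from the log and report its size.
package main

import (
	"fmt"
	"net"
)

func main() {
	podCIDR := "192.168.0.0/24" // value taken from the kubelet record above

	ip, ipnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// Total addresses in the range; pods are allocated out of this block.
	fmt.Printf("network=%s base=%s addresses=%d\n", ipnet, ip, 1<<(bits-ones))
}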
Dec 13 02:00:49.173529 kubelet[2180]: I1213 02:00:49.173433 2180 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:00:49.414767 kubelet[2180]: I1213 02:00:49.414687 2180 topology_manager.go:215] "Topology Admit Handler" podUID="fa78280f-0822-41cf-8244-baf7faff5496" podNamespace="kube-system" podName="kube-proxy-98cfh" Dec 13 02:00:49.419548 kubelet[2180]: I1213 02:00:49.419499 2180 topology_manager.go:215] "Topology Admit Handler" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" podNamespace="kube-system" podName="cilium-rxm8x" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498103 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa78280f-0822-41cf-8244-baf7faff5496-kube-proxy\") pod \"kube-proxy-98cfh\" (UID: \"fa78280f-0822-41cf-8244-baf7faff5496\") " pod="kube-system/kube-proxy-98cfh" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498151 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-bpf-maps\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498169 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cni-path\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498185 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-etc-cni-netd\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498202 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-xtables-lock\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.498228 kubelet[2180]: I1213 02:00:49.498220 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-lib-modules\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.498499 kubelet[2180]: I1213 02:00:49.498277 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-net\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500048 kubelet[2180]: I1213 02:00:49.499994 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa78280f-0822-41cf-8244-baf7faff5496-lib-modules\") pod \"kube-proxy-98cfh\" (UID: \"fa78280f-0822-41cf-8244-baf7faff5496\") " pod="kube-system/kube-proxy-98cfh" Dec 
13 02:00:49.500117 kubelet[2180]: I1213 02:00:49.500067 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-config-path\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500117 kubelet[2180]: I1213 02:00:49.500097 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-clustermesh-secrets\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500117 kubelet[2180]: I1213 02:00:49.500117 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hubble-tls\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500194 kubelet[2180]: I1213 02:00:49.500135 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzh44\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500194 kubelet[2180]: I1213 02:00:49.500156 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa78280f-0822-41cf-8244-baf7faff5496-xtables-lock\") pod \"kube-proxy-98cfh\" (UID: \"fa78280f-0822-41cf-8244-baf7faff5496\") " pod="kube-system/kube-proxy-98cfh" Dec 13 02:00:49.500194 kubelet[2180]: I1213 02:00:49.500172 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-run\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500194 kubelet[2180]: I1213 02:00:49.500189 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hostproc\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500290 kubelet[2180]: I1213 02:00:49.500208 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89cw\" (UniqueName: \"kubernetes.io/projected/fa78280f-0822-41cf-8244-baf7faff5496-kube-api-access-s89cw\") pod \"kube-proxy-98cfh\" (UID: \"fa78280f-0822-41cf-8244-baf7faff5496\") " pod="kube-system/kube-proxy-98cfh" Dec 13 02:00:49.500290 kubelet[2180]: I1213 02:00:49.500263 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-cgroup\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.500340 kubelet[2180]: I1213 02:00:49.500312 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-kernel\") pod \"cilium-rxm8x\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") " pod="kube-system/cilium-rxm8x" Dec 13 02:00:49.619580 kubelet[2180]: E1213 02:00:49.619513 2180 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 02:00:49.619580 kubelet[2180]: E1213 02:00:49.619553 2180 projected.go:200] Error preparing data for projected volume kube-api-access-s89cw for pod kube-system/kube-proxy-98cfh: configmap "kube-root-ca.crt" not found Dec 13 02:00:49.619825 kubelet[2180]: E1213 02:00:49.619624 2180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa78280f-0822-41cf-8244-baf7faff5496-kube-api-access-s89cw podName:fa78280f-0822-41cf-8244-baf7faff5496 nodeName:}" failed. No retries permitted until 2024-12-13 02:00:50.119600552 +0000 UTC m=+13.403270450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s89cw" (UniqueName: "kubernetes.io/projected/fa78280f-0822-41cf-8244-baf7faff5496-kube-api-access-s89cw") pod "kube-proxy-98cfh" (UID: "fa78280f-0822-41cf-8244-baf7faff5496") : configmap "kube-root-ca.crt" not found Dec 13 02:00:49.621265 kubelet[2180]: E1213 02:00:49.620996 2180 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 02:00:49.621265 kubelet[2180]: E1213 02:00:49.621049 2180 projected.go:200] Error preparing data for projected volume kube-api-access-xzh44 for pod kube-system/cilium-rxm8x: configmap "kube-root-ca.crt" not found Dec 13 02:00:49.621265 kubelet[2180]: E1213 02:00:49.621145 2180 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44 podName:fad185a2-169d-4ef1-9d53-d8b47e3c30b8 nodeName:}" failed. No retries permitted until 2024-12-13 02:00:50.121118183 +0000 UTC m=+13.404788081 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xzh44" (UniqueName: "kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44") pod "cilium-rxm8x" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8") : configmap "kube-root-ca.crt" not found Dec 13 02:00:49.700464 kubelet[2180]: E1213 02:00:49.700384 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.259021 kubelet[2180]: I1213 02:00:50.258958 2180 topology_manager.go:215] "Topology Admit Handler" podUID="f4b1a1d3-d835-498a-8cc7-e5511e294ad1" podNamespace="kube-system" podName="cilium-operator-5cc964979-td5p4" Dec 13 02:00:50.313744 kubelet[2180]: I1213 02:00:50.313696 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g7f4\" (UniqueName: \"kubernetes.io/projected/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-kube-api-access-2g7f4\") pod \"cilium-operator-5cc964979-td5p4\" (UID: \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\") " pod="kube-system/cilium-operator-5cc964979-td5p4" Dec 13 02:00:50.313744 kubelet[2180]: I1213 02:00:50.313750 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-cilium-config-path\") pod \"cilium-operator-5cc964979-td5p4\" (UID: \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\") " pod="kube-system/cilium-operator-5cc964979-td5p4" Dec 13 02:00:50.318901 kubelet[2180]: E1213 02:00:50.318858 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.319362 env[1321]: time="2024-12-13T02:00:50.319316684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98cfh,Uid:fa78280f-0822-41cf-8244-baf7faff5496,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:50.323993 kubelet[2180]: E1213 02:00:50.323964 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.324640 env[1321]: time="2024-12-13T02:00:50.324326215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxm8x,Uid:fad185a2-169d-4ef1-9d53-d8b47e3c30b8,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:50.341399 env[1321]: time="2024-12-13T02:00:50.341326518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:50.341574 env[1321]: time="2024-12-13T02:00:50.341373997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:50.341574 env[1321]: time="2024-12-13T02:00:50.341388256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:50.341769 env[1321]: time="2024-12-13T02:00:50.341642684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e716511651ed613acb180d0b3571fa08a426b4385e2ea8996a67df609199d6c pid=2288 runtime=io.containerd.runc.v2 Dec 13 02:00:50.349343 env[1321]: time="2024-12-13T02:00:50.349273323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:50.349680 env[1321]: time="2024-12-13T02:00:50.349320590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:50.349680 env[1321]: time="2024-12-13T02:00:50.349331693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:50.349680 env[1321]: time="2024-12-13T02:00:50.349596603Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493 pid=2307 runtime=io.containerd.runc.v2 Dec 13 02:00:50.384018 env[1321]: time="2024-12-13T02:00:50.383965092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98cfh,Uid:fa78280f-0822-41cf-8244-baf7faff5496,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e716511651ed613acb180d0b3571fa08a426b4385e2ea8996a67df609199d6c\"" Dec 13 02:00:50.384188 env[1321]: time="2024-12-13T02:00:50.384024926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rxm8x,Uid:fad185a2-169d-4ef1-9d53-d8b47e3c30b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\"" Dec 13 02:00:50.384530 kubelet[2180]: E1213 02:00:50.384509 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.384759 kubelet[2180]: E1213 02:00:50.384738 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.386285 env[1321]: time="2024-12-13T02:00:50.386241273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:00:50.387454 env[1321]: time="2024-12-13T02:00:50.387415850Z" level=info msg="CreateContainer within sandbox \"1e716511651ed613acb180d0b3571fa08a426b4385e2ea8996a67df609199d6c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:00:50.405844 env[1321]: time="2024-12-13T02:00:50.405807127Z" level=info msg="CreateContainer within sandbox \"1e716511651ed613acb180d0b3571fa08a426b4385e2ea8996a67df609199d6c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fac620742bd9762404133253f1b050fb3f45ebe0b84346846002eebc6812d4c2\"" Dec 13 02:00:50.406445 env[1321]: time="2024-12-13T02:00:50.406406421Z" level=info msg="StartContainer for \"fac620742bd9762404133253f1b050fb3f45ebe0b84346846002eebc6812d4c2\"" Dec 13 02:00:50.454054 env[1321]: time="2024-12-13T02:00:50.454007788Z" level=info msg="StartContainer for \"fac620742bd9762404133253f1b050fb3f45ebe0b84346846002eebc6812d4c2\" returns successfully" Dec 13 02:00:50.562789 kubelet[2180]: E1213 02:00:50.562623 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.563315 env[1321]: time="2024-12-13T02:00:50.563266810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-td5p4,Uid:f4b1a1d3-d835-498a-8cc7-e5511e294ad1,Namespace:kube-system,Attempt:0,}" Dec 13 02:00:50.579786 env[1321]: time="2024-12-13T02:00:50.579678030Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:00:50.579786 env[1321]: time="2024-12-13T02:00:50.579758778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:00:50.579786 env[1321]: time="2024-12-13T02:00:50.579780753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:00:50.580179 env[1321]: time="2024-12-13T02:00:50.580140369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b pid=2434 runtime=io.containerd.runc.v2 Dec 13 02:00:50.640176 env[1321]: time="2024-12-13T02:00:50.639265280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-td5p4,Uid:f4b1a1d3-d835-498a-8cc7-e5511e294ad1,Namespace:kube-system,Attempt:0,} returns sandbox id \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\"" Dec 13 02:00:50.640347 kubelet[2180]: E1213 02:00:50.639930 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:50.859970 kubelet[2180]: E1213 02:00:50.859853 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:58.217126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947930569.mount: Deactivated successfully. Dec 13 02:01:02.486148 env[1321]: time="2024-12-13T02:01:02.486090065Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:02.488264 env[1321]: time="2024-12-13T02:01:02.488226208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:02.490117 env[1321]: time="2024-12-13T02:01:02.490055018Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:02.490817 env[1321]: time="2024-12-13T02:01:02.490786150Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:01:02.494022 env[1321]: time="2024-12-13T02:01:02.493993036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:01:02.495061 env[1321]: time="2024-12-13T02:01:02.495029078Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:01:02.509494 env[1321]: time="2024-12-13T02:01:02.509445900Z" level=info msg="CreateContainer within sandbox 
\"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\"" Dec 13 02:01:02.509995 env[1321]: time="2024-12-13T02:01:02.509959988Z" level=info msg="StartContainer for \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\"" Dec 13 02:01:02.555677 env[1321]: time="2024-12-13T02:01:02.555618851Z" level=info msg="StartContainer for \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\" returns successfully" Dec 13 02:01:03.089664 kubelet[2180]: E1213 02:01:03.089616 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:03.244372 kubelet[2180]: I1213 02:01:03.244325 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-98cfh" podStartSLOduration=14.244286335 podStartE2EDuration="14.244286335s" podCreationTimestamp="2024-12-13 02:00:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:00:50.874576505 +0000 UTC m=+14.158246393" watchObservedRunningTime="2024-12-13 02:01:03.244286335 +0000 UTC m=+26.527956233" Dec 13 02:01:03.429585 env[1321]: time="2024-12-13T02:01:03.429517159Z" level=info msg="shim disconnected" id=51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033 Dec 13 02:01:03.429585 env[1321]: time="2024-12-13T02:01:03.429583301Z" level=warning msg="cleaning up after shim disconnected" id=51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033 namespace=k8s.io Dec 13 02:01:03.429585 env[1321]: time="2024-12-13T02:01:03.429606748Z" level=info msg="cleaning up dead shim" Dec 13 02:01:03.437693 env[1321]: time="2024-12-13T02:01:03.437638384Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2608 runtime=io.containerd.runc.v2\n" Dec 13 02:01:03.505897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033-rootfs.mount: Deactivated successfully. Dec 13 02:01:04.093818 kubelet[2180]: E1213 02:01:04.093763 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:04.096311 env[1321]: time="2024-12-13T02:01:04.096244711Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:01:04.534088 env[1321]: time="2024-12-13T02:01:04.533993337Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\"" Dec 13 02:01:04.535290 env[1321]: time="2024-12-13T02:01:04.535240432Z" level=info msg="StartContainer for \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\"" Dec 13 02:01:04.569213 systemd[1]: run-containerd-runc-k8s.io-10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd-runc.0cGEnV.mount: Deactivated successfully. 
Dec 13 02:01:04.600070 env[1321]: time="2024-12-13T02:01:04.600018750Z" level=info msg="StartContainer for \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\" returns successfully" Dec 13 02:01:04.612156 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:01:04.612592 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:01:04.612851 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:01:04.615290 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:01:04.617918 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:01:04.624904 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:01:04.649079 env[1321]: time="2024-12-13T02:01:04.649016940Z" level=info msg="shim disconnected" id=10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd Dec 13 02:01:04.649079 env[1321]: time="2024-12-13T02:01:04.649066929Z" level=warning msg="cleaning up after shim disconnected" id=10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd namespace=k8s.io Dec 13 02:01:04.649079 env[1321]: time="2024-12-13T02:01:04.649077009Z" level=info msg="cleaning up dead shim" Dec 13 02:01:04.656531 env[1321]: time="2024-12-13T02:01:04.656462739Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2671 runtime=io.containerd.runc.v2\n" Dec 13 02:01:05.096379 kubelet[2180]: E1213 02:01:05.096345 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:05.098056 env[1321]: time="2024-12-13T02:01:05.098011873Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:01:05.113487 env[1321]: time="2024-12-13T02:01:05.113431290Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\"" Dec 13 02:01:05.113938 env[1321]: time="2024-12-13T02:01:05.113906314Z" level=info msg="StartContainer for \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\"" Dec 13 02:01:05.144265 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:58512.service. 
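The mount-bpf-fs init container created just above exists to mount the BPF filesystem the cilium-agent needs. A hedged sketch of the core operation, assuming Linux, root privileges, and the golang.org/x/sys/unix package; cilium's real init step additionally checks whether /sys/fs/bpf is already mounted before attempting this:

// Mount the bpf filesystem, roughly what a mount-bpf-fs init step does.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
		fmt.Println("mount failed (already mounted, or not root?):", err)
		return
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}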
Dec 13 02:01:05.161699 env[1321]: time="2024-12-13T02:01:05.161644848Z" level=info msg="StartContainer for \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\" returns successfully" Dec 13 02:01:05.180198 env[1321]: time="2024-12-13T02:01:05.180152949Z" level=info msg="shim disconnected" id=a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85 Dec 13 02:01:05.180399 env[1321]: time="2024-12-13T02:01:05.180375522Z" level=warning msg="cleaning up after shim disconnected" id=a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85 namespace=k8s.io Dec 13 02:01:05.180399 env[1321]: time="2024-12-13T02:01:05.180395402Z" level=info msg="cleaning up dead shim" Dec 13 02:01:05.183052 sshd[2706]: Accepted publickey for core from 10.0.0.1 port 58512 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:05.184274 sshd[2706]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:05.186419 env[1321]: time="2024-12-13T02:01:05.186389547Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2728 runtime=io.containerd.runc.v2\n" Dec 13 02:01:05.188116 systemd-logind[1301]: New session 6 of user core. Dec 13 02:01:05.188883 systemd[1]: Started session-6.scope. Dec 13 02:01:05.293573 sshd[2706]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:05.295911 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:58512.service: Deactivated successfully. Dec 13 02:01:05.296923 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:01:05.297376 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:01:05.298083 systemd-logind[1301]: Removed session 6. Dec 13 02:01:05.547734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd-rootfs.mount: Deactivated successfully. Dec 13 02:01:06.099137 kubelet[2180]: E1213 02:01:06.099107 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:06.101308 env[1321]: time="2024-12-13T02:01:06.101243851Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:01:06.117452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1473329791.mount: Deactivated successfully. 
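The dns.go:153 "Nameserver limits exceeded" error that recurs through this log fires because resolvers (and hence the kubelet) honor at most three nameserver entries from resolv.conf; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is what survived the cut. A small sketch of that check, assuming the conventional /etc/resolv.conf path and the standard limit of three (an illustration, not kubelet source):

// Warn when resolv.conf lists more nameservers than resolvers will use.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers: %v\n", servers)
	}
}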
Dec 13 02:01:06.118286 env[1321]: time="2024-12-13T02:01:06.118234316Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\"" Dec 13 02:01:06.118744 env[1321]: time="2024-12-13T02:01:06.118687577Z" level=info msg="StartContainer for \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\"" Dec 13 02:01:06.154939 env[1321]: time="2024-12-13T02:01:06.154876756Z" level=info msg="StartContainer for \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\" returns successfully" Dec 13 02:01:06.174440 env[1321]: time="2024-12-13T02:01:06.174392438Z" level=info msg="shim disconnected" id=a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef Dec 13 02:01:06.174440 env[1321]: time="2024-12-13T02:01:06.174437918Z" level=warning msg="cleaning up after shim disconnected" id=a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef namespace=k8s.io Dec 13 02:01:06.174440 env[1321]: time="2024-12-13T02:01:06.174447187Z" level=info msg="cleaning up dead shim" Dec 13 02:01:06.181619 env[1321]: time="2024-12-13T02:01:06.181597939Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2798 runtime=io.containerd.runc.v2\n" Dec 13 02:01:06.547836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef-rootfs.mount: Deactivated successfully. Dec 13 02:01:07.102433 kubelet[2180]: E1213 02:01:07.102402 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:07.105599 env[1321]: time="2024-12-13T02:01:07.105465576Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:01:07.121481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272729932.mount: Deactivated successfully. 
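The time= fields containerd emits are RFC 3339 timestamps with nanosecond precision, so create-to-start latencies fall straight out of two time.Parse calls. Using the two clean-cilium-state timestamps above (values copied verbatim from the log):

// Compute the gap between CreateContainer and StartContainer above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, err := time.Parse(time.RFC3339Nano, "2024-12-13T02:01:06.118234316Z")
	if err != nil {
		panic(err)
	}
	started, err := time.Parse(time.RFC3339Nano, "2024-12-13T02:01:06.154876756Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(started.Sub(created)) // 36.64244ms
}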
Dec 13 02:01:07.125841 env[1321]: time="2024-12-13T02:01:07.125791747Z" level=info msg="CreateContainer within sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\""
Dec 13 02:01:07.126407 env[1321]: time="2024-12-13T02:01:07.126353290Z" level=info msg="StartContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\""
Dec 13 02:01:07.170010 env[1321]: time="2024-12-13T02:01:07.169937836Z" level=info msg="StartContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" returns successfully"
Dec 13 02:01:07.278884 kubelet[2180]: I1213 02:01:07.278845 2180 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:01:07.298213 kubelet[2180]: I1213 02:01:07.298159 2180 topology_manager.go:215] "Topology Admit Handler" podUID="5a777b13-48e4-4a7f-8cae-a908c875f358" podNamespace="kube-system" podName="coredns-76f75df574-kdlcl"
Dec 13 02:01:07.300503 kubelet[2180]: I1213 02:01:07.300455 2180 topology_manager.go:215] "Topology Admit Handler" podUID="224a8c62-eaa0-4fd0-87ae-cb6f78de5015" podNamespace="kube-system" podName="coredns-76f75df574-fjgm2"
Dec 13 02:01:07.341806 kubelet[2180]: I1213 02:01:07.341762 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/224a8c62-eaa0-4fd0-87ae-cb6f78de5015-config-volume\") pod \"coredns-76f75df574-fjgm2\" (UID: \"224a8c62-eaa0-4fd0-87ae-cb6f78de5015\") " pod="kube-system/coredns-76f75df574-fjgm2"
Dec 13 02:01:07.341806 kubelet[2180]: I1213 02:01:07.341817 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a777b13-48e4-4a7f-8cae-a908c875f358-config-volume\") pod \"coredns-76f75df574-kdlcl\" (UID: \"5a777b13-48e4-4a7f-8cae-a908c875f358\") " pod="kube-system/coredns-76f75df574-kdlcl"
Dec 13 02:01:07.342015 kubelet[2180]: I1213 02:01:07.341842 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td24q\" (UniqueName: \"kubernetes.io/projected/224a8c62-eaa0-4fd0-87ae-cb6f78de5015-kube-api-access-td24q\") pod \"coredns-76f75df574-fjgm2\" (UID: \"224a8c62-eaa0-4fd0-87ae-cb6f78de5015\") " pod="kube-system/coredns-76f75df574-fjgm2"
Dec 13 02:01:07.342015 kubelet[2180]: I1213 02:01:07.341880 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5x7d\" (UniqueName: \"kubernetes.io/projected/5a777b13-48e4-4a7f-8cae-a908c875f358-kube-api-access-j5x7d\") pod \"coredns-76f75df574-kdlcl\" (UID: \"5a777b13-48e4-4a7f-8cae-a908c875f358\") " pod="kube-system/coredns-76f75df574-kdlcl"
Dec 13 02:01:07.606800 kubelet[2180]: E1213 02:01:07.606681 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:07.607021 kubelet[2180]: E1213 02:01:07.606681 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:07.607619 env[1321]: time="2024-12-13T02:01:07.607569481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kdlcl,Uid:5a777b13-48e4-4a7f-8cae-a908c875f358,Namespace:kube-system,Attempt:0,}"
Dec 13 02:01:07.607876 env[1321]: time="2024-12-13T02:01:07.607833465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fjgm2,Uid:224a8c62-eaa0-4fd0-87ae-cb6f78de5015,Namespace:kube-system,Attempt:0,}"
Dec 13 02:01:08.110849 kubelet[2180]: E1213 02:01:08.110816 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:08.126877 kubelet[2180]: I1213 02:01:08.126835 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rxm8x" podStartSLOduration=7.021385265 podStartE2EDuration="19.126789943s" podCreationTimestamp="2024-12-13 02:00:49 +0000 UTC" firstStartedPulling="2024-12-13 02:00:50.385819057 +0000 UTC m=+13.669488955" lastFinishedPulling="2024-12-13 02:01:02.491223735 +0000 UTC m=+25.774893633" observedRunningTime="2024-12-13 02:01:08.126650075 +0000 UTC m=+31.410319994" watchObservedRunningTime="2024-12-13 02:01:08.126789943 +0000 UTC m=+31.410459841"
Dec 13 02:01:09.111348 kubelet[2180]: E1213 02:01:09.111298 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:09.730656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890691008.mount: Deactivated successfully.
Dec 13 02:01:10.114411 kubelet[2180]: E1213 02:01:10.114281 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:10.297731 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:53338.service.
Dec 13 02:01:10.336819 sshd[2969]: Accepted publickey for core from 10.0.0.1 port 53338 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:10.338385 sshd[2969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:10.342490 systemd-logind[1301]: New session 7 of user core.
Dec 13 02:01:10.343266 systemd[1]: Started session-7.scope.
Dec 13 02:01:10.446163 sshd[2969]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:10.448742 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:53338.service: Deactivated successfully.
Dec 13 02:01:10.449616 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:01:10.450473 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:01:10.451351 systemd-logind[1301]: Removed session 7.
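[Annotation] The pod_startup_latency_tracker entry above is internally consistent once the image-pull window is subtracted: podStartSLOduration is the end-to-end startup duration minus the time spent pulling images. For cilium-rxm8x:

    pull window  = 02:01:02.491223735 - 02:00:50.385819057 = 12.105404678 s
    SLO duration = 19.126789943 s - 12.105404678 s = 7.021385265 s

which matches podStartSLOduration=7.021385265 exactly. The cilium-operator entry later in the log obeys the same arithmetic. [End annotation]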
Dec 13 02:01:11.869033 kubelet[2180]: E1213 02:01:11.868987 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:12.247218 env[1321]: time="2024-12-13T02:01:12.247167730Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:01:12.248875 env[1321]: time="2024-12-13T02:01:12.248837400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:01:12.250315 env[1321]: time="2024-12-13T02:01:12.250274952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:01:12.250856 env[1321]: time="2024-12-13T02:01:12.250824766Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:01:12.252358 env[1321]: time="2024-12-13T02:01:12.252336255Z" level=info msg="CreateContainer within sandbox \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:01:12.265007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331060928.mount: Deactivated successfully.
Dec 13 02:01:12.265866 env[1321]: time="2024-12-13T02:01:12.265818793Z" level=info msg="CreateContainer within sandbox \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\""
Dec 13 02:01:12.266320 env[1321]: time="2024-12-13T02:01:12.266260724Z" level=info msg="StartContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\""
Dec 13 02:01:12.305005 env[1321]: time="2024-12-13T02:01:12.304946497Z" level=info msg="StartContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" returns successfully"
Dec 13 02:01:13.120235 kubelet[2180]: E1213 02:01:13.120198 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:14.121652 kubelet[2180]: E1213 02:01:14.121613 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:15.450154 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:53344.service.
Dec 13 02:01:15.490930 sshd[3023]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:15.492297 sshd[3023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:15.496161 systemd-logind[1301]: New session 8 of user core.
Dec 13 02:01:15.497000 systemd[1]: Started session-8.scope.
Dec 13 02:01:15.602322 sshd[3023]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:15.604404 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:53344.service: Deactivated successfully.
Dec 13 02:01:15.605625 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:01:15.605662 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:01:15.606624 systemd-logind[1301]: Removed session 8.
Dec 13 02:01:16.202293 systemd-networkd[1090]: cilium_host: Link UP
Dec 13 02:01:16.202483 systemd-networkd[1090]: cilium_net: Link UP
Dec 13 02:01:16.202488 systemd-networkd[1090]: cilium_net: Gained carrier
Dec 13 02:01:16.202703 systemd-networkd[1090]: cilium_host: Gained carrier
Dec 13 02:01:16.216172 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:01:16.206892 systemd-networkd[1090]: cilium_host: Gained IPv6LL
Dec 13 02:01:16.281585 systemd-networkd[1090]: cilium_vxlan: Link UP
Dec 13 02:01:16.281594 systemd-networkd[1090]: cilium_vxlan: Gained carrier
Dec 13 02:01:16.475809 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:01:16.479849 systemd-networkd[1090]: cilium_net: Gained IPv6LL
Dec 13 02:01:17.019786 systemd-networkd[1090]: lxc_health: Link UP
Dec 13 02:01:17.030845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:01:17.030691 systemd-networkd[1090]: lxc_health: Gained carrier
Dec 13 02:01:17.191122 systemd-networkd[1090]: lxceb7e13f611d1: Link UP
Dec 13 02:01:17.213771 kernel: eth0: renamed from tmp7ffae
Dec 13 02:01:17.214551 systemd-networkd[1090]: lxca5184c78711d: Link UP
Dec 13 02:01:17.223190 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb7e13f611d1: link becomes ready
Dec 13 02:01:17.222180 systemd-networkd[1090]: lxceb7e13f611d1: Gained carrier
Dec 13 02:01:17.227738 kernel: eth0: renamed from tmp82040
Dec 13 02:01:17.236639 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 02:01:17.236695 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca5184c78711d: link becomes ready
Dec 13 02:01:17.236851 systemd-networkd[1090]: lxca5184c78711d: Gained carrier
Dec 13 02:01:17.709832 systemd-networkd[1090]: cilium_vxlan: Gained IPv6LL
Dec 13 02:01:18.271908 systemd-networkd[1090]: lxc_health: Gained IPv6LL
Dec 13 02:01:18.326313 kubelet[2180]: E1213 02:01:18.326280 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:18.402823 kubelet[2180]: I1213 02:01:18.402782 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-td5p4" podStartSLOduration=6.792419097 podStartE2EDuration="28.402745675s" podCreationTimestamp="2024-12-13 02:00:50 +0000 UTC" firstStartedPulling="2024-12-13 02:00:50.640707421 +0000 UTC m=+13.924377319" lastFinishedPulling="2024-12-13 02:01:12.251033999 +0000 UTC m=+35.534703897" observedRunningTime="2024-12-13 02:01:13.215557849 +0000 UTC m=+36.499227748" watchObservedRunningTime="2024-12-13 02:01:18.402745675 +0000 UTC m=+41.686415573"
Dec 13 02:01:18.463885 systemd-networkd[1090]: lxceb7e13f611d1: Gained IPv6LL
Dec 13 02:01:19.130499 kubelet[2180]: E1213 02:01:19.130458 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:19.167901 systemd-networkd[1090]: lxca5184c78711d: Gained IPv6LL
Dec 13 02:01:20.607071 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:60074.service.
Dec 13 02:01:21.064547 env[1321]: time="2024-12-13T02:01:21.064441598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:01:21.064547 env[1321]: time="2024-12-13T02:01:21.064518558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:01:21.065431 env[1321]: time="2024-12-13T02:01:21.065378050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:01:21.065754 env[1321]: time="2024-12-13T02:01:21.065692314Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82040e326ff890649dfcdd81637c35193a979aeb2bdc6a4524d454a3936f0d39 pid=3435 runtime=io.containerd.runc.v2
Dec 13 02:01:21.069464 env[1321]: time="2024-12-13T02:01:21.067790128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:01:21.069464 env[1321]: time="2024-12-13T02:01:21.067858071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:01:21.069464 env[1321]: time="2024-12-13T02:01:21.067868231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:01:21.071571 env[1321]: time="2024-12-13T02:01:21.071216399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ffae219e60d040971d403632e82e13e080d7fe88c6030d063d967c170501421 pid=3451 runtime=io.containerd.runc.v2
Dec 13 02:01:21.092563 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 02:01:21.104540 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 02:01:21.125817 env[1321]: time="2024-12-13T02:01:21.124848244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kdlcl,Uid:5a777b13-48e4-4a7f-8cae-a908c875f358,Namespace:kube-system,Attempt:0,} returns sandbox id \"82040e326ff890649dfcdd81637c35193a979aeb2bdc6a4524d454a3936f0d39\""
Dec 13 02:01:21.126008 kubelet[2180]: E1213 02:01:21.125755 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:21.127357 env[1321]: time="2024-12-13T02:01:21.127302885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fjgm2,Uid:224a8c62-eaa0-4fd0-87ae-cb6f78de5015,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffae219e60d040971d403632e82e13e080d7fe88c6030d063d967c170501421\""
Dec 13 02:01:21.129470 kubelet[2180]: E1213 02:01:21.129314 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:21.130960 env[1321]: time="2024-12-13T02:01:21.130917926Z" level=info msg="CreateContainer within sandbox \"82040e326ff890649dfcdd81637c35193a979aeb2bdc6a4524d454a3936f0d39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:01:21.133840 env[1321]: time="2024-12-13T02:01:21.133792660Z" level=info msg="CreateContainer within sandbox \"7ffae219e60d040971d403632e82e13e080d7fe88c6030d063d967c170501421\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:01:21.153653 env[1321]: time="2024-12-13T02:01:21.153588029Z" level=info msg="CreateContainer within sandbox \"7ffae219e60d040971d403632e82e13e080d7fe88c6030d063d967c170501421\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe1c9846fa1a2d19f08ad8ffbbe98850e035e848b25a966b67efa058196559b3\""
Dec 13 02:01:21.154263 env[1321]: time="2024-12-13T02:01:21.154226358Z" level=info msg="StartContainer for \"fe1c9846fa1a2d19f08ad8ffbbe98850e035e848b25a966b67efa058196559b3\""
Dec 13 02:01:21.157518 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 60074 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:21.159167 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:21.161090 env[1321]: time="2024-12-13T02:01:21.161049724Z" level=info msg="CreateContainer within sandbox \"82040e326ff890649dfcdd81637c35193a979aeb2bdc6a4524d454a3936f0d39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a10cf657973fa9e706520c5e203688e3042c0b9bd4916f9ea5bf3859b7757495\""
Dec 13 02:01:21.162093 env[1321]: time="2024-12-13T02:01:21.161604650Z" level=info msg="StartContainer for \"a10cf657973fa9e706520c5e203688e3042c0b9bd4916f9ea5bf3859b7757495\""
Dec 13 02:01:21.164583 systemd[1]: Started session-9.scope.
Dec 13 02:01:21.165631 systemd-logind[1301]: New session 9 of user core.
Dec 13 02:01:21.225294 env[1321]: time="2024-12-13T02:01:21.224176721Z" level=info msg="StartContainer for \"a10cf657973fa9e706520c5e203688e3042c0b9bd4916f9ea5bf3859b7757495\" returns successfully"
Dec 13 02:01:21.226105 env[1321]: time="2024-12-13T02:01:21.226074092Z" level=info msg="StartContainer for \"fe1c9846fa1a2d19f08ad8ffbbe98850e035e848b25a966b67efa058196559b3\" returns successfully"
Dec 13 02:01:21.314301 sshd[3425]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:21.316739 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:60074.service: Deactivated successfully.
Dec 13 02:01:21.317861 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:01:21.317936 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:01:21.318748 systemd-logind[1301]: Removed session 9.
Dec 13 02:01:22.140706 kubelet[2180]: E1213 02:01:22.140673 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:22.143384 kubelet[2180]: E1213 02:01:22.143325 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:22.151073 kubelet[2180]: I1213 02:01:22.151023 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fjgm2" podStartSLOduration=32.150981866 podStartE2EDuration="32.150981866s" podCreationTimestamp="2024-12-13 02:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:01:22.150514762 +0000 UTC m=+45.434184660" watchObservedRunningTime="2024-12-13 02:01:22.150981866 +0000 UTC m=+45.434651764"
Dec 13 02:01:22.166399 kubelet[2180]: I1213 02:01:22.166341 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kdlcl" podStartSLOduration=32.166295571 podStartE2EDuration="32.166295571s" podCreationTimestamp="2024-12-13 02:00:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:01:22.157845412 +0000 UTC m=+45.441515310" watchObservedRunningTime="2024-12-13 02:01:22.166295571 +0000 UTC m=+45.449965469"
Dec 13 02:01:23.144879 kubelet[2180]: E1213 02:01:23.144826 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:23.144879 kubelet[2180]: E1213 02:01:23.144889 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:24.146977 kubelet[2180]: E1213 02:01:24.146939 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:26.317565 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:49930.service.
Dec 13 02:01:26.354675 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:26.355812 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:26.359771 systemd-logind[1301]: New session 10 of user core.
Dec 13 02:01:26.360794 systemd[1]: Started session-10.scope.
Dec 13 02:01:26.467178 sshd[3603]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:26.470407 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:49934.service.
Dec 13 02:01:26.471315 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:49930.service: Deactivated successfully.
Dec 13 02:01:26.472491 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:01:26.472537 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:01:26.474027 systemd-logind[1301]: Removed session 10.
Dec 13 02:01:26.508036 sshd[3616]: Accepted publickey for core from 10.0.0.1 port 49934 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:26.509468 sshd[3616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:26.513425 systemd-logind[1301]: New session 11 of user core.
Dec 13 02:01:26.514405 systemd[1]: Started session-11.scope.
Dec 13 02:01:26.664757 sshd[3616]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:26.669804 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:49948.service.
Dec 13 02:01:26.670662 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:49934.service: Deactivated successfully.
Dec 13 02:01:26.671696 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:01:26.671752 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:01:26.675963 systemd-logind[1301]: Removed session 11.
Dec 13 02:01:26.717045 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:26.718396 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:26.722184 systemd-logind[1301]: New session 12 of user core.
Dec 13 02:01:26.722955 systemd[1]: Started session-12.scope.
Dec 13 02:01:26.853275 sshd[3628]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:26.856274 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:49948.service: Deactivated successfully.
Dec 13 02:01:26.857448 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:01:26.857796 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:01:26.858794 systemd-logind[1301]: Removed session 12.
Dec 13 02:01:31.857011 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:49958.service.
Dec 13 02:01:31.896830 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 49958 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:31.898218 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:31.901953 systemd-logind[1301]: New session 13 of user core.
Dec 13 02:01:31.902660 systemd[1]: Started session-13.scope.
Dec 13 02:01:32.006284 sshd[3645]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:32.008616 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:49958.service: Deactivated successfully.
Dec 13 02:01:32.009725 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:01:32.009804 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:01:32.010531 systemd-logind[1301]: Removed session 13.
Dec 13 02:01:37.010295 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:49220.service.
Dec 13 02:01:37.047802 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 49220 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:37.049260 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:37.052906 systemd-logind[1301]: New session 14 of user core.
Dec 13 02:01:37.053849 systemd[1]: Started session-14.scope.
Dec 13 02:01:37.165042 sshd[3661]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:37.168445 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:49226.service.
Dec 13 02:01:37.169871 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:49220.service: Deactivated successfully.
Dec 13 02:01:37.171560 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:01:37.171583 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:01:37.172670 systemd-logind[1301]: Removed session 14.
Dec 13 02:01:37.208551 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 49226 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:37.209884 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:37.213551 systemd-logind[1301]: New session 15 of user core.
Dec 13 02:01:37.214658 systemd[1]: Started session-15.scope.
Dec 13 02:01:37.407882 sshd[3674]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:37.411318 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:49234.service.
Dec 13 02:01:37.415424 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:49226.service: Deactivated successfully.
Dec 13 02:01:37.416913 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:01:37.417473 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:01:37.418329 systemd-logind[1301]: Removed session 15.
Dec 13 02:01:37.449685 sshd[3685]: Accepted publickey for core from 10.0.0.1 port 49234 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:37.451020 sshd[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:37.454781 systemd-logind[1301]: New session 16 of user core.
Dec 13 02:01:37.455579 systemd[1]: Started session-16.scope.
Dec 13 02:01:39.054508 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:49238.service.
Dec 13 02:01:39.057115 sshd[3685]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:39.060556 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:01:39.061353 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:49234.service: Deactivated successfully.
Dec 13 02:01:39.061986 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:01:39.066565 systemd-logind[1301]: Removed session 16.
Dec 13 02:01:39.100647 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 49238 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:39.101980 sshd[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:39.105647 systemd-logind[1301]: New session 17 of user core.
Dec 13 02:01:39.106407 systemd[1]: Started session-17.scope.
Dec 13 02:01:39.326826 sshd[3707]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:39.329213 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:49250.service.
Dec 13 02:01:39.330174 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:49238.service: Deactivated successfully.
Dec 13 02:01:39.331491 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:01:39.331914 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:01:39.333915 systemd-logind[1301]: Removed session 17.
Dec 13 02:01:39.366455 sshd[3721]: Accepted publickey for core from 10.0.0.1 port 49250 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:39.367807 sshd[3721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:39.371591 systemd-logind[1301]: New session 18 of user core.
Dec 13 02:01:39.372422 systemd[1]: Started session-18.scope.
Dec 13 02:01:39.476669 sshd[3721]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:39.478857 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:49250.service: Deactivated successfully.
Dec 13 02:01:39.479902 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:01:39.479976 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:01:39.480739 systemd-logind[1301]: Removed session 18.
Dec 13 02:01:44.479553 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:49262.service.
Dec 13 02:01:44.516636 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 49262 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:44.518184 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:44.521598 systemd-logind[1301]: New session 19 of user core.
Dec 13 02:01:44.522431 systemd[1]: Started session-19.scope.
Dec 13 02:01:44.626056 sshd[3737]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:44.628350 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:49262.service: Deactivated successfully.
Dec 13 02:01:44.629290 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:01:44.629339 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:01:44.630188 systemd-logind[1301]: Removed session 19.
Dec 13 02:01:49.628971 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:47308.service.
Dec 13 02:01:49.665030 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 47308 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:49.666081 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:49.669312 systemd-logind[1301]: New session 20 of user core.
Dec 13 02:01:49.670248 systemd[1]: Started session-20.scope.
Dec 13 02:01:49.770149 sshd[3754]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:49.772353 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:47308.service: Deactivated successfully.
Dec 13 02:01:49.773552 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:01:49.773607 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:01:49.774575 systemd-logind[1301]: Removed session 20.
Dec 13 02:01:54.773646 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:47318.service.
Dec 13 02:01:54.810548 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 47318 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:54.811796 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:54.815326 systemd-logind[1301]: New session 21 of user core.
Dec 13 02:01:54.816062 systemd[1]: Started session-21.scope.
Dec 13 02:01:54.915525 sshd[3770]: pam_unix(sshd:session): session closed for user core
Dec 13 02:01:54.918022 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:47318.service: Deactivated successfully.
Dec 13 02:01:54.918962 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:01:54.919013 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:01:54.919747 systemd-logind[1301]: Removed session 21.
Dec 13 02:01:55.810556 kubelet[2180]: E1213 02:01:55.810485 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:01:59.919457 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:58410.service.
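[Annotation] Every SSH connection in this stretch follows the same systemd pattern: a transient per-connection sshd@N-<local>:22-<peer>:<port>.service unit starts, pam_unix opens the session, logind registers session-N.scope, and on logout both units are deactivated. A throwaway Go sketch, assuming a saved copy of this journal is fed on stdin, that pairs the pam_unix open/close lines by sshd pid:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches lines like: sshd[3023]: pam_unix(sshd:session): session opened ...
        re := regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)
        open := map[string]bool{}

        sc := bufio.NewScanner(os.Stdin)
        // Some journal lines are long; give the scanner room.
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            pid, state := m[1], m[2]
            if state == "opened" {
                open[pid] = true
            } else if open[pid] {
                fmt.Printf("sshd[%s]: session opened and closed\n", pid)
                delete(open, pid)
            }
        }
    }

Any pid left in the map at EOF would be a session that was opened but never closed within the captured window. [End annotation]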
Dec 13 02:01:59.959585 sshd[3784]: Accepted publickey for core from 10.0.0.1 port 58410 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:01:59.960926 sshd[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:01:59.964182 systemd-logind[1301]: New session 22 of user core.
Dec 13 02:01:59.965196 systemd[1]: Started session-22.scope.
Dec 13 02:02:00.072376 sshd[3784]: pam_unix(sshd:session): session closed for user core
Dec 13 02:02:00.075568 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:58420.service.
Dec 13 02:02:00.076357 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:58410.service: Deactivated successfully.
Dec 13 02:02:00.077525 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:02:00.077639 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:02:00.079197 systemd-logind[1301]: Removed session 22.
Dec 13 02:02:00.113929 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 58420 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M
Dec 13 02:02:00.115199 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:02:00.119301 systemd-logind[1301]: New session 23 of user core.
Dec 13 02:02:00.120400 systemd[1]: Started session-23.scope.
Dec 13 02:02:01.589642 env[1321]: time="2024-12-13T02:02:01.589588828Z" level=info msg="StopContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" with timeout 30 (s)"
Dec 13 02:02:01.590173 env[1321]: time="2024-12-13T02:02:01.590048940Z" level=info msg="Stop container \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" with signal terminated"
Dec 13 02:02:01.609957 env[1321]: time="2024-12-13T02:02:01.609872576Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:02:01.615953 env[1321]: time="2024-12-13T02:02:01.615908314Z" level=info msg="StopContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" with timeout 2 (s)"
Dec 13 02:02:01.616209 env[1321]: time="2024-12-13T02:02:01.616183416Z" level=info msg="Stop container \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" with signal terminated"
Dec 13 02:02:01.624102 systemd-networkd[1090]: lxc_health: Link DOWN
Dec 13 02:02:01.624109 systemd-networkd[1090]: lxc_health: Lost carrier
Dec 13 02:02:01.624198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846-rootfs.mount: Deactivated successfully.
Dec 13 02:02:01.639968 env[1321]: time="2024-12-13T02:02:01.639918104Z" level=info msg="shim disconnected" id=9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846
Dec 13 02:02:01.639968 env[1321]: time="2024-12-13T02:02:01.639966105Z" level=warning msg="cleaning up after shim disconnected" id=9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846 namespace=k8s.io
Dec 13 02:02:01.639968 env[1321]: time="2024-12-13T02:02:01.639974200Z" level=info msg="cleaning up dead shim"
Dec 13 02:02:01.646569 env[1321]: time="2024-12-13T02:02:01.646518713Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3855 runtime=io.containerd.runc.v2\n"
Dec 13 02:02:01.649866 env[1321]: time="2024-12-13T02:02:01.649831391Z" level=info msg="StopContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" returns successfully"
Dec 13 02:02:01.650518 env[1321]: time="2024-12-13T02:02:01.650488927Z" level=info msg="StopPodSandbox for \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\""
Dec 13 02:02:01.650573 env[1321]: time="2024-12-13T02:02:01.650558949Z" level=info msg="Container to stop \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.652852 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b-shm.mount: Deactivated successfully.
Dec 13 02:02:01.680617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd-rootfs.mount: Deactivated successfully.
Dec 13 02:02:01.684476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b-rootfs.mount: Deactivated successfully.
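[Annotation] "StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the usual two-phase stop: SIGTERM first, SIGKILL only if the task is still running when the grace period lapses (the cilium-agent container above gets a 2 s grace period, the operator 30 s). A minimal sketch of that escalation with the containerd Go client, reusing the operator container id from the log; the socket path is again containerd's default and this helper is illustrative, not the CRI plugin's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        container, err := client.LoadContainer(ctx,
            "9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846")
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }

        // Graceful phase: register the exit channel, then send SIGTERM.
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case status := <-exitCh:
            fmt.Println("exited with status", status.ExitCode())
        case <-time.After(30 * time.Second):
            // Forceful phase: the task ignored SIGTERM within the grace period.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                log.Fatal(err)
            }
        }
    }

[End annotation]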
Dec 13 02:02:01.693145 env[1321]: time="2024-12-13T02:02:01.693096860Z" level=info msg="shim disconnected" id=850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b
Dec 13 02:02:01.693145 env[1321]: time="2024-12-13T02:02:01.693146635Z" level=warning msg="cleaning up after shim disconnected" id=850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b namespace=k8s.io
Dec 13 02:02:01.693322 env[1321]: time="2024-12-13T02:02:01.693156274Z" level=info msg="cleaning up dead shim"
Dec 13 02:02:01.693322 env[1321]: time="2024-12-13T02:02:01.693135774Z" level=info msg="shim disconnected" id=ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd
Dec 13 02:02:01.693322 env[1321]: time="2024-12-13T02:02:01.693180870Z" level=warning msg="cleaning up after shim disconnected" id=ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd namespace=k8s.io
Dec 13 02:02:01.693322 env[1321]: time="2024-12-13T02:02:01.693194075Z" level=info msg="cleaning up dead shim"
Dec 13 02:02:01.701356 env[1321]: time="2024-12-13T02:02:01.701321497Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3903 runtime=io.containerd.runc.v2\n"
Dec 13 02:02:01.701695 env[1321]: time="2024-12-13T02:02:01.701654277Z" level=info msg="TearDown network for sandbox \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\" successfully"
Dec 13 02:02:01.701695 env[1321]: time="2024-12-13T02:02:01.701685467Z" level=info msg="StopPodSandbox for \"850fe8bccada561e3c43956d228fcd2be6bb564cfbc3de7c9c25a46068c9023b\" returns successfully"
Dec 13 02:02:01.702078 env[1321]: time="2024-12-13T02:02:01.702057041Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3904 runtime=io.containerd.runc.v2\n"
Dec 13 02:02:01.704104 env[1321]: time="2024-12-13T02:02:01.704052292Z" level=info msg="StopContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" returns successfully"
Dec 13 02:02:01.704337 env[1321]: time="2024-12-13T02:02:01.704301595Z" level=info msg="StopPodSandbox for \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\""
Dec 13 02:02:01.704549 env[1321]: time="2024-12-13T02:02:01.704524387Z" level=info msg="Container to stop \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.704643 env[1321]: time="2024-12-13T02:02:01.704618034Z" level=info msg="Container to stop \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.704754 env[1321]: time="2024-12-13T02:02:01.704733272Z" level=info msg="Container to stop \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.704841 env[1321]: time="2024-12-13T02:02:01.704819666Z" level=info msg="Container to stop \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.704931 env[1321]: time="2024-12-13T02:02:01.704902383Z" level=info msg="Container to stop \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:02:01.732550 env[1321]: time="2024-12-13T02:02:01.732489071Z" level=info msg="shim disconnected" id=5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493
Dec 13 02:02:01.733048 env[1321]: time="2024-12-13T02:02:01.733002504Z" level=warning msg="cleaning up after shim disconnected" id=5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493 namespace=k8s.io
Dec 13 02:02:01.733048 env[1321]: time="2024-12-13T02:02:01.733027842Z" level=info msg="cleaning up dead shim"
Dec 13 02:02:01.740698 env[1321]: time="2024-12-13T02:02:01.740637443Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3947 runtime=io.containerd.runc.v2\n"
Dec 13 02:02:01.741054 env[1321]: time="2024-12-13T02:02:01.741016070Z" level=info msg="TearDown network for sandbox \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" successfully"
Dec 13 02:02:01.741054 env[1321]: time="2024-12-13T02:02:01.741041409Z" level=info msg="StopPodSandbox for \"5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493\" returns successfully"
Dec 13 02:02:01.757946 kubelet[2180]: I1213 02:02:01.757079 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g7f4\" (UniqueName: \"kubernetes.io/projected/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-kube-api-access-2g7f4\") pod \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\" (UID: \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\") "
Dec 13 02:02:01.757946 kubelet[2180]: I1213 02:02:01.757129 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-cilium-config-path\") pod \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\" (UID: \"f4b1a1d3-d835-498a-8cc7-e5511e294ad1\") "
Dec 13 02:02:01.759671 kubelet[2180]: I1213 02:02:01.759650 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4b1a1d3-d835-498a-8cc7-e5511e294ad1" (UID: "f4b1a1d3-d835-498a-8cc7-e5511e294ad1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:02:01.760200 kubelet[2180]: I1213 02:02:01.760156 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-kube-api-access-2g7f4" (OuterVolumeSpecName: "kube-api-access-2g7f4") pod "f4b1a1d3-d835-498a-8cc7-e5511e294ad1" (UID: "f4b1a1d3-d835-498a-8cc7-e5511e294ad1"). InnerVolumeSpecName "kube-api-access-2g7f4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857518 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-etc-cni-netd\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857569 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-cgroup\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857587 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-bpf-maps\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857608 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hubble-tls\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857625 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cni-path\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.857690 kubelet[2180]: I1213 02:02:01.857640 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-lib-modules\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857629 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857663 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-config-path\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857794 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-net\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857830 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-run\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857862 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-xtables-lock\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858082 kubelet[2180]: I1213 02:02:01.857901 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hostproc\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.857934 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzh44\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.857959 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-kernel\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.857988 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-clustermesh-secrets\") pod \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\" (UID: \"fad185a2-169d-4ef1-9d53-d8b47e3c30b8\") "
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.858033 2180 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.858052 2180 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2g7f4\" (UniqueName: \"kubernetes.io/projected/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-kube-api-access-2g7f4\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.858347 kubelet[2180]: I1213 02:02:01.858072 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b1a1d3-d835-498a-8cc7-e5511e294ad1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.859037 kubelet[2180]: I1213 02:02:01.857644 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859037 kubelet[2180]: I1213 02:02:01.858670 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859037 kubelet[2180]: I1213 02:02:01.858746 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859037 kubelet[2180]: I1213 02:02:01.858773 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859564 kubelet[2180]: I1213 02:02:01.859543 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:02:01.859664 kubelet[2180]: I1213 02:02:01.859541 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hostproc" (OuterVolumeSpecName: "hostproc") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859776 kubelet[2180]: I1213 02:02:01.859544 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cni-path" (OuterVolumeSpecName: "cni-path") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859776 kubelet[2180]: I1213 02:02:01.859571 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859776 kubelet[2180]: I1213 02:02:01.859595 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.859776 kubelet[2180]: I1213 02:02:01.859621 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:02:01.861998 kubelet[2180]: I1213 02:02:01.861969 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:02:01.862462 kubelet[2180]: I1213 02:02:01.862436 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44" (OuterVolumeSpecName: "kube-api-access-xzh44") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "kube-api-access-xzh44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:02:01.862539 kubelet[2180]: I1213 02:02:01.862465 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fad185a2-169d-4ef1-9d53-d8b47e3c30b8" (UID: "fad185a2-169d-4ef1-9d53-d8b47e3c30b8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:02:01.873253 kubelet[2180]: E1213 02:02:01.873197 2180 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958861 2180 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958900 2180 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958912 2180 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xzh44\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-kube-api-access-xzh44\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958927 2180 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958935 2180 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.958923 kubelet[2180]: I1213 02:02:01.958944 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958952 2180 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958960 2180 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958967 2180 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958975 2180 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958983 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.958993 2180 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:01.959224 kubelet[2180]: I1213 02:02:01.959002 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fad185a2-169d-4ef1-9d53-d8b47e3c30b8-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 02:02:02.224820 kubelet[2180]: I1213 02:02:02.224773 2180 scope.go:117] "RemoveContainer" containerID="9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846"
Dec 13 02:02:02.230563 env[1321]: time="2024-12-13T02:02:02.230515182Z" level=info msg="RemoveContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\""
Dec 13 02:02:02.235483 env[1321]: time="2024-12-13T02:02:02.235443437Z" level=info msg="RemoveContainer for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" returns successfully"
Dec 13 02:02:02.235697 kubelet[2180]: I1213 02:02:02.235667 2180 scope.go:117] "RemoveContainer" containerID="9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846"
Dec 13 02:02:02.235987 env[1321]: time="2024-12-13T02:02:02.235923638Z" level=error msg="ContainerStatus for \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\": not found"
Dec 13 02:02:02.236160 kubelet[2180]: E1213 02:02:02.236139 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\": not found" containerID="9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846"
Dec 13 02:02:02.236242 kubelet[2180]: I1213 02:02:02.236228 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846"} err="failed to get container status \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d7d9edd12521b3e36082017ba1b6438ba317d82a6067baa069c2444e8947846\": not found"
Dec 13 02:02:02.236283 kubelet[2180]: I1213 02:02:02.236244 2180 scope.go:117] "RemoveContainer" containerID="ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd"
Dec 13 02:02:02.237779 env[1321]: time="2024-12-13T02:02:02.237732858Z" level=info msg="RemoveContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\""
Dec 13 02:02:02.241978 env[1321]: time="2024-12-13T02:02:02.241936108Z" level=info msg="RemoveContainer for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" returns successfully"
Dec 13 02:02:02.242131 kubelet[2180]: I1213 02:02:02.242115 2180 scope.go:117] "RemoveContainer" containerID="a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef"
Dec 13 02:02:02.243120 env[1321]: time="2024-12-13T02:02:02.243091870Z" level=info msg="RemoveContainer for \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\""
Dec 13 02:02:02.246420 env[1321]: time="2024-12-13T02:02:02.246392317Z" level=info msg="RemoveContainer for \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\" returns successfully"
Dec 13 02:02:02.246544 kubelet[2180]: I1213 02:02:02.246522 2180 scope.go:117] "RemoveContainer" containerID="a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85"
Dec 13 02:02:02.247738 env[1321]: time="2024-12-13T02:02:02.247523633Z" level=info msg="RemoveContainer for \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\""
Dec 13 02:02:02.251808 env[1321]: time="2024-12-13T02:02:02.251045961Z" level=info msg="RemoveContainer for \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\" returns successfully"
Dec 13 02:02:02.252326 kubelet[2180]: I1213 02:02:02.252299 2180 scope.go:117] "RemoveContainer" containerID="10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd"
Dec 13 02:02:02.253231 env[1321]: time="2024-12-13T02:02:02.253196708Z" level=info msg="RemoveContainer for \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\""
Dec 13 02:02:02.256218 env[1321]: time="2024-12-13T02:02:02.256189513Z" level=info msg="RemoveContainer for \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\" returns successfully"
Dec 13 02:02:02.256334 kubelet[2180]: I1213 02:02:02.256317 2180 scope.go:117] "RemoveContainer" containerID="51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033"
Dec 13 02:02:02.257080 env[1321]: time="2024-12-13T02:02:02.257058241Z" level=info msg="RemoveContainer for \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\""
Dec 13 02:02:02.259922 env[1321]: time="2024-12-13T02:02:02.259891422Z" level=info msg="RemoveContainer for \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\" returns successfully"
Dec 13 02:02:02.260025 kubelet[2180]: I1213 02:02:02.260009 2180 scope.go:117] "RemoveContainer" containerID="ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd"
Dec 13 02:02:02.260196 env[1321]: time="2024-12-13T02:02:02.260147909Z" level=error msg="ContainerStatus for \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\": not found"
Dec 13 02:02:02.260349 kubelet[2180]: E1213 02:02:02.260329 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\": not found" containerID="ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd"
Dec 13 02:02:02.260415 kubelet[2180]: I1213 02:02:02.260375 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd"} err="failed to get container status \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed761367e07ab0494340e2f5c104cc9dbaa35d8fde4d6dbaa3754f41d2f516fd\": not found"
Dec 13 02:02:02.260415 kubelet[2180]: I1213 02:02:02.260392 2180 scope.go:117] "RemoveContainer" containerID="a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef"
Dec 13 02:02:02.260564 env[1321]: time="2024-12-13T02:02:02.260521477Z" level=error msg="ContainerStatus for \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\": not found"
Dec 13 02:02:02.260654 kubelet[2180]: E1213 02:02:02.260642 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc
= an error occurred when try to find container \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\": not found" containerID="a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef" Dec 13 02:02:02.260727 kubelet[2180]: I1213 02:02:02.260661 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef"} err="failed to get container status \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\": rpc error: code = NotFound desc = an error occurred when try to find container \"a9640396895d4d03737f655921b927a668d8dcb4cab79ef81db33b53b75f2cef\": not found" Dec 13 02:02:02.260727 kubelet[2180]: I1213 02:02:02.260670 2180 scope.go:117] "RemoveContainer" containerID="a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85" Dec 13 02:02:02.260833 env[1321]: time="2024-12-13T02:02:02.260801428Z" level=error msg="ContainerStatus for \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\": not found" Dec 13 02:02:02.260912 kubelet[2180]: E1213 02:02:02.260898 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\": not found" containerID="a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85" Dec 13 02:02:02.260969 kubelet[2180]: I1213 02:02:02.260924 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85"} err="failed to get container status \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\": rpc error: code = NotFound desc = an error occurred when try to find container \"a159042905678d96fb72238a679a22913f69dbe20a2e7c77cc64351010af7a85\": not found" Dec 13 02:02:02.260969 kubelet[2180]: I1213 02:02:02.260934 2180 scope.go:117] "RemoveContainer" containerID="10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd" Dec 13 02:02:02.261090 env[1321]: time="2024-12-13T02:02:02.261054297Z" level=error msg="ContainerStatus for \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\": not found" Dec 13 02:02:02.261163 kubelet[2180]: E1213 02:02:02.261149 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\": not found" containerID="10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd" Dec 13 02:02:02.261199 kubelet[2180]: I1213 02:02:02.261175 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd"} err="failed to get container status \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\": rpc error: code = NotFound desc = an error occurred when try to find container \"10ecdfa6fb6012a4a32af9df0d6f3d60c0815a3a99d075004cd9e1db6a0b0ecd\": not found" Dec 13 02:02:02.261199 
kubelet[2180]: I1213 02:02:02.261186 2180 scope.go:117] "RemoveContainer" containerID="51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033" Dec 13 02:02:02.261321 env[1321]: time="2024-12-13T02:02:02.261287750Z" level=error msg="ContainerStatus for \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\": not found" Dec 13 02:02:02.261383 kubelet[2180]: E1213 02:02:02.261378 2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\": not found" containerID="51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033" Dec 13 02:02:02.261421 kubelet[2180]: I1213 02:02:02.261395 2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033"} err="failed to get container status \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\": rpc error: code = NotFound desc = an error occurred when try to find container \"51131c9f6c8c074e19b177d29e2541b11593d20336bb2b054f3d3203dfc59033\": not found" Dec 13 02:02:02.594889 systemd[1]: var-lib-kubelet-pods-f4b1a1d3\x2dd835\x2d498a\x2d8cc7\x2de5511e294ad1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2g7f4.mount: Deactivated successfully. Dec 13 02:02:02.595086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493-rootfs.mount: Deactivated successfully. Dec 13 02:02:02.595198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e47bfba46e12bd8af15aff1c4ce0979a560d923a7394f8cb71d4f9976c9c493-shm.mount: Deactivated successfully. Dec 13 02:02:02.595285 systemd[1]: var-lib-kubelet-pods-fad185a2\x2d169d\x2d4ef1\x2d9d53\x2dd8b47e3c30b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxzh44.mount: Deactivated successfully. Dec 13 02:02:02.595366 systemd[1]: var-lib-kubelet-pods-fad185a2\x2d169d\x2d4ef1\x2d9d53\x2dd8b47e3c30b8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:02:02.595459 systemd[1]: var-lib-kubelet-pods-fad185a2\x2d169d\x2d4ef1\x2d9d53\x2dd8b47e3c30b8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:02.812327 kubelet[2180]: I1213 02:02:02.812284 2180 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4b1a1d3-d835-498a-8cc7-e5511e294ad1" path="/var/lib/kubelet/pods/f4b1a1d3-d835-498a-8cc7-e5511e294ad1/volumes" Dec 13 02:02:02.812741 kubelet[2180]: I1213 02:02:02.812666 2180 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" path="/var/lib/kubelet/pods/fad185a2-169d-4ef1-9d53-d8b47e3c30b8/volumes" Dec 13 02:02:03.551182 sshd[3798]: pam_unix(sshd:session): session closed for user core Dec 13 02:02:03.553806 systemd[1]: Started sshd@23-10.0.0.65:22-10.0.0.1:58436.service. Dec 13 02:02:03.554409 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:58420.service: Deactivated successfully. Dec 13 02:02:03.555361 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:02:03.555440 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 02:02:03.556513 systemd-logind[1301]: Removed session 23. Dec 13 02:02:03.593027 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 58436 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:02:03.594335 sshd[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:02:03.598004 systemd-logind[1301]: New session 24 of user core. Dec 13 02:02:03.598743 systemd[1]: Started session-24.scope. Dec 13 02:02:04.115283 sshd[3964]: pam_unix(sshd:session): session closed for user core Dec 13 02:02:04.119147 systemd[1]: Started sshd@24-10.0.0.65:22-10.0.0.1:58450.service. Dec 13 02:02:04.122633 systemd[1]: sshd@23-10.0.0.65:22-10.0.0.1:58436.service: Deactivated successfully. Dec 13 02:02:04.126070 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:02:04.130934 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:02:04.142777 kubelet[2180]: I1213 02:02:04.133680 2180 topology_manager.go:215] "Topology Admit Handler" podUID="92c7a685-b975-477f-ae48-32db80feb1ab" podNamespace="kube-system" podName="cilium-p44tt" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133758 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="cilium-agent" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133768 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="mount-cgroup" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133774 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="apply-sysctl-overwrites" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133780 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="mount-bpf-fs" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133786 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="clean-cilium-state" Dec 13 02:02:04.142777 kubelet[2180]: E1213 02:02:04.133794 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f4b1a1d3-d835-498a-8cc7-e5511e294ad1" containerName="cilium-operator" Dec 13 02:02:04.142777 kubelet[2180]: I1213 02:02:04.133815 2180 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad185a2-169d-4ef1-9d53-d8b47e3c30b8" containerName="cilium-agent" Dec 13 02:02:04.142777 kubelet[2180]: I1213 02:02:04.133820 2180 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b1a1d3-d835-498a-8cc7-e5511e294ad1" containerName="cilium-operator" Dec 13 02:02:04.146267 systemd-logind[1301]: Removed session 24. 
Dec 13 02:02:04.177749 sshd[3976]: Accepted publickey for core from 10.0.0.1 port 58450 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.173999 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cni-path\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.174055 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-lib-modules\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.174123 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-kernel\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.174153 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-hostproc\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.174169 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-xtables-lock\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178224 kubelet[2180]: I1213 02:02:04.174187 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-cgroup\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 02:02:04.174207 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-ipsec-secrets\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 02:02:04.174225 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-net\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 02:02:04.174242 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-bpf-maps\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 
02:02:04.174259 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-etc-cni-netd\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 02:02:04.174274 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-clustermesh-secrets\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178446 kubelet[2180]: I1213 02:02:04.174293 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-hubble-tls\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178684 kubelet[2180]: I1213 02:02:04.174309 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsks4\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-kube-api-access-tsks4\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178684 kubelet[2180]: I1213 02:02:04.174326 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-run\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.178684 kubelet[2180]: I1213 02:02:04.174342 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-config-path\") pod \"cilium-p44tt\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " pod="kube-system/cilium-p44tt" Dec 13 02:02:04.187154 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:02:04.199026 systemd[1]: Started session-25.scope. Dec 13 02:02:04.199687 systemd-logind[1301]: New session 25 of user core. Dec 13 02:02:04.361158 sshd[3976]: pam_unix(sshd:session): session closed for user core Dec 13 02:02:04.365788 systemd[1]: Started sshd@25-10.0.0.65:22-10.0.0.1:58452.service. Dec 13 02:02:04.376032 env[1321]: time="2024-12-13T02:02:04.374919426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p44tt,Uid:92c7a685-b975-477f-ae48-32db80feb1ab,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:04.376375 kubelet[2180]: E1213 02:02:04.374286 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.370657 systemd[1]: sshd@24-10.0.0.65:22-10.0.0.1:58450.service: Deactivated successfully. Dec 13 02:02:04.371623 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:02:04.377412 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:02:04.378650 systemd-logind[1301]: Removed session 25. 
Dec 13 02:02:04.395091 env[1321]: time="2024-12-13T02:02:04.395002746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:04.395091 env[1321]: time="2024-12-13T02:02:04.395055586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:04.395326 env[1321]: time="2024-12-13T02:02:04.395069542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:04.395626 env[1321]: time="2024-12-13T02:02:04.395574070Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f pid=4005 runtime=io.containerd.runc.v2 Dec 13 02:02:04.405983 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 58452 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:02:04.407810 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:02:04.413620 systemd[1]: Started session-26.scope. Dec 13 02:02:04.414216 systemd-logind[1301]: New session 26 of user core. Dec 13 02:02:04.434093 env[1321]: time="2024-12-13T02:02:04.433993882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p44tt,Uid:92c7a685-b975-477f-ae48-32db80feb1ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\"" Dec 13 02:02:04.435117 kubelet[2180]: E1213 02:02:04.435094 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.437493 env[1321]: time="2024-12-13T02:02:04.437423492Z" level=info msg="CreateContainer within sandbox \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:02:04.454614 env[1321]: time="2024-12-13T02:02:04.454550129Z" level=info msg="CreateContainer within sandbox \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\"" Dec 13 02:02:04.456063 env[1321]: time="2024-12-13T02:02:04.456005381Z" level=info msg="StartContainer for \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\"" Dec 13 02:02:04.500968 env[1321]: time="2024-12-13T02:02:04.500894511Z" level=info msg="StartContainer for \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\" returns successfully" Dec 13 02:02:04.532788 env[1321]: time="2024-12-13T02:02:04.532740837Z" level=info msg="shim disconnected" id=9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a Dec 13 02:02:04.533039 env[1321]: time="2024-12-13T02:02:04.533019636Z" level=warning msg="cleaning up after shim disconnected" id=9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a namespace=k8s.io Dec 13 02:02:04.533138 env[1321]: time="2024-12-13T02:02:04.533120398Z" level=info msg="cleaning up dead shim" Dec 13 02:02:04.544764 env[1321]: time="2024-12-13T02:02:04.541857250Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4096 runtime=io.containerd.runc.v2\n" Dec 13 02:02:05.243649 
env[1321]: time="2024-12-13T02:02:05.243607231Z" level=info msg="StopPodSandbox for \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\"" Dec 13 02:02:05.243959 env[1321]: time="2024-12-13T02:02:05.243918522Z" level=info msg="Container to stop \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:05.268057 env[1321]: time="2024-12-13T02:02:05.267998029Z" level=info msg="shim disconnected" id=bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f Dec 13 02:02:05.268057 env[1321]: time="2024-12-13T02:02:05.268054475Z" level=warning msg="cleaning up after shim disconnected" id=bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f namespace=k8s.io Dec 13 02:02:05.268253 env[1321]: time="2024-12-13T02:02:05.268066137Z" level=info msg="cleaning up dead shim" Dec 13 02:02:05.275385 env[1321]: time="2024-12-13T02:02:05.275329060Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4128 runtime=io.containerd.runc.v2\n" Dec 13 02:02:05.275858 env[1321]: time="2024-12-13T02:02:05.275703541Z" level=info msg="TearDown network for sandbox \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\" successfully" Dec 13 02:02:05.275900 env[1321]: time="2024-12-13T02:02:05.275854498Z" level=info msg="StopPodSandbox for \"bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f\" returns successfully" Dec 13 02:02:05.281364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f-rootfs.mount: Deactivated successfully. Dec 13 02:02:05.281498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdde9712af51c2daa7cfe4da87a0160b4f82da2159c8cd2b00fe61098046b41f-shm.mount: Deactivated successfully. 
Dec 13 02:02:05.383430 kubelet[2180]: I1213 02:02:05.383363 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cni-path\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.383430 kubelet[2180]: I1213 02:02:05.383423 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-clustermesh-secrets\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.383430 kubelet[2180]: I1213 02:02:05.383446 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-hubble-tls\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383464 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-kernel\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383484 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-hostproc\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383499 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-net\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383517 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-bpf-maps\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383507 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.384321 kubelet[2180]: I1213 02:02:05.383537 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-config-path\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383555 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-lib-modules\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383560 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383571 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-xtables-lock\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383588 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-etc-cni-netd\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383618 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsks4\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-kube-api-access-tsks4\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384546 kubelet[2180]: I1213 02:02:05.383634 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-run\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.383687 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-cgroup\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.383707 2180 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-ipsec-secrets\") pod \"92c7a685-b975-477f-ae48-32db80feb1ab\" (UID: \"92c7a685-b975-477f-ae48-32db80feb1ab\") " Dec 13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.383776 2180 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cni-path\") on node \"localhost\" DevicePath \"\"" Dec 
13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.383792 2180 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-hostproc\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.383979 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.384806 kubelet[2180]: I1213 02:02:05.384024 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385026 kubelet[2180]: I1213 02:02:05.384045 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385026 kubelet[2180]: I1213 02:02:05.384215 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385026 kubelet[2180]: I1213 02:02:05.384243 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385026 kubelet[2180]: I1213 02:02:05.384474 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385026 kubelet[2180]: I1213 02:02:05.384498 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.385213 kubelet[2180]: I1213 02:02:05.384517 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:05.390355 kubelet[2180]: I1213 02:02:05.386647 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:02:05.390355 kubelet[2180]: I1213 02:02:05.386834 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:02:05.390355 kubelet[2180]: I1213 02:02:05.390178 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:02:05.390355 kubelet[2180]: I1213 02:02:05.390267 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:05.389154 systemd[1]: var-lib-kubelet-pods-92c7a685\x2db975\x2d477f\x2dae48\x2d32db80feb1ab-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:05.391325 kubelet[2180]: I1213 02:02:05.391285 2180 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-kube-api-access-tsks4" (OuterVolumeSpecName: "kube-api-access-tsks4") pod "92c7a685-b975-477f-ae48-32db80feb1ab" (UID: "92c7a685-b975-477f-ae48-32db80feb1ab"). InnerVolumeSpecName "kube-api-access-tsks4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:05.392082 systemd[1]: var-lib-kubelet-pods-92c7a685\x2db975\x2d477f\x2dae48\x2d32db80feb1ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtsks4.mount: Deactivated successfully. Dec 13 02:02:05.392234 systemd[1]: var-lib-kubelet-pods-92c7a685\x2db975\x2d477f\x2dae48\x2d32db80feb1ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:05.392333 systemd[1]: var-lib-kubelet-pods-92c7a685\x2db975\x2d477f\x2dae48\x2d32db80feb1ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484511 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484546 2180 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-lib-modules\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484556 2180 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-xtables-lock\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484565 2180 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484575 2180 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tsks4\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-kube-api-access-tsks4\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484583 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-run\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484592 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484577 kubelet[2180]: I1213 02:02:05.484603 2180 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484973 kubelet[2180]: I1213 02:02:05.484619 2180 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/92c7a685-b975-477f-ae48-32db80feb1ab-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484973 kubelet[2180]: I1213 02:02:05.484636 2180 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/92c7a685-b975-477f-ae48-32db80feb1ab-hubble-tls\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484973 kubelet[2180]: I1213 02:02:05.484652 2180 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484973 kubelet[2180]: I1213 02:02:05.484682 2180 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Dec 13 02:02:05.484973 kubelet[2180]: I1213 02:02:05.484693 2180 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/92c7a685-b975-477f-ae48-32db80feb1ab-bpf-maps\") on node \"localhost\" DevicePath \"\"" Dec 13 
02:02:05.810070 kubelet[2180]: E1213 02:02:05.810013 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.245700 kubelet[2180]: I1213 02:02:06.245665 2180 scope.go:117] "RemoveContainer" containerID="9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a" Dec 13 02:02:06.246865 env[1321]: time="2024-12-13T02:02:06.246486646Z" level=info msg="RemoveContainer for \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\"" Dec 13 02:02:06.250435 env[1321]: time="2024-12-13T02:02:06.250385915Z" level=info msg="RemoveContainer for \"9297b9191a512182515e99eceda8af0be9aec185bc341c4cb9606b92e5e9ab6a\" returns successfully" Dec 13 02:02:06.283495 kubelet[2180]: I1213 02:02:06.283456 2180 topology_manager.go:215] "Topology Admit Handler" podUID="5ac619d9-e8ed-409a-a498-e50d92a89c1d" podNamespace="kube-system" podName="cilium-dbtwm" Dec 13 02:02:06.283779 kubelet[2180]: E1213 02:02:06.283740 2180 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92c7a685-b975-477f-ae48-32db80feb1ab" containerName="mount-cgroup" Dec 13 02:02:06.283779 kubelet[2180]: I1213 02:02:06.283771 2180 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c7a685-b975-477f-ae48-32db80feb1ab" containerName="mount-cgroup" Dec 13 02:02:06.390978 kubelet[2180]: I1213 02:02:06.390914 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-hostproc\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.390978 kubelet[2180]: I1213 02:02:06.390957 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-etc-cni-netd\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.390978 kubelet[2180]: I1213 02:02:06.390974 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-cni-path\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.390978 kubelet[2180]: I1213 02:02:06.390991 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-lib-modules\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391007 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ac619d9-e8ed-409a-a498-e50d92a89c1d-clustermesh-secrets\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391025 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-host-proc-sys-net\") pod \"cilium-dbtwm\" (UID: 
\"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391043 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-bpf-maps\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391058 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-cilium-cgroup\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391076 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtcn\" (UniqueName: \"kubernetes.io/projected/5ac619d9-e8ed-409a-a498-e50d92a89c1d-kube-api-access-kmtcn\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391494 kubelet[2180]: I1213 02:02:06.391162 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ac619d9-e8ed-409a-a498-e50d92a89c1d-hubble-tls\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391641 kubelet[2180]: I1213 02:02:06.391189 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ac619d9-e8ed-409a-a498-e50d92a89c1d-cilium-ipsec-secrets\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391641 kubelet[2180]: I1213 02:02:06.391296 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-cilium-run\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391641 kubelet[2180]: I1213 02:02:06.391383 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-xtables-lock\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391641 kubelet[2180]: I1213 02:02:06.391424 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ac619d9-e8ed-409a-a498-e50d92a89c1d-host-proc-sys-kernel\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.391641 kubelet[2180]: I1213 02:02:06.391466 2180 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ac619d9-e8ed-409a-a498-e50d92a89c1d-cilium-config-path\") pod \"cilium-dbtwm\" (UID: \"5ac619d9-e8ed-409a-a498-e50d92a89c1d\") " pod="kube-system/cilium-dbtwm" Dec 13 02:02:06.587061 kubelet[2180]: E1213 02:02:06.586882 2180 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.587588 env[1321]: time="2024-12-13T02:02:06.587515987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbtwm,Uid:5ac619d9-e8ed-409a-a498-e50d92a89c1d,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:06.603918 env[1321]: time="2024-12-13T02:02:06.603813130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:06.603918 env[1321]: time="2024-12-13T02:02:06.603866210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:06.603918 env[1321]: time="2024-12-13T02:02:06.603876681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:06.604166 env[1321]: time="2024-12-13T02:02:06.604086369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c pid=4158 runtime=io.containerd.runc.v2 Dec 13 02:02:06.637866 env[1321]: time="2024-12-13T02:02:06.637792877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dbtwm,Uid:5ac619d9-e8ed-409a-a498-e50d92a89c1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\"" Dec 13 02:02:06.638688 kubelet[2180]: E1213 02:02:06.638647 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.641033 env[1321]: time="2024-12-13T02:02:06.640494449Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:02:06.690545 env[1321]: time="2024-12-13T02:02:06.690473132Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffd3e4bb7d5c8841b2384942900b5e5349bddb4e22d8471ee39a72e84aad72a1\"" Dec 13 02:02:06.691083 env[1321]: time="2024-12-13T02:02:06.691041081Z" level=info msg="StartContainer for \"ffd3e4bb7d5c8841b2384942900b5e5349bddb4e22d8471ee39a72e84aad72a1\"" Dec 13 02:02:06.741229 env[1321]: time="2024-12-13T02:02:06.738792282Z" level=info msg="StartContainer for \"ffd3e4bb7d5c8841b2384942900b5e5349bddb4e22d8471ee39a72e84aad72a1\" returns successfully" Dec 13 02:02:06.811412 kubelet[2180]: I1213 02:02:06.811372 2180 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="92c7a685-b975-477f-ae48-32db80feb1ab" path="/var/lib/kubelet/pods/92c7a685-b975-477f-ae48-32db80feb1ab/volumes" Dec 13 02:02:06.873920 kubelet[2180]: E1213 02:02:06.873791 2180 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:02:06.916191 env[1321]: time="2024-12-13T02:02:06.916111856Z" level=info msg="shim disconnected" id=ffd3e4bb7d5c8841b2384942900b5e5349bddb4e22d8471ee39a72e84aad72a1 Dec 13 02:02:06.916191 env[1321]: time="2024-12-13T02:02:06.916164667Z" level=warning msg="cleaning up after shim 
disconnected" id=ffd3e4bb7d5c8841b2384942900b5e5349bddb4e22d8471ee39a72e84aad72a1 namespace=k8s.io Dec 13 02:02:06.916191 env[1321]: time="2024-12-13T02:02:06.916174766Z" level=info msg="cleaning up dead shim" Dec 13 02:02:06.923229 env[1321]: time="2024-12-13T02:02:06.923162853Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4241 runtime=io.containerd.runc.v2\n" Dec 13 02:02:07.249197 kubelet[2180]: E1213 02:02:07.249035 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:07.251064 env[1321]: time="2024-12-13T02:02:07.251019625Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:02:07.262556 env[1321]: time="2024-12-13T02:02:07.262501079Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"303c0ad4f33af9ab58edbce7e99e41e713abef4ead468dc751fc4152604a475b\"" Dec 13 02:02:07.263182 env[1321]: time="2024-12-13T02:02:07.263121668Z" level=info msg="StartContainer for \"303c0ad4f33af9ab58edbce7e99e41e713abef4ead468dc751fc4152604a475b\"" Dec 13 02:02:07.302197 env[1321]: time="2024-12-13T02:02:07.302135554Z" level=info msg="StartContainer for \"303c0ad4f33af9ab58edbce7e99e41e713abef4ead468dc751fc4152604a475b\" returns successfully" Dec 13 02:02:07.321879 env[1321]: time="2024-12-13T02:02:07.321829687Z" level=info msg="shim disconnected" id=303c0ad4f33af9ab58edbce7e99e41e713abef4ead468dc751fc4152604a475b Dec 13 02:02:07.321879 env[1321]: time="2024-12-13T02:02:07.321871216Z" level=warning msg="cleaning up after shim disconnected" id=303c0ad4f33af9ab58edbce7e99e41e713abef4ead468dc751fc4152604a475b namespace=k8s.io Dec 13 02:02:07.321879 env[1321]: time="2024-12-13T02:02:07.321880484Z" level=info msg="cleaning up dead shim" Dec 13 02:02:07.328059 env[1321]: time="2024-12-13T02:02:07.328016056Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4302 runtime=io.containerd.runc.v2\n" Dec 13 02:02:08.252373 kubelet[2180]: E1213 02:02:08.252331 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:08.255009 env[1321]: time="2024-12-13T02:02:08.254935997Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:02:08.269743 env[1321]: time="2024-12-13T02:02:08.269644896Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3\"" Dec 13 02:02:08.270209 env[1321]: time="2024-12-13T02:02:08.270181597Z" level=info msg="StartContainer for \"058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3\"" Dec 13 02:02:08.322761 env[1321]: time="2024-12-13T02:02:08.322682963Z" level=info msg="StartContainer for 
\"058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3\" returns successfully" Dec 13 02:02:08.347065 env[1321]: time="2024-12-13T02:02:08.347005995Z" level=info msg="shim disconnected" id=058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3 Dec 13 02:02:08.347065 env[1321]: time="2024-12-13T02:02:08.347053717Z" level=warning msg="cleaning up after shim disconnected" id=058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3 namespace=k8s.io Dec 13 02:02:08.347065 env[1321]: time="2024-12-13T02:02:08.347062694Z" level=info msg="cleaning up dead shim" Dec 13 02:02:08.354212 env[1321]: time="2024-12-13T02:02:08.354177112Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4360 runtime=io.containerd.runc.v2\n" Dec 13 02:02:08.498093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058f80262b9019c2e9c948c36451fcbefc0ff8d15b53411e305e674f71b534e3-rootfs.mount: Deactivated successfully. Dec 13 02:02:08.576724 kubelet[2180]: I1213 02:02:08.576241 2180 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:02:08Z","lastTransitionTime":"2024-12-13T02:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:02:09.256100 kubelet[2180]: E1213 02:02:09.256069 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:09.258341 env[1321]: time="2024-12-13T02:02:09.258293942Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:02:09.294940 env[1321]: time="2024-12-13T02:02:09.294881657Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32\"" Dec 13 02:02:09.295454 env[1321]: time="2024-12-13T02:02:09.295429248Z" level=info msg="StartContainer for \"5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32\"" Dec 13 02:02:09.366910 env[1321]: time="2024-12-13T02:02:09.366829602Z" level=info msg="StartContainer for \"5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32\" returns successfully" Dec 13 02:02:09.389201 env[1321]: time="2024-12-13T02:02:09.389110349Z" level=info msg="shim disconnected" id=5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32 Dec 13 02:02:09.389201 env[1321]: time="2024-12-13T02:02:09.389183077Z" level=warning msg="cleaning up after shim disconnected" id=5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32 namespace=k8s.io Dec 13 02:02:09.389201 env[1321]: time="2024-12-13T02:02:09.389195290Z" level=info msg="cleaning up dead shim" Dec 13 02:02:09.397517 env[1321]: time="2024-12-13T02:02:09.397433583Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4414 runtime=io.containerd.runc.v2\n" Dec 13 02:02:09.497890 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-5685ba6d38f8e821c798b341f86815d203eac23f3f15ab0c7dc7ad423c3d4c32-rootfs.mount: Deactivated successfully. Dec 13 02:02:10.260969 kubelet[2180]: E1213 02:02:10.260917 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:10.264583 env[1321]: time="2024-12-13T02:02:10.264532300Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:02:10.296515 env[1321]: time="2024-12-13T02:02:10.296467228Z" level=info msg="CreateContainer within sandbox \"e19b1609111c6a537d7f7c6b7e6787980950c008d98d5c49845bcb33dfba030c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"43c5274a02bf97197dd8f9d823430d4d8ab808234b95fc1a7c488ca76fcec6fe\"" Dec 13 02:02:10.297975 env[1321]: time="2024-12-13T02:02:10.297917459Z" level=info msg="StartContainer for \"43c5274a02bf97197dd8f9d823430d4d8ab808234b95fc1a7c488ca76fcec6fe\"" Dec 13 02:02:10.349304 env[1321]: time="2024-12-13T02:02:10.349232833Z" level=info msg="StartContainer for \"43c5274a02bf97197dd8f9d823430d4d8ab808234b95fc1a7c488ca76fcec6fe\" returns successfully" Dec 13 02:02:10.622834 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:02:11.265679 kubelet[2180]: E1213 02:02:11.265624 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:11.384925 kubelet[2180]: I1213 02:02:11.384858 2180 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dbtwm" podStartSLOduration=5.384805656 podStartE2EDuration="5.384805656s" podCreationTimestamp="2024-12-13 02:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:11.383992958 +0000 UTC m=+94.667662876" watchObservedRunningTime="2024-12-13 02:02:11.384805656 +0000 UTC m=+94.668475554" Dec 13 02:02:12.588106 kubelet[2180]: E1213 02:02:12.588048 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:13.343436 systemd-networkd[1090]: lxc_health: Link UP Dec 13 02:02:13.356749 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:02:13.356780 systemd-networkd[1090]: lxc_health: Gained carrier Dec 13 02:02:13.810920 kubelet[2180]: E1213 02:02:13.810872 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:14.463923 systemd-networkd[1090]: lxc_health: Gained IPv6LL Dec 13 02:02:14.589310 kubelet[2180]: E1213 02:02:14.589248 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:14.788539 systemd[1]: run-containerd-runc-k8s.io-43c5274a02bf97197dd8f9d823430d4d8ab808234b95fc1a7c488ca76fcec6fe-runc.j5UKzq.mount: Deactivated successfully. 
Dec 13 02:02:14.813672 kubelet[2180]: E1213 02:02:14.812012 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:02:15.271980 kubelet[2180]: E1213 02:02:15.271942 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:02:16.274056 kubelet[2180]: E1213 02:02:16.274017 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 02:02:19.013444 sshd[3995]: pam_unix(sshd:session): session closed for user core
Dec 13 02:02:19.015773 systemd[1]: sshd@25-10.0.0.65:22-10.0.0.1:58452.service: Deactivated successfully.
Dec 13 02:02:19.016706 systemd-logind[1301]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:02:19.016768 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:02:19.017682 systemd-logind[1301]: Removed session 26.