Mar 17 18:35:52.909268 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Mar 17 17:12:34 -00 2025
Mar 17 18:35:52.909287 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:35:52.909295 kernel: BIOS-provided physical RAM map:
Mar 17 18:35:52.909301 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 18:35:52.909306 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 18:35:52.909311 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 18:35:52.909318 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 18:35:52.909324 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 18:35:52.909331 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 18:35:52.909336 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 18:35:52.909342 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 18:35:52.909347 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 18:35:52.909353 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 18:35:52.909358 kernel: NX (Execute Disable) protection: active
Mar 17 18:35:52.909366 kernel: SMBIOS 2.8 present.
Mar 17 18:35:52.909372 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 18:35:52.909378 kernel: Hypervisor detected: KVM
Mar 17 18:35:52.909384 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 18:35:52.909390 kernel: kvm-clock: cpu 0, msr 7d19a001, primary cpu clock
Mar 17 18:35:52.909396 kernel: kvm-clock: using sched offset of 3098981470 cycles
Mar 17 18:35:52.909403 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 18:35:52.909409 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 18:35:52.909415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 18:35:52.909423 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 18:35:52.909429 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 18:35:52.909435 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 18:35:52.909441 kernel: Using GB pages for direct mapping
Mar 17 18:35:52.909447 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:35:52.909453 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 18:35:52.909459 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909466 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909472 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909479 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 18:35:52.909486 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909493 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909499 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909507 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:35:52.909514 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 18:35:52.909520 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 18:35:52.909526 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 18:35:52.909536 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 18:35:52.909542 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 18:35:52.909549 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 18:35:52.909556 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 18:35:52.909562 kernel: No NUMA configuration found
Mar 17 18:35:52.909569 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 18:35:52.909576 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 18:35:52.909583 kernel: Zone ranges:
Mar 17 18:35:52.909589 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 18:35:52.909596 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 18:35:52.909602 kernel: Normal empty
Mar 17 18:35:52.909609 kernel: Movable zone start for each node
Mar 17 18:35:52.909615 kernel: Early memory node ranges
Mar 17 18:35:52.909622 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 18:35:52.909628 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 18:35:52.909634 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 18:35:52.909642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 18:35:52.909649 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 18:35:52.909655 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 18:35:52.909662 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 18:35:52.909676 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 18:35:52.909682 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 18:35:52.909689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 18:35:52.909695 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 18:35:52.909702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 18:35:52.909710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 18:35:52.909717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 18:35:52.909723 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 18:35:52.909730 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 18:35:52.909736 kernel: TSC deadline timer available
Mar 17 18:35:52.909743 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 18:35:52.909749 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 18:35:52.909756 kernel: kvm-guest: setup PV sched yield
Mar 17 18:35:52.909773 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 18:35:52.909781 kernel: Booting paravirtualized kernel on KVM
Mar 17 18:35:52.909788 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 18:35:52.909795 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Mar 17 18:35:52.909802 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Mar 17 18:35:52.909808 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Mar 17 18:35:52.909815 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 18:35:52.909821 kernel: kvm-guest: setup async PF for cpu 0
Mar 17 18:35:52.909828 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Mar 17 18:35:52.909834 kernel: kvm-guest: PV spinlocks enabled
Mar 17 18:35:52.909842 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 18:35:52.909848 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 18:35:52.909855 kernel: Policy zone: DMA32
Mar 17 18:35:52.909863 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a
Mar 17 18:35:52.909870 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:35:52.909876 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:35:52.909883 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:35:52.909889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:35:52.909897 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2278K rwdata, 13724K rodata, 47472K init, 4108K bss, 134796K reserved, 0K cma-reserved)
Mar 17 18:35:52.909904 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:35:52.909910 kernel: ftrace: allocating 34580 entries in 136 pages
Mar 17 18:35:52.909917 kernel: ftrace: allocated 136 pages with 2 groups
Mar 17 18:35:52.909923 kernel: rcu: Hierarchical RCU implementation.
Mar 17 18:35:52.909930 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:35:52.909937 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:35:52.909943 kernel: Rude variant of Tasks RCU enabled.
Mar 17 18:35:52.909950 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:35:52.909958 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:35:52.909964 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:35:52.909971 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 18:35:52.909977 kernel: random: crng init done
Mar 17 18:35:52.909983 kernel: Console: colour VGA+ 80x25
Mar 17 18:35:52.909990 kernel: printk: console [ttyS0] enabled
Mar 17 18:35:52.909996 kernel: ACPI: Core revision 20210730
Mar 17 18:35:52.910003 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 18:35:52.910010 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 18:35:52.910017 kernel: x2apic enabled
Mar 17 18:35:52.910024 kernel: Switched APIC routing to physical x2apic.
Mar 17 18:35:52.910030 kernel: kvm-guest: setup PV IPIs
Mar 17 18:35:52.910037 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 18:35:52.910043 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 18:35:52.910050 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 18:35:52.910057 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 18:35:52.910063 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 18:35:52.910070 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 18:35:52.910082 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 18:35:52.910089 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 18:35:52.910096 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 18:35:52.910104 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 18:35:52.910111 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 18:35:52.910117 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 18:35:52.910124 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 18:35:52.910131 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Mar 17 18:35:52.910138 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 18:35:52.910146 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 18:35:52.910153 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 18:35:52.910160 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 18:35:52.910167 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 18:35:52.910174 kernel: Freeing SMP alternatives memory: 32K
Mar 17 18:35:52.910181 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:35:52.910188 kernel: LSM: Security Framework initializing
Mar 17 18:35:52.910195 kernel: SELinux: Initializing.
Mar 17 18:35:52.910203 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:35:52.910210 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:35:52.910217 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 18:35:52.910223 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 18:35:52.910230 kernel: ... version: 0
Mar 17 18:35:52.910237 kernel: ... bit width: 48
Mar 17 18:35:52.910244 kernel: ... generic registers: 6
Mar 17 18:35:52.910250 kernel: ... value mask: 0000ffffffffffff
Mar 17 18:35:52.910258 kernel: ... max period: 00007fffffffffff
Mar 17 18:35:52.910265 kernel: ... fixed-purpose events: 0
Mar 17 18:35:52.910272 kernel: ... event mask: 000000000000003f
Mar 17 18:35:52.910279 kernel: signal: max sigframe size: 1776
Mar 17 18:35:52.910286 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:35:52.910293 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:35:52.910300 kernel: x86: Booting SMP configuration:
Mar 17 18:35:52.910306 kernel: .... node #0, CPUs: #1
Mar 17 18:35:52.910313 kernel: kvm-clock: cpu 1, msr 7d19a041, secondary cpu clock
Mar 17 18:35:52.910320 kernel: kvm-guest: setup async PF for cpu 1
Mar 17 18:35:52.910328 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Mar 17 18:35:52.910335 kernel: #2
Mar 17 18:35:52.910341 kernel: kvm-clock: cpu 2, msr 7d19a081, secondary cpu clock
Mar 17 18:35:52.910348 kernel: kvm-guest: setup async PF for cpu 2
Mar 17 18:35:52.910355 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Mar 17 18:35:52.910362 kernel: #3
Mar 17 18:35:52.910368 kernel: kvm-clock: cpu 3, msr 7d19a0c1, secondary cpu clock
Mar 17 18:35:52.910375 kernel: kvm-guest: setup async PF for cpu 3
Mar 17 18:35:52.910382 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Mar 17 18:35:52.910390 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:35:52.910397 kernel: smpboot: Max logical packages: 1
Mar 17 18:35:52.910403 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 18:35:52.910410 kernel: devtmpfs: initialized
Mar 17 18:35:52.910417 kernel: x86/mm: Memory block size: 128MB
Mar 17 18:35:52.910424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:35:52.910431 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:35:52.910438 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:35:52.910445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:35:52.910453 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:35:52.910460 kernel: audit: type=2000 audit(1742236552.222:1): state=initialized audit_enabled=0 res=1
Mar 17 18:35:52.910467 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:35:52.910474 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 18:35:52.910480 kernel: cpuidle: using governor menu
Mar 17 18:35:52.910487 kernel: ACPI: bus type PCI registered
Mar 17 18:35:52.910496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:35:52.910503 kernel: dca service started, version 1.12.1
Mar 17 18:35:52.910512 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 18:35:52.910521 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Mar 17 18:35:52.910528 kernel: PCI: Using configuration type 1 for base access
Mar 17 18:35:52.910535 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 18:35:52.910542 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:35:52.910549 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:35:52.910555 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:35:52.910562 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:35:52.910569 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:35:52.910576 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:35:52.910584 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:35:52.910591 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:35:52.910597 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:35:52.910604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:35:52.910611 kernel: ACPI: Interpreter enabled
Mar 17 18:35:52.910618 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 18:35:52.910625 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 18:35:52.910631 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 18:35:52.910638 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 18:35:52.910647 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:35:52.910796 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:35:52.910877 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 18:35:52.910953 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 18:35:52.910962 kernel: PCI host bridge to bus 0000:00
Mar 17 18:35:52.911052 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 18:35:52.911125 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 18:35:52.911196 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 18:35:52.911262 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 18:35:52.911326 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 18:35:52.911392 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 18:35:52.911458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:35:52.911555 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 18:35:52.911653 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 18:35:52.911741 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 18:35:52.911836 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 18:35:52.911912 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 18:35:52.911986 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 18:35:52.912083 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:35:52.912158 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 18:35:52.912240 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 18:35:52.912314 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 18:35:52.912408 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 18:35:52.912483 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 18:35:52.912556 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 18:35:52.912631 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 18:35:52.912730 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 18:35:52.912824 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 18:35:52.912900 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 18:35:52.912974 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 18:35:52.913048 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 18:35:52.913141 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 18:35:52.913217 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 18:35:52.913305 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 18:35:52.913384 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 18:35:52.913457 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 18:35:52.913548 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 18:35:52.913622 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 18:35:52.913632 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 18:35:52.913639 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 18:35:52.913646 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 18:35:52.913656 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 18:35:52.913662 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 18:35:52.913677 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 18:35:52.913684 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 18:35:52.913691 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 18:35:52.913697 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 18:35:52.913705 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 18:35:52.913712 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 18:35:52.913719 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 18:35:52.913727 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 18:35:52.913734 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 18:35:52.913741 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 18:35:52.913748 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 18:35:52.913755 kernel: iommu: Default domain type: Translated
Mar 17 18:35:52.913800 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 18:35:52.913880 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 18:35:52.913954 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 18:35:52.914025 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 18:35:52.914037 kernel: vgaarb: loaded
Mar 17 18:35:52.914044 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:35:52.914051 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:35:52.914059 kernel: PTP clock support registered
Mar 17 18:35:52.914065 kernel: PCI: Using ACPI for IRQ routing
Mar 17 18:35:52.914072 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 18:35:52.914079 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 18:35:52.914086 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 18:35:52.914094 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 18:35:52.914101 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 18:35:52.914108 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 18:35:52.914115 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:35:52.914122 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:35:52.914129 kernel: pnp: PnP ACPI init
Mar 17 18:35:52.914236 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 18:35:52.914248 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 18:35:52.914255 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 18:35:52.914264 kernel: NET: Registered PF_INET protocol family
Mar 17 18:35:52.914271 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:35:52.914278 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:35:52.914285 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:35:52.914292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:35:52.914299 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:35:52.914305 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:35:52.914312 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:35:52.914321 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:35:52.914328 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:35:52.914335 kernel: NET: Registered PF_XDP protocol family
Mar 17 18:35:52.914403 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 18:35:52.914468 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 18:35:52.914537 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 18:35:52.914600 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 18:35:52.914665 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 18:35:52.914739 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 18:35:52.914751 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:35:52.914769 kernel: Initialise system trusted keyrings
Mar 17 18:35:52.914776 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:35:52.914783 kernel: Key type asymmetric registered
Mar 17 18:35:52.914790 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:35:52.914797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:35:52.914804 kernel: io scheduler mq-deadline registered
Mar 17 18:35:52.914811 kernel: io scheduler kyber registered
Mar 17 18:35:52.914818 kernel: io scheduler bfq registered
Mar 17 18:35:52.914826 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 18:35:52.914834 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 18:35:52.914841 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 18:35:52.914848 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 18:35:52.914854 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:35:52.914862 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 18:35:52.914869 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 18:35:52.914876 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 18:35:52.914883 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 18:35:52.914891 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 18:35:52.914972 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 18:35:52.915042 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 18:35:52.915111 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T18:35:52 UTC (1742236552)
Mar 17 18:35:52.915180 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 18:35:52.915189 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:35:52.915196 kernel: Segment Routing with IPv6
Mar 17 18:35:52.915203 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:35:52.915213 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:35:52.915220 kernel: Key type dns_resolver registered
Mar 17 18:35:52.915226 kernel: IPI shorthand broadcast: enabled
Mar 17 18:35:52.915233 kernel: sched_clock: Marking stable (484511665, 101855869)->(628760847, -42393313)
Mar 17 18:35:52.915241 kernel: registered taskstats version 1
Mar 17 18:35:52.915248 kernel: Loading compiled-in X.509 certificates
Mar 17 18:35:52.915255 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: d5b956bbabb2d386c0246a969032c0de9eaa8220'
Mar 17 18:35:52.915262 kernel: Key type .fscrypt registered
Mar 17 18:35:52.915268 kernel: Key type fscrypt-provisioning registered
Mar 17 18:35:52.915276 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:35:52.915283 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:35:52.915290 kernel: ima: No architecture policies found
Mar 17 18:35:52.915297 kernel: clk: Disabling unused clocks
Mar 17 18:35:52.915304 kernel: Freeing unused kernel image (initmem) memory: 47472K
Mar 17 18:35:52.915311 kernel: Write protecting the kernel read-only data: 28672k
Mar 17 18:35:52.915318 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Mar 17 18:35:52.915325 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
Mar 17 18:35:52.915333 kernel: Run /init as init process
Mar 17 18:35:52.915339 kernel: with arguments:
Mar 17 18:35:52.915346 kernel: /init
Mar 17 18:35:52.915353 kernel: with environment:
Mar 17 18:35:52.915359 kernel: HOME=/
Mar 17 18:35:52.915366 kernel: TERM=linux
Mar 17 18:35:52.915373 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:35:52.915382 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:35:52.915391 systemd[1]: Detected virtualization kvm.
Mar 17 18:35:52.915400 systemd[1]: Detected architecture x86-64.
Mar 17 18:35:52.915407 systemd[1]: Running in initrd.
Mar 17 18:35:52.915414 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:35:52.915421 systemd[1]: Hostname set to .
Mar 17 18:35:52.915429 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:35:52.915436 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:35:52.915444 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:35:52.915451 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:35:52.915460 systemd[1]: Reached target paths.target.
Mar 17 18:35:52.915473 systemd[1]: Reached target slices.target.
Mar 17 18:35:52.915482 systemd[1]: Reached target swap.target.
Mar 17 18:35:52.915490 systemd[1]: Reached target timers.target.
Mar 17 18:35:52.915498 systemd[1]: Listening on iscsid.socket.
Mar 17 18:35:52.915506 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:35:52.915514 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:35:52.915522 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:35:52.915529 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:35:52.915537 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:35:52.915545 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:35:52.915552 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:35:52.915560 systemd[1]: Reached target sockets.target.
Mar 17 18:35:52.915567 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:35:52.915576 systemd[1]: Finished network-cleanup.service.
Mar 17 18:35:52.915584 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:35:52.915592 systemd[1]: Starting systemd-journald.service...
Mar 17 18:35:52.915599 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:35:52.915607 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:35:52.915615 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:35:52.915622 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:35:52.915630 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:35:52.915637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:35:52.915650 systemd-journald[199]: Journal started
Mar 17 18:35:52.915694 systemd-journald[199]: Runtime Journal (/run/log/journal/7cccb7ceb8484e09b42eee358e6ad024) is 6.0M, max 48.5M, 42.5M free.
Mar 17 18:35:52.921246 systemd-modules-load[200]: Inserted module 'overlay'
Mar 17 18:35:52.954895 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:35:52.954912 kernel: Bridge firewalling registered
Mar 17 18:35:52.930845 systemd-resolved[201]: Positive Trust Anchors:
Mar 17 18:35:52.930854 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:35:52.930890 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:35:52.933100 systemd-resolved[201]: Defaulting to hostname 'linux'.
Mar 17 18:35:52.954878 systemd-modules-load[200]: Inserted module 'br_netfilter'
Mar 17 18:35:52.965861 systemd[1]: Started systemd-journald.service.
Mar 17 18:35:52.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.968985 systemd[1]: Started systemd-resolved.service.
Mar 17 18:35:52.970605 kernel: audit: type=1130 audit(1742236552.965:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.970622 kernel: audit: type=1130 audit(1742236552.969:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.973605 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:35:52.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.978407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:52.980654 kernel: audit: type=1130 audit(1742236552.974:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.980676 kernel: SCSI subsystem initialized
Mar 17 18:35:52.980685 kernel: audit: type=1130 audit(1742236552.979:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:52.980748 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:35:52.985756 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:35:52.990208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:35:52.990227 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:35:52.991430 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:35:52.994129 systemd-modules-load[200]: Inserted module 'dm_multipath'
Mar 17 18:35:52.994944 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:35:52.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:52.995940 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:35:52.998795 kernel: audit: type=1130 audit(1742236552.994:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.006959 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:35:53.012379 kernel: audit: type=1130 audit(1742236553.007:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.012409 kernel: audit: type=1130 audit(1742236553.011:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.008023 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:35:53.013096 systemd[1]: Starting dracut-cmdline.service... 
Mar 17 18:35:53.023015 dracut-cmdline[221]: dracut-dracut-053 Mar 17 18:35:53.025103 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=249ccd113f901380672c0d31e18f792e8e0344094c0e39eedc449f039418b31a Mar 17 18:35:53.082815 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:35:53.098802 kernel: iscsi: registered transport (tcp) Mar 17 18:35:53.120799 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:35:53.120867 kernel: QLogic iSCSI HBA Driver Mar 17 18:35:53.153370 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:35:53.159048 kernel: audit: type=1130 audit(1742236553.153:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.155132 systemd[1]: Starting dracut-pre-udev.service... 
Mar 17 18:35:53.224788 kernel: raid6: avx2x4 gen() 30067 MB/s Mar 17 18:35:53.241776 kernel: raid6: avx2x4 xor() 7359 MB/s Mar 17 18:35:53.258785 kernel: raid6: avx2x2 gen() 32250 MB/s Mar 17 18:35:53.275776 kernel: raid6: avx2x2 xor() 19197 MB/s Mar 17 18:35:53.292780 kernel: raid6: avx2x1 gen() 26356 MB/s Mar 17 18:35:53.309776 kernel: raid6: avx2x1 xor() 15265 MB/s Mar 17 18:35:53.326777 kernel: raid6: sse2x4 gen() 14661 MB/s Mar 17 18:35:53.343777 kernel: raid6: sse2x4 xor() 7401 MB/s Mar 17 18:35:53.360777 kernel: raid6: sse2x2 gen() 16122 MB/s Mar 17 18:35:53.377776 kernel: raid6: sse2x2 xor() 9812 MB/s Mar 17 18:35:53.394779 kernel: raid6: sse2x1 gen() 12206 MB/s Mar 17 18:35:53.412164 kernel: raid6: sse2x1 xor() 7734 MB/s Mar 17 18:35:53.412176 kernel: raid6: using algorithm avx2x2 gen() 32250 MB/s Mar 17 18:35:53.412185 kernel: raid6: .... xor() 19197 MB/s, rmw enabled Mar 17 18:35:53.412888 kernel: raid6: using avx2x2 recovery algorithm Mar 17 18:35:53.424784 kernel: xor: automatically using best checksumming function avx Mar 17 18:35:53.514786 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Mar 17 18:35:53.522192 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:35:53.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.525000 audit: BPF prog-id=7 op=LOAD Mar 17 18:35:53.525000 audit: BPF prog-id=8 op=LOAD Mar 17 18:35:53.526777 kernel: audit: type=1130 audit(1742236553.522:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.527013 systemd[1]: Starting systemd-udevd.service... Mar 17 18:35:53.538682 systemd-udevd[398]: Using default interface naming scheme 'v252'. Mar 17 18:35:53.542553 systemd[1]: Started systemd-udevd.service. 
Mar 17 18:35:53.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.543273 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:35:53.553297 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Mar 17 18:35:53.575563 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:35:53.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.577936 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:35:53.611292 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:35:53.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:53.642132 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 18:35:53.647973 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:35:53.647986 kernel: GPT:9289727 != 19775487 Mar 17 18:35:53.647995 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:35:53.648009 kernel: GPT:9289727 != 19775487 Mar 17 18:35:53.648022 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:35:53.648030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:35:53.672807 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:35:53.686989 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 18:35:53.687017 kernel: AES CTR mode by8 optimization enabled Mar 17 18:35:53.689776 kernel: libata version 3.00 loaded. Mar 17 18:35:53.693669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Mar 17 18:35:53.734170 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 18:35:53.734305 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 18:35:53.734316 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 18:35:53.734401 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 18:35:53.734479 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Mar 17 18:35:53.734489 kernel: scsi host0: ahci Mar 17 18:35:53.734581 kernel: scsi host1: ahci Mar 17 18:35:53.734678 kernel: scsi host2: ahci Mar 17 18:35:53.734783 kernel: scsi host3: ahci Mar 17 18:35:53.734881 kernel: scsi host4: ahci Mar 17 18:35:53.734982 kernel: scsi host5: ahci Mar 17 18:35:53.735072 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 17 18:35:53.735082 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 17 18:35:53.735090 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 17 18:35:53.735099 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 17 18:35:53.735108 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 17 18:35:53.735117 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 17 18:35:53.744322 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:35:53.748601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:35:53.751130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:35:53.756090 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:35:53.758606 systemd[1]: Starting disk-uuid.service... 
Mar 17 18:35:54.016294 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 18:35:54.016371 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 18:35:54.016381 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 18:35:54.016390 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 18:35:54.017792 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 18:35:54.018785 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 18:35:54.019784 kernel: ata3.00: applying bridge limits Mar 17 18:35:54.019797 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 18:35:54.020789 kernel: ata3.00: configured for UDMA/100 Mar 17 18:35:54.021781 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 18:35:54.036494 disk-uuid[534]: Primary Header is updated. Mar 17 18:35:54.036494 disk-uuid[534]: Secondary Entries is updated. Mar 17 18:35:54.036494 disk-uuid[534]: Secondary Header is updated. Mar 17 18:35:54.040790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:35:54.043790 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:35:54.053381 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 18:35:54.070782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 18:35:54.070797 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 18:35:55.044800 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:35:55.045093 disk-uuid[547]: The operation has completed successfully. Mar 17 18:35:55.065932 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:35:55.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:55.066031 systemd[1]: Finished disk-uuid.service. Mar 17 18:35:55.075839 systemd[1]: Starting verity-setup.service... Mar 17 18:35:55.087785 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 18:35:55.107110 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:35:55.108510 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:35:55.110518 systemd[1]: Finished verity-setup.service. Mar 17 18:35:55.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.167781 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:35:55.167938 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:35:55.169512 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:35:55.170236 systemd[1]: Starting ignition-setup.service... Mar 17 18:35:55.172176 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:35:55.180053 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:35:55.180080 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:35:55.180089 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:35:55.188625 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:35:55.196244 systemd[1]: Finished ignition-setup.service. Mar 17 18:35:55.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.197987 systemd[1]: Starting ignition-fetch-offline.service... 
Mar 17 18:35:55.233220 ignition[647]: Ignition 2.14.0 Mar 17 18:35:55.233232 ignition[647]: Stage: fetch-offline Mar 17 18:35:55.233302 ignition[647]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:35:55.233311 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:35:55.233408 ignition[647]: parsed url from cmdline: "" Mar 17 18:35:55.233412 ignition[647]: no config URL provided Mar 17 18:35:55.233416 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:35:55.233423 ignition[647]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:35:55.233445 ignition[647]: op(1): [started] loading QEMU firmware config module Mar 17 18:35:55.233449 ignition[647]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:35:55.238860 ignition[647]: op(1): [finished] loading QEMU firmware config module Mar 17 18:35:55.247000 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:35:55.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.248000 audit: BPF prog-id=9 op=LOAD Mar 17 18:35:55.249460 systemd[1]: Starting systemd-networkd.service... Mar 17 18:35:55.283313 ignition[647]: parsing config with SHA512: d0b91c2538c9f8e805df6900d6235f5731fde923ccd3822f812775b6a4988001fef213ffe66d7a197164af3c07d9299f7d658ac9a1ce6ca2539a5328280813de Mar 17 18:35:55.290903 unknown[647]: fetched base config from "system" Mar 17 18:35:55.290915 unknown[647]: fetched user config from "qemu" Mar 17 18:35:55.291386 ignition[647]: fetch-offline: fetch-offline passed Mar 17 18:35:55.291435 ignition[647]: Ignition finished successfully Mar 17 18:35:55.295003 systemd[1]: Finished ignition-fetch-offline.service. 
Mar 17 18:35:55.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.301711 systemd-networkd[728]: lo: Link UP Mar 17 18:35:55.301721 systemd-networkd[728]: lo: Gained carrier Mar 17 18:35:55.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.302215 systemd-networkd[728]: Enumeration completed Mar 17 18:35:55.302289 systemd[1]: Started systemd-networkd.service. Mar 17 18:35:55.302435 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:35:55.303870 systemd-networkd[728]: eth0: Link UP Mar 17 18:35:55.303873 systemd-networkd[728]: eth0: Gained carrier Mar 17 18:35:55.303996 systemd[1]: Reached target network.target. Mar 17 18:35:55.305484 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:35:55.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.306209 systemd[1]: Starting ignition-kargs.service... Mar 17 18:35:55.316694 ignition[730]: Ignition 2.14.0 Mar 17 18:35:55.307715 systemd[1]: Starting iscsiuio.service... Mar 17 18:35:55.316702 ignition[730]: Stage: kargs Mar 17 18:35:55.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.312089 systemd[1]: Started iscsiuio.service. 
Mar 17 18:35:55.321021 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:35:55.321021 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:35:55.321021 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:35:55.321021 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:35:55.321021 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:35:55.321021 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:35:55.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.316821 ignition[730]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:35:55.314146 systemd[1]: Starting iscsid.service... Mar 17 18:35:55.316830 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:35:55.318846 systemd[1]: Finished ignition-kargs.service. Mar 17 18:35:55.317664 ignition[730]: kargs: kargs passed Mar 17 18:35:55.321014 systemd[1]: Started iscsid.service. Mar 17 18:35:55.317701 ignition[730]: Ignition finished successfully Mar 17 18:35:55.323820 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:35:55.339123 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:35:55.341471 systemd[1]: Starting ignition-disks.service... 
Mar 17 18:35:55.348932 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:35:55.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.350662 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:35:55.352305 ignition[741]: Ignition 2.14.0 Mar 17 18:35:55.352331 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:35:55.352314 ignition[741]: Stage: disks Mar 17 18:35:55.353795 systemd[1]: Reached target remote-fs.target. Mar 17 18:35:55.352452 ignition[741]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:35:55.352464 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:35:55.353553 ignition[741]: disks: disks passed Mar 17 18:35:55.353595 ignition[741]: Ignition finished successfully Mar 17 18:35:55.360332 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:35:55.361999 systemd[1]: Finished ignition-disks.service. Mar 17 18:35:55.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.363716 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:35:55.364615 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:35:55.366224 systemd[1]: Reached target local-fs.target. Mar 17 18:35:55.367008 systemd[1]: Reached target sysinit.target. Mar 17 18:35:55.368522 systemd[1]: Reached target basic.target. Mar 17 18:35:55.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.370205 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:35:55.372231 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 18:35:55.391222 systemd-fsck[761]: ROOT: clean, 623/553520 files, 56022/553472 blocks Mar 17 18:35:55.563944 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:35:55.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.565596 systemd[1]: Mounting sysroot.mount... Mar 17 18:35:55.592449 systemd[1]: Mounted sysroot.mount. Mar 17 18:35:55.593844 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:35:55.593218 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:35:55.595477 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:35:55.596389 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:35:55.596420 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:35:55.596439 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:35:55.598403 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:35:55.600446 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:35:55.604806 initrd-setup-root[771]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:35:55.607859 initrd-setup-root[779]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:35:55.610734 initrd-setup-root[787]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:35:55.613445 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:35:55.637154 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:35:55.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:35:55.639468 systemd[1]: Starting ignition-mount.service... Mar 17 18:35:55.641469 systemd[1]: Starting sysroot-boot.service... Mar 17 18:35:55.644463 bash[812]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:35:55.653187 ignition[814]: INFO : Ignition 2.14.0 Mar 17 18:35:55.654242 ignition[814]: INFO : Stage: mount Mar 17 18:35:55.654242 ignition[814]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:35:55.654242 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:35:55.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:55.658188 ignition[814]: INFO : mount: mount passed Mar 17 18:35:55.658188 ignition[814]: INFO : Ignition finished successfully Mar 17 18:35:55.655418 systemd[1]: Finished ignition-mount.service. Mar 17 18:35:55.661384 systemd[1]: Finished sysroot-boot.service. Mar 17 18:35:55.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:56.117345 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:35:56.123787 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (822) Mar 17 18:35:56.123807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 18:35:56.125238 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:35:56.125253 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:35:56.128913 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:35:56.130388 systemd[1]: Starting ignition-files.service... 
Mar 17 18:35:56.143422 ignition[842]: INFO : Ignition 2.14.0 Mar 17 18:35:56.143422 ignition[842]: INFO : Stage: files Mar 17 18:35:56.145290 ignition[842]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:35:56.145290 ignition[842]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:35:56.145290 ignition[842]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:35:56.149554 ignition[842]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:35:56.149554 ignition[842]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:35:56.149554 ignition[842]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:35:56.149554 ignition[842]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:35:56.149554 ignition[842]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:35:56.149270 unknown[842]: wrote ssh authorized keys file for user: core Mar 17 18:35:56.158296 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 18:35:56.158296 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 17 18:35:56.191387 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:35:56.355148 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 17 18:35:56.357183 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:35:56.357183 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 17 18:35:56.763543 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:35:56.851184 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 18:35:56.853224 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 17 18:35:57.274803 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 17 18:35:57.279911 systemd-networkd[728]: eth0: Gained IPv6LL
Mar 17 18:35:57.629160 ignition[842]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 17 18:35:57.629160 ignition[842]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:35:57.633089 ignition[842]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:35:57.657869 ignition[842]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 18:35:57.660424 ignition[842]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 18:35:57.660424 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:35:57.660424 ignition[842]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 18:35:57.660424 ignition[842]: INFO : files: files passed
Mar 17 18:35:57.660424 ignition[842]: INFO : Ignition finished successfully
Mar 17 18:35:57.683885 kernel: kauditd_printk_skb: 23 callbacks suppressed
Mar 17 18:35:57.683914 kernel: audit: type=1130 audit(1742236557.660:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.683926 kernel: audit: type=1130 audit(1742236557.671:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.683938 kernel: audit: type=1130 audit(1742236557.676:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.683948 kernel: audit: type=1131 audit(1742236557.676:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.659231 systemd[1]: Finished ignition-files.service.
Mar 17 18:35:57.661171 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Mar 17 18:35:57.666453 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Mar 17 18:35:57.688828 initrd-setup-root-after-ignition[866]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Mar 17 18:35:57.667010 systemd[1]: Starting ignition-quench.service...
Mar 17 18:35:57.691282 initrd-setup-root-after-ignition[868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 18:35:57.669126 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Mar 17 18:35:57.671789 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 18:35:57.671847 systemd[1]: Finished ignition-quench.service.
Mar 17 18:35:57.702968 kernel: audit: type=1130 audit(1742236557.694:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.702990 kernel: audit: type=1131 audit(1742236557.695:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.676415 systemd[1]: Reached target ignition-complete.target.
Mar 17 18:35:57.684344 systemd[1]: Starting initrd-parse-etc.service...
Mar 17 18:35:57.694285 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 18:35:57.694355 systemd[1]: Finished initrd-parse-etc.service.
Mar 17 18:35:57.695844 systemd[1]: Reached target initrd-fs.target.
Mar 17 18:35:57.702963 systemd[1]: Reached target initrd.target.
Mar 17 18:35:57.703780 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Mar 17 18:35:57.704327 systemd[1]: Starting dracut-pre-pivot.service...
Mar 17 18:35:57.712371 systemd[1]: Finished dracut-pre-pivot.service.
Mar 17 18:35:57.717442 kernel: audit: type=1130 audit(1742236557.711:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.712994 systemd[1]: Starting initrd-cleanup.service...
Mar 17 18:35:57.720059 systemd[1]: Stopped target nss-lookup.target.
Mar 17 18:35:57.721100 systemd[1]: Stopped target remote-cryptsetup.target.
Mar 17 18:35:57.722718 systemd[1]: Stopped target timers.target.
Mar 17 18:35:57.724314 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 18:35:57.730303 kernel: audit: type=1131 audit(1742236557.725:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.724400 systemd[1]: Stopped dracut-pre-pivot.service.
Mar 17 18:35:57.725962 systemd[1]: Stopped target initrd.target.
Mar 17 18:35:57.730390 systemd[1]: Stopped target basic.target.
Mar 17 18:35:57.731958 systemd[1]: Stopped target ignition-complete.target.
Mar 17 18:35:57.733541 systemd[1]: Stopped target ignition-diskful.target.
Mar 17 18:35:57.735130 systemd[1]: Stopped target initrd-root-device.target.
Mar 17 18:35:57.736873 systemd[1]: Stopped target remote-fs.target.
Mar 17 18:35:57.738488 systemd[1]: Stopped target remote-fs-pre.target.
Mar 17 18:35:57.740190 systemd[1]: Stopped target sysinit.target.
Mar 17 18:35:57.741755 systemd[1]: Stopped target local-fs.target.
Mar 17 18:35:57.743338 systemd[1]: Stopped target local-fs-pre.target.
Mar 17 18:35:57.744907 systemd[1]: Stopped target swap.target.
Mar 17 18:35:57.752334 kernel: audit: type=1131 audit(1742236557.747:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.746348 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 18:35:57.746435 systemd[1]: Stopped dracut-pre-mount.service.
Mar 17 18:35:57.758634 kernel: audit: type=1131 audit(1742236557.754:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.748054 systemd[1]: Stopped target cryptsetup.target.
Mar 17 18:35:57.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.752356 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 18:35:57.752443 systemd[1]: Stopped dracut-initqueue.service.
Mar 17 18:35:57.754270 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 18:35:57.754354 systemd[1]: Stopped ignition-fetch-offline.service.
Mar 17 18:35:57.758750 systemd[1]: Stopped target paths.target.
Mar 17 18:35:57.760288 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 18:35:57.764802 systemd[1]: Stopped systemd-ask-password-console.path.
Mar 17 18:35:57.765795 systemd[1]: Stopped target slices.target.
Mar 17 18:35:57.767557 systemd[1]: Stopped target sockets.target.
Mar 17 18:35:57.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.769208 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 18:35:57.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.769298 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Mar 17 18:35:57.775951 iscsid[739]: iscsid shutting down.
Mar 17 18:35:57.770937 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 18:35:57.771017 systemd[1]: Stopped ignition-files.service.
Mar 17 18:35:57.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.780915 ignition[883]: INFO : Ignition 2.14.0
Mar 17 18:35:57.780915 ignition[883]: INFO : Stage: umount
Mar 17 18:35:57.780915 ignition[883]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:35:57.780915 ignition[883]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:35:57.780915 ignition[883]: INFO : umount: umount passed
Mar 17 18:35:57.780915 ignition[883]: INFO : Ignition finished successfully
Mar 17 18:35:57.773004 systemd[1]: Stopping ignition-mount.service...
Mar 17 18:35:57.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.774361 systemd[1]: Stopping iscsid.service...
Mar 17 18:35:57.778047 systemd[1]: Stopping sysroot-boot.service...
Mar 17 18:35:57.779370 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 18:35:57.779586 systemd[1]: Stopped systemd-udev-trigger.service.
Mar 17 18:35:57.780682 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 18:35:57.780784 systemd[1]: Stopped dracut-pre-trigger.service.
Mar 17 18:35:57.795029 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 18:35:57.796450 systemd[1]: iscsid.service: Deactivated successfully.
Mar 17 18:35:57.797439 systemd[1]: Stopped iscsid.service.
Mar 17 18:35:57.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.799165 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 18:35:57.800123 systemd[1]: Stopped ignition-mount.service.
Mar 17 18:35:57.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.801979 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 18:35:57.802884 systemd[1]: Closed iscsid.socket.
Mar 17 18:35:57.804266 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 18:35:57.804305 systemd[1]: Stopped ignition-disks.service.
Mar 17 18:35:57.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.806705 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 18:35:57.806736 systemd[1]: Stopped ignition-kargs.service.
Mar 17 18:35:57.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.808607 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 18:35:57.809317 systemd[1]: Stopped ignition-setup.service.
Mar 17 18:35:57.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.811848 systemd[1]: Stopping iscsiuio.service...
Mar 17 18:35:57.813459 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 18:35:57.814457 systemd[1]: Finished initrd-cleanup.service.
Mar 17 18:35:57.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.816185 systemd[1]: iscsiuio.service: Deactivated successfully.
Mar 17 18:35:57.817121 systemd[1]: Stopped iscsiuio.service.
Mar 17 18:35:57.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.819322 systemd[1]: Stopped target network.target.
Mar 17 18:35:57.820872 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 18:35:57.820902 systemd[1]: Closed iscsiuio.socket.
Mar 17 18:35:57.823293 systemd[1]: Stopping systemd-networkd.service...
Mar 17 18:35:57.825042 systemd[1]: Stopping systemd-resolved.service...
Mar 17 18:35:57.835200 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 18:35:57.835302 systemd[1]: Stopped systemd-resolved.service.
Mar 17 18:35:57.835814 systemd-networkd[728]: eth0: DHCPv6 lease lost
Mar 17 18:35:57.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.840481 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 18:35:57.841813 systemd[1]: Stopped systemd-networkd.service.
Mar 17 18:35:57.843000 audit: BPF prog-id=6 op=UNLOAD
Mar 17 18:35:57.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.844576 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 18:35:57.844619 systemd[1]: Closed systemd-networkd.socket.
Mar 17 18:35:57.847000 audit: BPF prog-id=9 op=UNLOAD
Mar 17 18:35:57.848450 systemd[1]: Stopping network-cleanup.service...
Mar 17 18:35:57.850345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 18:35:57.850396 systemd[1]: Stopped parse-ip-for-networkd.service.
Mar 17 18:35:57.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.853932 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:35:57.853978 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:35:57.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.856864 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 18:35:57.856903 systemd[1]: Stopped systemd-modules-load.service.
Mar 17 18:35:57.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.859525 systemd[1]: Stopping systemd-udevd.service...
Mar 17 18:35:57.861208 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 18:35:57.865291 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 18:35:57.866663 systemd[1]: Stopped systemd-udevd.service.
Mar 17 18:35:57.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.868581 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 18:35:57.868690 systemd[1]: Stopped network-cleanup.service.
Mar 17 18:35:57.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.870787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 18:35:57.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.870821 systemd[1]: Closed systemd-udevd-control.socket.
Mar 17 18:35:57.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.872277 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 18:35:57.872303 systemd[1]: Closed systemd-udevd-kernel.socket.
Mar 17 18:35:57.872365 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 18:35:57.872396 systemd[1]: Stopped dracut-pre-udev.service.
Mar 17 18:35:57.872578 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 18:35:57.872608 systemd[1]: Stopped dracut-cmdline.service.
Mar 17 18:35:57.872732 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 18:35:57.872774 systemd[1]: Stopped dracut-cmdline-ask.service.
Mar 17 18:35:57.873719 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Mar 17 18:35:57.874097 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 18:35:57.874152 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Mar 17 18:35:57.876934 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 18:35:57.876969 systemd[1]: Stopped kmod-static-nodes.service.
Mar 17 18:35:57.878726 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 18:35:57.878818 systemd[1]: Stopped systemd-vconsole-setup.service.
Mar 17 18:35:57.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.879682 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 18:35:57.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:35:57.880075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 18:35:57.880145 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Mar 17 18:35:57.894888 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 18:35:57.894962 systemd[1]: Stopped sysroot-boot.service.
Mar 17 18:35:57.896028 systemd[1]: Reached target initrd-switch-root.target.
Mar 17 18:35:57.897642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 18:35:57.897679 systemd[1]: Stopped initrd-setup-root.service.
Mar 17 18:35:57.899187 systemd[1]: Starting initrd-switch-root.service...
Mar 17 18:35:57.915322 systemd[1]: Switching root.
Mar 17 18:35:57.934482 systemd-journald[199]: Journal stopped
Mar 17 18:36:01.075169 systemd-journald[199]: Received SIGTERM from PID 1 (systemd).
Mar 17 18:36:01.075219 kernel: SELinux: Class mctp_socket not defined in policy.
Mar 17 18:36:01.075235 kernel: SELinux: Class anon_inode not defined in policy.
Mar 17 18:36:01.075245 kernel: SELinux: the above unknown classes and permissions will be allowed
Mar 17 18:36:01.075254 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 18:36:01.075266 kernel: SELinux: policy capability open_perms=1
Mar 17 18:36:01.075276 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 18:36:01.075289 kernel: SELinux: policy capability always_check_network=0
Mar 17 18:36:01.075304 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 18:36:01.075314 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 18:36:01.076127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 18:36:01.076143 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 18:36:01.076153 systemd[1]: Successfully loaded SELinux policy in 37.908ms.
Mar 17 18:36:01.076167 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.946ms.
Mar 17 18:36:01.076179 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:36:01.076190 systemd[1]: Detected virtualization kvm.
Mar 17 18:36:01.076204 systemd[1]: Detected architecture x86-64.
Mar 17 18:36:01.076214 systemd[1]: Detected first boot.
Mar 17 18:36:01.076225 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:36:01.076236 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Mar 17 18:36:01.076246 systemd[1]: Populated /etc with preset unit settings.
Mar 17 18:36:01.076259 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:36:01.076272 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:36:01.076283 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:36:01.076295 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 18:36:01.076305 systemd[1]: Stopped initrd-switch-root.service.
Mar 17 18:36:01.076315 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:36:01.076326 systemd[1]: Created slice system-addon\x2dconfig.slice.
Mar 17 18:36:01.076337 systemd[1]: Created slice system-addon\x2drun.slice.
Mar 17 18:36:01.076348 systemd[1]: Created slice system-getty.slice.
Mar 17 18:36:01.076358 systemd[1]: Created slice system-modprobe.slice.
Mar 17 18:36:01.076369 systemd[1]: Created slice system-serial\x2dgetty.slice.
Mar 17 18:36:01.076379 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Mar 17 18:36:01.076390 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Mar 17 18:36:01.076400 systemd[1]: Created slice user.slice.
Mar 17 18:36:01.076410 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:36:01.076421 systemd[1]: Started systemd-ask-password-wall.path.
Mar 17 18:36:01.076432 systemd[1]: Set up automount boot.automount.
Mar 17 18:36:01.076443 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Mar 17 18:36:01.076454 systemd[1]: Stopped target initrd-switch-root.target.
Mar 17 18:36:01.076472 systemd[1]: Stopped target initrd-fs.target.
Mar 17 18:36:01.076483 systemd[1]: Stopped target initrd-root-fs.target.
Mar 17 18:36:01.076493 systemd[1]: Reached target integritysetup.target.
Mar 17 18:36:01.076504 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:36:01.076515 systemd[1]: Reached target remote-fs.target.
Mar 17 18:36:01.076525 systemd[1]: Reached target slices.target.
Mar 17 18:36:01.076537 systemd[1]: Reached target swap.target.
Mar 17 18:36:01.076548 systemd[1]: Reached target torcx.target.
Mar 17 18:36:01.076558 systemd[1]: Reached target veritysetup.target.
Mar 17 18:36:01.076568 systemd[1]: Listening on systemd-coredump.socket.
Mar 17 18:36:01.076578 systemd[1]: Listening on systemd-initctl.socket.
Mar 17 18:36:01.076589 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:36:01.076599 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:36:01.076610 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:36:01.076620 systemd[1]: Listening on systemd-userdbd.socket.
Mar 17 18:36:01.076632 systemd[1]: Mounting dev-hugepages.mount...
Mar 17 18:36:01.076643 systemd[1]: Mounting dev-mqueue.mount...
Mar 17 18:36:01.076654 systemd[1]: Mounting media.mount...
Mar 17 18:36:01.076664 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:01.076675 systemd[1]: Mounting sys-kernel-debug.mount...
Mar 17 18:36:01.076685 systemd[1]: Mounting sys-kernel-tracing.mount...
Mar 17 18:36:01.076696 systemd[1]: Mounting tmp.mount...
Mar 17 18:36:01.076706 systemd[1]: Starting flatcar-tmpfiles.service...
Mar 17 18:36:01.076716 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:36:01.076728 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:36:01.076739 systemd[1]: Starting modprobe@configfs.service...
Mar 17 18:36:01.076749 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:36:01.076776 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:36:01.076788 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:36:01.076798 systemd[1]: Starting modprobe@fuse.service...
Mar 17 18:36:01.076808 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:36:01.076818 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 18:36:01.076829 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 18:36:01.076841 systemd[1]: Stopped systemd-fsck-root.service.
Mar 17 18:36:01.076852 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 18:36:01.076862 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 18:36:01.076873 kernel: fuse: init (API version 7.34)
Mar 17 18:36:01.076883 systemd[1]: Stopped systemd-journald.service.
Mar 17 18:36:01.076893 kernel: loop: module loaded
Mar 17 18:36:01.076903 systemd[1]: Starting systemd-journald.service...
Mar 17 18:36:01.076913 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:36:01.076924 systemd[1]: Starting systemd-network-generator.service...
Mar 17 18:36:01.076935 systemd[1]: Starting systemd-remount-fs.service...
Mar 17 18:36:01.076946 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:36:01.076956 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 18:36:01.076966 systemd[1]: Stopped verity-setup.service.
Mar 17 18:36:01.076977 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:01.076988 systemd[1]: Mounted dev-hugepages.mount.
Mar 17 18:36:01.076997 systemd[1]: Mounted dev-mqueue.mount.
Mar 17 18:36:01.077008 systemd[1]: Mounted media.mount.
Mar 17 18:36:01.077018 systemd[1]: Mounted sys-kernel-debug.mount.
Mar 17 18:36:01.077030 systemd[1]: Mounted sys-kernel-tracing.mount.
Mar 17 18:36:01.077040 systemd[1]: Mounted tmp.mount.
Mar 17 18:36:01.077050 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:36:01.077060 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:36:01.077073 systemd-journald[993]: Journal started
Mar 17 18:36:01.077112 systemd-journald[993]: Runtime Journal (/run/log/journal/7cccb7ceb8484e09b42eee358e6ad024) is 6.0M, max 48.5M, 42.5M free.
Mar 17 18:35:57.991000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 18:35:58.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:35:58.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Mar 17 18:35:58.844000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:35:58.844000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:35:58.844000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:35:58.844000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:35:58.872000 audit[916]: AVC avc: denied { associate } for pid=916 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Mar 17 18:35:58.872000 audit[916]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001558ac a1=c0000dade0 a2=c0000e30c0 a3=32 items=0 ppid=899 pid=916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:58.872000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:35:58.874000 audit[916]: AVC avc: denied { associate } for pid=916 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Mar 17 18:35:58.874000 audit[916]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000155985 a2=1ed a3=0 items=2 ppid=899 pid=916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:35:58.874000 audit: CWD cwd="/"
Mar 17 18:35:58.874000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:58.874000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:35:58.874000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Mar 17 18:36:00.931000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:36:00.931000 audit: BPF prog-id=3 op=UNLOAD
Mar 17 18:36:00.931000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:36:00.932000 audit: BPF prog-id=14 op=LOAD
Mar 17 18:36:00.932000 audit: BPF prog-id=4 op=UNLOAD
Mar 17 18:36:00.932000 audit: BPF prog-id=5 op=UNLOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=15 op=LOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=16 op=LOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=17 op=LOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:36:00.933000 audit: BPF prog-id=14 op=UNLOAD
Mar 17 18:36:00.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:00.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:00.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:00.946000 audit: BPF prog-id=15 op=UNLOAD
Mar 17 18:36:01.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.043000 audit: BPF prog-id=18 op=LOAD
Mar 17 18:36:01.043000 audit: BPF prog-id=19 op=LOAD
Mar 17 18:36:01.043000 audit: BPF prog-id=20 op=LOAD
Mar 17 18:36:01.043000 audit: BPF prog-id=16 op=UNLOAD
Mar 17 18:36:01.043000 audit: BPF prog-id=17 op=UNLOAD
Mar 17 18:36:01.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.072000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Mar 17 18:36:01.072000 audit[993]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe0d99f140 a2=4000 a3=7ffe0d99f1dc items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:36:01.072000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Mar 17 18:36:01.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:00.931185 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 18:36:01.079957 systemd[1]: Started systemd-journald.service.
Mar 17 18:36:01.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Mar 17 18:35:58.872032 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:36:00.931196 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:35:58.872253 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:36:00.934573 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:35:58.872269 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:36:01.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.080085 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:35:58.872295 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:36:01.080249 systemd[1]: Finished modprobe@configfs.service. 
Mar 17 18:35:58.872304 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:36:01.081318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:35:58.872329 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:36:01.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.081488 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:35:58.872340 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:35:58.872532 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:35:58.872570 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:36:01.082581 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 17 18:35:58.872581 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:35:58.872997 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:36:01.082810 systemd[1]: Finished modprobe@drm.service. Mar 17 18:36:01.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:35:58.873032 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:35:58.873052 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:35:58.873065 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:36:01.083902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 17 18:35:58.873084 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:36:01.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.084048 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:35:58.873095 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:35:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:36:00.666679 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:36:00.666946 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:36:01.085198 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Mar 17 18:36:00.667046 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:36:00.667205 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:36:01.085336 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:36:01.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:00.667251 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:36:00.667307 /usr/lib/systemd/system-generators/torcx-generator[916]: time="2025-03-17T18:36:00Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:36:01.086388 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:36:01.086573 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:36:01.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.087731 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:36:01.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.088886 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:36:01.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.090134 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:36:01.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.091371 systemd[1]: Reached target network-pre.target. Mar 17 18:36:01.093347 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:36:01.095192 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:36:01.095975 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:36:01.097572 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:36:01.099476 systemd[1]: Starting systemd-journal-flush.service... 
Mar 17 18:36:01.100343 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:36:01.101523 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:36:01.102823 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:36:01.103812 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:36:01.106382 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:36:01.109916 systemd-journald[993]: Time spent on flushing to /var/log/journal/7cccb7ceb8484e09b42eee358e6ad024 is 17.944ms for 1105 entries. Mar 17 18:36:01.109916 systemd-journald[993]: System Journal (/var/log/journal/7cccb7ceb8484e09b42eee358e6ad024) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:36:01.147005 systemd-journald[993]: Received client request to flush runtime journal. Mar 17 18:36:01.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.109646 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:36:01.112040 systemd[1]: Mounted sys-kernel-config.mount. 
Mar 17 18:36:01.147505 udevadm[1020]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:36:01.119214 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:36:01.120399 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:36:01.124105 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:36:01.126547 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:36:01.127656 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:36:01.128781 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:36:01.130895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:36:01.148185 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:36:01.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.152202 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:36:01.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.530336 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:36:01.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.531000 audit: BPF prog-id=21 op=LOAD Mar 17 18:36:01.531000 audit: BPF prog-id=22 op=LOAD Mar 17 18:36:01.531000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:36:01.531000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:36:01.532824 systemd[1]: Starting systemd-udevd.service... 
Mar 17 18:36:01.548511 systemd-udevd[1024]: Using default interface naming scheme 'v252'. Mar 17 18:36:01.560562 systemd[1]: Started systemd-udevd.service. Mar 17 18:36:01.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.565000 audit: BPF prog-id=23 op=LOAD Mar 17 18:36:01.566580 systemd[1]: Starting systemd-networkd.service... Mar 17 18:36:01.569000 audit: BPF prog-id=24 op=LOAD Mar 17 18:36:01.569000 audit: BPF prog-id=25 op=LOAD Mar 17 18:36:01.569000 audit: BPF prog-id=26 op=LOAD Mar 17 18:36:01.571130 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:36:01.582756 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Mar 17 18:36:01.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.596890 systemd[1]: Started systemd-userdbd.service. Mar 17 18:36:01.610440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:36:01.629798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 18:36:01.633807 kernel: ACPI: button: Power Button [PWRF] Mar 17 18:36:01.639907 systemd-networkd[1033]: lo: Link UP Mar 17 18:36:01.639920 systemd-networkd[1033]: lo: Gained carrier Mar 17 18:36:01.640314 systemd-networkd[1033]: Enumeration completed Mar 17 18:36:01.640419 systemd-networkd[1033]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:36:01.640421 systemd[1]: Started systemd-networkd.service. 
Mar 17 18:36:01.641600 systemd-networkd[1033]: eth0: Link UP Mar 17 18:36:01.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:36:01.641609 systemd-networkd[1033]: eth0: Gained carrier Mar 17 18:36:01.648000 audit[1051]: AVC avc: denied { confidentiality } for pid=1051 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Mar 17 18:36:01.648000 audit[1051]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e87b7bc670 a1=338ac a2=7f01551b2bc5 a3=5 items=110 ppid=1024 pid=1051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:36:01.648000 audit: CWD cwd="/" Mar 17 18:36:01.648000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=1 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=2 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=3 name=(null) inode=9879 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=4 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=5 name=(null) inode=9880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=6 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=7 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=8 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=9 name=(null) inode=9882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=10 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=11 name=(null) inode=9883 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=12 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=13 name=(null) inode=9884 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=14 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=15 name=(null) inode=9885 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=16 name=(null) inode=9881 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=17 name=(null) inode=9886 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=18 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=19 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=20 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=21 name=(null) inode=9888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=22 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Mar 17 18:36:01.648000 audit: PATH item=23 name=(null) inode=9889 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=24 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=25 name=(null) inode=9890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=26 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=27 name=(null) inode=9891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=28 name=(null) inode=9887 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=29 name=(null) inode=9892 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=30 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=31 name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:36:01.648000 audit: PATH item=32 
name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=33 name=(null) inode=9894 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=34 name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=35 name=(null) inode=9895 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=36 name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=37 name=(null) inode=9896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.653930 systemd-networkd[1033]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:36:01.648000 audit: PATH item=38 name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=39 name=(null) inode=9897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=40 name=(null) inode=9893 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=41 name=(null) inode=9898 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=42 name=(null) inode=9878 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=43 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=44 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=45 name=(null) inode=9900 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=46 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=47 name=(null) inode=9901 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=48 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=49 name=(null) inode=9902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=50 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=51 name=(null) inode=9903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=52 name=(null) inode=9899 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=53 name=(null) inode=9904 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=55 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=56 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=57 name=(null) inode=9906 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=58 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=59 name=(null) inode=9907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=60 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=61 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=62 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=63 name=(null) inode=9909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=64 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=65 name=(null) inode=9910 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=66 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=67 name=(null) inode=9911 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=68 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=69 name=(null) inode=9912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=70 name=(null) inode=9908 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=71 name=(null) inode=9913 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=72 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=73 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=74 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=75 name=(null) inode=9915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=76 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=77 name=(null) inode=9916 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=78 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=79 name=(null) inode=9917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=80 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=81 name=(null) inode=9918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=82 name=(null) inode=9914 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=83 name=(null) inode=9919 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=84 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=85 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=86 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=87 name=(null) inode=9921 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=88 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=89 name=(null) inode=9922 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=90 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=91 name=(null) inode=9923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=92 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=93 name=(null) inode=9924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=94 name=(null) inode=9920 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=95 name=(null) inode=9925 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=96 name=(null) inode=9905 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=97 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=98 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=99 name=(null) inode=9927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=100 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=101 name=(null) inode=9928 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=102 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=103 name=(null) inode=9929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=104 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=105 name=(null) inode=9930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=106 name=(null) inode=9926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=107 name=(null) inode=9931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PATH item=109 name=(null) inode=9932 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:36:01.648000 audit: PROCTITLE proctitle="(udev-worker)"
Mar 17 18:36:01.662792 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 18:36:01.667281 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 18:36:01.668660 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 18:36:01.668834 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 18:36:01.691798 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 18:36:01.735794 kernel: kvm: Nested Virtualization enabled
Mar 17 18:36:01.735991 kernel: SVM: kvm: Nested Paging enabled
Mar 17 18:36:01.736027 kernel: SVM: Virtual VMLOAD VMSAVE supported
Mar 17 18:36:01.736055 kernel: SVM: Virtual GIF supported
Mar 17 18:36:01.753799 kernel: EDAC MC: Ver: 3.0.0
Mar 17 18:36:01.786177 systemd[1]: Finished systemd-udev-settle.service.
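The audit PATH records above alternate PARENT directories (mode=040750) with newly created tracefs entries (mode=0100640 or 0100440). The mode field is an ordinary octal st_mode value, so it can be decoded with Python's stat module; this is an annotation sketch, and the helper name is ours, not part of the log:

```python
import stat

def parse_mode(mode_str):
    """Decode an audit PATH mode= field (octal string) into (file type, permission bits)."""
    mode = int(mode_str, 8)
    if stat.S_ISDIR(mode):
        kind = "dir"
    elif stat.S_ISREG(mode):
        kind = "file"
    else:
        kind = "other"
    return kind, oct(stat.S_IMODE(mode))

print(parse_mode("040750"))   # PARENT directories above → ('dir', '0o750')
print(parse_mode("0100640"))  # created files above → ('file', '0o640')
```

The 0750/0640/0440 permission bits match what the kernel gives newly created tracefs instance files.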
Mar 17 18:36:01.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.791949 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:36:01.800495 lvm[1057]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:36:01.827858 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:36:01.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.828987 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:36:01.830902 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:36:01.834263 lvm[1058]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:36:01.859827 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:36:01.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.860942 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:36:01.861877 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:36:01.861904 systemd[1]: Reached target local-fs.target.
Mar 17 18:36:01.862747 systemd[1]: Reached target machines.target.
Mar 17 18:36:01.864816 systemd[1]: Starting ldconfig.service...
Mar 17 18:36:01.866055 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:36:01.866092 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:01.866983 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:36:01.868841 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:36:01.870853 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:36:01.873674 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:36:01.875284 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1060 (bootctl)
Mar 17 18:36:01.876402 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:36:01.883840 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:36:01.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.884630 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:36:01.887936 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:36:01.888138 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:36:01.899797 kernel: loop0: detected capacity change from 0 to 218376
Mar 17 18:36:01.912844 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:36:01.912844 systemd-fsck[1068]: /dev/vda1: 789 files, 119299/258078 clusters
Mar 17 18:36:01.914465 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:36:01.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:01.917385 systemd[1]: Mounting boot.mount...
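fsck.fat reports usage as used/total clusters. As a quick annotation, the numbers it printed for /dev/vda1 above correspond to roughly 46% utilization of the EFI system partition:

```python
# Cluster counts taken from the fsck.fat summary for /dev/vda1 above.
used, total = 119299, 258078
print(f"{used}/{total} clusters = {used / total:.1%} used")  # → 46.2% used
```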
Mar 17 18:36:02.097294 systemd[1]: Mounted boot.mount.
Mar 17 18:36:02.108795 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:36:02.111441 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:36:02.111981 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:36:02.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.114961 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:36:02.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.122782 kernel: loop1: detected capacity change from 0 to 218376
Mar 17 18:36:02.126375 (sd-sysext)[1073]: Using extensions 'kubernetes'.
Mar 17 18:36:02.126706 (sd-sysext)[1073]: Merged extensions into '/usr'.
Mar 17 18:36:02.142116 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.143839 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:36:02.144896 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.145878 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:36:02.147955 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:36:02.149672 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:36:02.150603 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.150709 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
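The loop0/loop1 capacity changes above (the 'kubernetes' sysext image being attached) are reported in 512-byte sectors, the unit the loop driver uses. Converting gives the image size:

```python
# Sector count from "loop1: detected capacity change from 0 to 218376" above;
# block-layer capacity is counted in 512-byte sectors.
sectors = 218376
size_mib = sectors * 512 / 2**20
print(f"{size_mib:.1f} MiB")  # → 106.6 MiB
```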
Mar 17 18:36:02.150826 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.153098 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:36:02.154323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:36:02.154447 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:36:02.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.155799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:36:02.155901 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:36:02.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.157391 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:36:02.157503 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:36:02.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.158881 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:36:02.158972 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.159898 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:36:02.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.161984 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:36:02.163943 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:36:02.168114 systemd[1]: Reloading.
Mar 17 18:36:02.173862 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:36:02.174113 ldconfig[1059]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:36:02.174521 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:36:02.176806 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
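The systemd-tmpfiles warnings above all share one shape: config file, line number, duplicated path. A small sketch of pulling those fields out of such a message (the regex and group names are our own, not part of any tool):

```python
import re

# One of the systemd-tmpfiles warnings from the log above.
msg = '/usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.'
m = re.match(r'(?P<file>[^:]+):(?P<line>\d+): Duplicate line for path "(?P<path>[^"]+)"', msg)
print(m.group("file"), m.group("line"), m.group("path"))
```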
Mar 17 18:36:02.210590 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-03-17T18:36:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:36:02.210939 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2025-03-17T18:36:02Z" level=info msg="torcx already run"
Mar 17 18:36:02.273271 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:36:02.273287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:36:02.291237 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
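The deprecation warning above asks for CPUShares= (cgroup v1 scale, 2..262144, default 1024) to be replaced with CPUWeight= (cgroup v2 scale, 1..10000, default 100). The scales differ, so the value needs converting; a sketch of the proportional mapping, which is an assumption on our part rather than something stated in the log:

```python
def shares_to_weight(shares):
    """Map a legacy CPUShares= value (default 1024) onto the CPUWeight=
    scale (default 100) proportionally, clamped to CPUWeight's 1..10000 range."""
    return max(1, min(10000, round(shares * 100 / 1024)))

print(shares_to_weight(1024))  # default shares → default weight 100
print(shares_to_weight(512))   # half priority → 50
```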
Mar 17 18:36:02.344000 audit: BPF prog-id=27 op=LOAD
Mar 17 18:36:02.345000 audit: BPF prog-id=24 op=UNLOAD
Mar 17 18:36:02.345000 audit: BPF prog-id=28 op=LOAD
Mar 17 18:36:02.345000 audit: BPF prog-id=29 op=LOAD
Mar 17 18:36:02.345000 audit: BPF prog-id=25 op=UNLOAD
Mar 17 18:36:02.345000 audit: BPF prog-id=26 op=UNLOAD
Mar 17 18:36:02.346000 audit: BPF prog-id=30 op=LOAD
Mar 17 18:36:02.346000 audit: BPF prog-id=31 op=LOAD
Mar 17 18:36:02.346000 audit: BPF prog-id=21 op=UNLOAD
Mar 17 18:36:02.346000 audit: BPF prog-id=22 op=UNLOAD
Mar 17 18:36:02.347000 audit: BPF prog-id=32 op=LOAD
Mar 17 18:36:02.347000 audit: BPF prog-id=23 op=UNLOAD
Mar 17 18:36:02.348000 audit: BPF prog-id=33 op=LOAD
Mar 17 18:36:02.348000 audit: BPF prog-id=18 op=UNLOAD
Mar 17 18:36:02.349000 audit: BPF prog-id=34 op=LOAD
Mar 17 18:36:02.349000 audit: BPF prog-id=35 op=LOAD
Mar 17 18:36:02.349000 audit: BPF prog-id=19 op=UNLOAD
Mar 17 18:36:02.349000 audit: BPF prog-id=20 op=UNLOAD
Mar 17 18:36:02.351526 systemd[1]: Finished ldconfig.service.
Mar 17 18:36:02.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.353559 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:36:02.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.358133 systemd[1]: Starting audit-rules.service...
Mar 17 18:36:02.360471 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:36:02.362576 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:36:02.363000 audit: BPF prog-id=36 op=LOAD
Mar 17 18:36:02.364739 systemd[1]: Starting systemd-resolved.service...
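The burst of BPF records above accompanies the systemd "Reloading." step, with each newly loaded program paired against an unload of a superseded program id. A quick tally sketch over records of that shape (the sample list here is a subset of the records above):

```python
import re
from collections import Counter

# A few BPF audit records in the form seen above.
records = [
    "audit: BPF prog-id=27 op=LOAD",
    "audit: BPF prog-id=24 op=UNLOAD",
    "audit: BPF prog-id=28 op=LOAD",
    "audit: BPF prog-id=29 op=LOAD",
    "audit: BPF prog-id=25 op=UNLOAD",
    "audit: BPF prog-id=26 op=UNLOAD",
]
ops = Counter(re.search(r"op=(\w+)", r).group(1) for r in records)
print(dict(ops))  # → {'LOAD': 3, 'UNLOAD': 3}
```

Over the full burst the LOAD and UNLOAD counts balance (nine each), consistent with programs being replaced rather than accumulated.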
Mar 17 18:36:02.365000 audit: BPF prog-id=37 op=LOAD
Mar 17 18:36:02.366783 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:36:02.368465 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:36:02.369691 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:36:02.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.372378 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:36:02.372000 audit[1154]: SYSTEM_BOOT pid=1154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.377664 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.378004 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.379637 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:36:02.381653 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:36:02.383615 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:36:02.384390 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.384605 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:02.384826 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:36:02.384992 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.386798 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:36:02.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.388395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:36:02.388547 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:36:02.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.389964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:36:02.390101 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:36:02.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.391629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:36:02.391885 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:36:02.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.394222 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:36:02.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:36:02.396321 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:36:02.396464 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.398194 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:36:02.401243 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.401498 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.403001 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:36:02.405150 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:36:02.406000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:36:02.406000 audit[1169]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8deb0fb0 a2=420 a3=0 items=0 ppid=1143 pid=1169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:36:02.406000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:36:02.407480 augenrules[1169]: No rules
Mar 17 18:36:02.407790 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:36:02.421203 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.421319 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:02.421470 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:36:02.421622 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.423441 systemd[1]: Finished audit-rules.service.
Mar 17 18:36:02.424937 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:36:02.426208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:36:02.426333 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:36:02.427555 systemd-resolved[1149]: Positive Trust Anchors:
Mar 17 18:36:02.427566 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:36:02.427572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
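The PROCTITLE value in the audit record above is the process command line, hex-encoded with NUL bytes separating argv entries. Decoding the exact string from the record recovers the auditctl invocation that loaded the rules file:

```python
# Hex string copied verbatim from the audit PROCTITLE record above.
raw = bytes.fromhex(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
argv = raw.decode().split("\0")  # NUL bytes separate argv entries
print(argv)  # → ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```

This matches the CONFIG_CHANGE op=add_rule record and the "No rules" message from augenrules next to it.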
Mar 17 18:36:02.427591 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:36:02.427692 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:36:02.428992 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:36:02.429117 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:36:02.430398 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:36:02.430523 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.433842 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.434128 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.435551 systemd-resolved[1149]: Defaulting to hostname 'linux'.
Mar 17 18:36:02.435578 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:36:02.437492 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:36:02.439341 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:36:02.439746 systemd-timesyncd[1151]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 18:36:02.439816 systemd-timesyncd[1151]: Initial clock synchronization to Mon 2025-03-17 18:36:02.683705 UTC.
Mar 17 18:36:02.441030 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:36:02.441827 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.441923 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:02.442912 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:36:02.443961 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:36:02.444106 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 18:36:02.445342 systemd[1]: Started systemd-resolved.service.
Mar 17 18:36:02.446520 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:36:02.448211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:36:02.448333 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:36:02.449442 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:36:02.449544 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:36:02.450618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:36:02.450717 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:36:02.451868 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:36:02.451966 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:36:02.453374 systemd[1]: Reached target network.target.
Mar 17 18:36:02.454196 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:36:02.455009 systemd[1]: Reached target time-set.target.
Mar 17 18:36:02.455844 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:36:02.455865 systemd[1]: Reached target sysinit.target.
Mar 17 18:36:02.456702 systemd[1]: Started motdgen.path.
Mar 17 18:36:02.457394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:36:02.458585 systemd[1]: Started logrotate.timer.
Mar 17 18:36:02.459379 systemd[1]: Started mdadm.timer.
Mar 17 18:36:02.460048 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:36:02.460904 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:36:02.460925 systemd[1]: Reached target paths.target.
Mar 17 18:36:02.461653 systemd[1]: Reached target timers.target.
Mar 17 18:36:02.462621 systemd[1]: Listening on dbus.socket.
Mar 17 18:36:02.464160 systemd[1]: Starting docker.socket...
Mar 17 18:36:02.466631 systemd[1]: Listening on sshd.socket.
Mar 17 18:36:02.467460 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:02.467512 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.468024 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:36:02.468958 systemd[1]: Listening on docker.socket.
Mar 17 18:36:02.470359 systemd[1]: Reached target sockets.target.
Mar 17 18:36:02.471156 systemd[1]: Reached target basic.target.
Mar 17 18:36:02.471919 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.471935 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:36:02.472684 systemd[1]: Starting containerd.service...
Mar 17 18:36:02.474254 systemd[1]: Starting dbus.service...
Mar 17 18:36:02.475746 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:36:02.477471 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:36:02.478337 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:36:02.479508 jq[1185]: false
Mar 17 18:36:02.479375 systemd[1]: Starting motdgen.service...
Mar 17 18:36:02.481016 systemd[1]: Starting prepare-helm.service...
Mar 17 18:36:02.482884 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:36:02.484831 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:36:02.488081 systemd[1]: Starting systemd-logind.service...
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found loop1
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found sr0
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda1
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda2
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda3
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found usr
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda4
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda6
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda7
Mar 17 18:36:02.489804 extend-filesystems[1186]: Found vda9
Mar 17 18:36:02.489804 extend-filesystems[1186]: Checking size of /dev/vda9
Mar 17 18:36:02.488861 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:36:02.512079 dbus-daemon[1184]: [system] SELinux support is enabled
Mar 17 18:36:02.488939 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:36:02.544863 update_engine[1202]: I0317 18:36:02.542160 1202 main.cc:92] Flatcar Update Engine starting
Mar 17 18:36:02.489384 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:36:02.545099 extend-filesystems[1186]: Resized partition /dev/vda9
Mar 17 18:36:02.551107 jq[1204]: true
Mar 17 18:36:02.490120 systemd[1]: Starting update-engine.service...
Mar 17 18:36:02.551319 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:36:02.552442 update_engine[1202]: I0317 18:36:02.545207 1202 update_check_scheduler.cc:74] Next update check in 9m41s
Mar 17 18:36:02.491976 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:36:02.552605 tar[1207]: linux-amd64/LICENSE
Mar 17 18:36:02.552605 tar[1207]: linux-amd64/helm
Mar 17 18:36:02.494133 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:36:02.553435 jq[1208]: true
Mar 17 18:36:02.494319 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:36:02.553652 env[1210]: time="2025-03-17T18:36:02.538058006Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:36:02.495273 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:36:02.495461 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:36:02.496576 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:36:02.496786 systemd[1]: Finished motdgen.service.
Mar 17 18:36:02.512874 systemd[1]: Started dbus.service.
Mar 17 18:36:02.516625 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:36:02.516650 systemd[1]: Reached target system-config.target.
Mar 17 18:36:02.517645 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:36:02.517664 systemd[1]: Reached target user-config.target.
Mar 17 18:36:02.544269 systemd-logind[1196]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 18:36:02.544290 systemd-logind[1196]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 18:36:02.544559 systemd-logind[1196]: New seat seat0.
Mar 17 18:36:02.545171 systemd[1]: Started update-engine.service.
Mar 17 18:36:02.548802 systemd[1]: Started locksmithd.service.
Mar 17 18:36:02.550260 systemd[1]: Started systemd-logind.service.
Mar 17 18:36:02.587885 env[1210]: time="2025-03-17T18:36:02.587837492Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:36:02.588051 env[1210]: time="2025-03-17T18:36:02.588022720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.594975 env[1210]: time="2025-03-17T18:36:02.594935671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595022 env[1210]: time="2025-03-17T18:36:02.594976678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595228 env[1210]: time="2025-03-17T18:36:02.595200828Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595228 env[1210]: time="2025-03-17T18:36:02.595224683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595298 env[1210]: time="2025-03-17T18:36:02.595236746Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:36:02.595298 env[1210]: time="2025-03-17T18:36:02.595245041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595335 env[1210]: time="2025-03-17T18:36:02.595310284Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595520 env[1210]: time="2025-03-17T18:36:02.595499819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595624 env[1210]: time="2025-03-17T18:36:02.595603153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:36:02.595624 env[1210]: time="2025-03-17T18:36:02.595620485Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:36:02.595683 env[1210]: time="2025-03-17T18:36:02.595659218Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:36:02.595683 env[1210]: time="2025-03-17T18:36:02.595669277Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:36:02.635795 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 18:36:02.755344 locksmithd[1239]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:36:02.761791 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 18:36:02.858640 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:36:02.858640 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:36:02.858640 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 18:36:02.866080 extend-filesystems[1186]: Resized filesystem in /dev/vda9
Mar 17 18:36:02.867884 bash[1234]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.858871621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.858923549Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.858935451Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.858985585Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859009991Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859026441Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859044766Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859057159Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859069652Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859087466Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859103356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859116039Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859279496Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:36:02.867971 env[1210]: time="2025-03-17T18:36:02.859352112Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:36:02.859230 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859600368Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859625254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859637106Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859678825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859690987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859704994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859717688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859731784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859742294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859751711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859775496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859787789Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859918985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859932931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.868346 env[1210]: time="2025-03-17T18:36:02.859946346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.859398 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:36:02.868681 env[1210]: time="2025-03-17T18:36:02.859960102Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:36:02.868681 env[1210]: time="2025-03-17T18:36:02.859973798Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:36:02.868681 env[1210]: time="2025-03-17T18:36:02.859983997Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:36:02.868681 env[1210]: time="2025-03-17T18:36:02.860003503Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:36:02.868681 env[1210]: time="2025-03-17T18:36:02.860042497Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:36:02.863093 systemd[1]: Started containerd.service.
Mar 17 18:36:02.865569 systemd[1]: Finished update-ssh-keys-after-ignition.service.
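Editor's note: the EXT4 and resize2fs records above show an online resize of /dev/vda9 from 553472 to 1864699 blocks. With the 4 KiB block size resize2fs reports, that is roughly 2.1 GiB growing to 7.1 GiB; a quick arithmetic check:

```python
# Block counts are taken from the kernel/resize2fs messages above;
# resize2fs reports 4 KiB ("4k") ext4 blocks on /dev/vda9.
BLOCK_SIZE = 4096
before_bytes = 553472 * BLOCK_SIZE
after_bytes = 1864699 * BLOCK_SIZE
print(f"{before_bytes / 2**30:.2f} GiB -> {after_bytes / 2**30:.2f} GiB")
# → 2.11 GiB -> 7.11 GiB
```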
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.860250226Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.860300861Z" level=info msg="Connect containerd service"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.860333352Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.860832007Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861038574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861071837Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861109798Z" level=info msg="containerd successfully booted in 0.323929s"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861097174Z" level=info msg="Start subscribing containerd event"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861157507Z" level=info msg="Start recovering state"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861228049Z" level=info msg="Start event monitor"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861359356Z" level=info msg="Start snapshots syncer"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861371659Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:36:02.868933 env[1210]: time="2025-03-17T18:36:02.861378381Z" level=info msg="Start streaming server"
Mar 17 18:36:03.058299 tar[1207]: linux-amd64/README.md
Mar 17 18:36:03.062184 systemd[1]: Finished prepare-helm.service.
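Editor's note: containerd reports booting in 0.323929s above (a figure it measures internally). A similar number can be approximated from the log's own record timestamps, e.g. the "starting containerd" and final "serving..." records; a sketch, truncating the nine-digit nanosecond timestamps to microseconds because `datetime.fromisoformat` does not accept nanosecond precision:

```python
from datetime import datetime

# Timestamps truncated to microseconds from the records above:
# "starting containerd" at 18:36:02.538058006Z,
# "serving... /run/containerd/containerd.sock" at 18:36:02.861071837Z.
start = datetime.fromisoformat("2025-03-17T18:36:02.538058+00:00")
serving = datetime.fromisoformat("2025-03-17T18:36:02.861071+00:00")
print((serving - start).total_seconds())  # roughly 0.32 s, close to the reported boot time
```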
Mar 17 18:36:03.297620 systemd-networkd[1033]: eth0: Gained IPv6LL
Mar 17 18:36:03.299367 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:36:03.300859 systemd[1]: Reached target network-online.target.
Mar 17 18:36:03.303432 systemd[1]: Starting kubelet.service...
Mar 17 18:36:03.572005 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:36:03.590192 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:36:03.592574 systemd[1]: Starting issuegen.service...
Mar 17 18:36:03.597497 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:36:03.597664 systemd[1]: Finished issuegen.service.
Mar 17 18:36:03.599887 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:36:03.605247 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:36:03.607633 systemd[1]: Started getty@tty1.service.
Mar 17 18:36:03.609563 systemd[1]: Started serial-getty@ttyS0.service.
Mar 17 18:36:03.610652 systemd[1]: Reached target getty.target.
Mar 17 18:36:03.940204 systemd[1]: Started kubelet.service.
Mar 17 18:36:03.941580 systemd[1]: Reached target multi-user.target.
Mar 17 18:36:03.944149 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:36:03.953374 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:36:03.953548 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:36:03.954916 systemd[1]: Startup finished in 727ms (kernel) + 5.201s (initrd) + 6.002s (userspace) = 11.932s.
Mar 17 18:36:04.366683 kubelet[1266]: E0317 18:36:04.366566 1266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:36:04.368551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:36:04.368685 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:36:12.011595 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:36:12.012675 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:58150.service.
Mar 17 18:36:12.045704 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 58150 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:12.047132 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.054492 systemd[1]: Created slice user-500.slice.
Mar 17 18:36:12.055517 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:36:12.057578 systemd-logind[1196]: New session 1 of user core.
Mar 17 18:36:12.065505 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:36:12.066695 systemd[1]: Starting user@500.service...
Mar 17 18:36:12.070289 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.155672 systemd[1278]: Queued start job for default target default.target.
Mar 17 18:36:12.156219 systemd[1278]: Reached target paths.target.
Mar 17 18:36:12.156239 systemd[1278]: Reached target sockets.target.
Mar 17 18:36:12.156251 systemd[1278]: Reached target timers.target.
Mar 17 18:36:12.156262 systemd[1278]: Reached target basic.target.
Mar 17 18:36:12.156297 systemd[1278]: Reached target default.target.
Mar 17 18:36:12.156320 systemd[1278]: Startup finished in 80ms.
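Editor's note: the kubelet exit above is the expected state on a node that has not yet been bootstrapped: /var/lib/kubelet/config.yaml is normally written by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears (the restart counter shows up later in the log). For illustration only, a minimal KubeletConfiguration of the kind kubeadm generates might look like the following; the field values are assumptions, not taken from this log:

```yaml
# Hypothetical /var/lib/kubelet/config.yaml -- illustrative fields only;
# kubeadm writes the real file during cluster bootstrap.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```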
Mar 17 18:36:12.156392 systemd[1]: Started user@500.service.
Mar 17 18:36:12.157352 systemd[1]: Started session-1.scope.
Mar 17 18:36:12.208163 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:58154.service.
Mar 17 18:36:12.239001 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 58154 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:12.240548 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.244067 systemd-logind[1196]: New session 2 of user core.
Mar 17 18:36:12.245179 systemd[1]: Started session-2.scope.
Mar 17 18:36:12.298242 sshd[1287]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:12.301005 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:58154.service: Deactivated successfully.
Mar 17 18:36:12.301534 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:36:12.302009 systemd-logind[1196]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:36:12.303205 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:58168.service.
Mar 17 18:36:12.304029 systemd-logind[1196]: Removed session 2.
Mar 17 18:36:12.331684 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 58168 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:12.332615 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.335954 systemd-logind[1196]: New session 3 of user core.
Mar 17 18:36:12.336975 systemd[1]: Started session-3.scope.
Mar 17 18:36:12.386645 sshd[1293]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:12.389579 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:58168.service: Deactivated successfully.
Mar 17 18:36:12.390257 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:36:12.390876 systemd-logind[1196]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:36:12.392276 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:58180.service.
Mar 17 18:36:12.392914 systemd-logind[1196]: Removed session 3.
Mar 17 18:36:12.421028 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 58180 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:12.421960 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.425053 systemd-logind[1196]: New session 4 of user core.
Mar 17 18:36:12.425804 systemd[1]: Started session-4.scope.
Mar 17 18:36:12.478326 sshd[1299]: pam_unix(sshd:session): session closed for user core
Mar 17 18:36:12.481286 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:58196.service.
Mar 17 18:36:12.481751 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:58180.service: Deactivated successfully.
Mar 17 18:36:12.482305 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:36:12.482701 systemd-logind[1196]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:36:12.483349 systemd-logind[1196]: Removed session 4.
Mar 17 18:36:12.512632 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:36:12.513737 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:36:12.517137 systemd-logind[1196]: New session 5 of user core.
Mar 17 18:36:12.517910 systemd[1]: Started session-5.scope.
Mar 17 18:36:12.574196 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:36:12.574389 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:36:12.597752 systemd[1]: Starting docker.service...
Mar 17 18:36:12.635327 env[1322]: time="2025-03-17T18:36:12.635261916Z" level=info msg="Starting up"
Mar 17 18:36:12.636635 env[1322]: time="2025-03-17T18:36:12.636587122Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:36:12.636635 env[1322]: time="2025-03-17T18:36:12.636616584Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:36:12.636712 env[1322]: time="2025-03-17T18:36:12.636640951Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:36:12.636712 env[1322]: time="2025-03-17T18:36:12.636651496Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:36:12.638009 env[1322]: time="2025-03-17T18:36:12.637984498Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Mar 17 18:36:12.638073 env[1322]: time="2025-03-17T18:36:12.638006154Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Mar 17 18:36:12.638073 env[1322]: time="2025-03-17T18:36:12.638032352Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Mar 17 18:36:12.638073 env[1322]: time="2025-03-17T18:36:12.638048184Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Mar 17 18:36:13.509440 env[1322]: time="2025-03-17T18:36:13.509368485Z" level=info msg="Loading containers: start."
Mar 17 18:36:13.633810 kernel: Initializing XFRM netlink socket
Mar 17 18:36:13.661054 env[1322]: time="2025-03-17T18:36:13.661006309Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 17 18:36:13.710789 systemd-networkd[1033]: docker0: Link UP
Mar 17 18:36:13.726102 env[1322]: time="2025-03-17T18:36:13.726048810Z" level=info msg="Loading containers: done."
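Editor's note: the Docker daemon above assigns its default docker0 bridge the 172.17.0.0/16 subnet. A quick check of how much container address space that provides:

```python
import ipaddress

# Default docker0 bridge subnet reported by the daemon above.
bridge = ipaddress.ip_network("172.17.0.0/16")
# Subtract the network and broadcast addresses for usable host addresses.
print(bridge.num_addresses - 2)
# → 65534
```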
Mar 17 18:36:13.741952 env[1322]: time="2025-03-17T18:36:13.741893583Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 17 18:36:13.742137 env[1322]: time="2025-03-17T18:36:13.742093203Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Mar 17 18:36:13.742209 env[1322]: time="2025-03-17T18:36:13.742190600Z" level=info msg="Daemon has completed initialization"
Mar 17 18:36:13.760728 systemd[1]: Started docker.service.
Mar 17 18:36:13.764362 env[1322]: time="2025-03-17T18:36:13.764311412Z" level=info msg="API listen on /run/docker.sock"
Mar 17 18:36:14.404661 env[1210]: time="2025-03-17T18:36:14.404612497Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\""
Mar 17 18:36:14.619627 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 18:36:14.619877 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:14.621336 systemd[1]: Starting kubelet.service...
Mar 17 18:36:14.712861 systemd[1]: Started kubelet.service.
Mar 17 18:36:14.750997 kubelet[1456]: E0317 18:36:14.750932 1456 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:36:14.753985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:36:14.754106 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:36:16.062611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3175713936.mount: Deactivated successfully.
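[Editor's note] The kubelet crash above (repeated below at restart counters 2 and 3) is the stock kubeadm bootstrap race: the unit starts before /var/lib/kubelet/config.yaml exists, and that file is normally written by `kubeadm init`/`kubeadm join`. For orientation, a minimal KubeletConfiguration of the kind expected at that path might look like the sketch below. Every value is an assumption rather than a file recovered from this host, though cgroupDriver, staticPodPath, and the client CA path do match settings that appear later in this log.

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only, not the real file
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches CgroupDriver:"systemd" in the nodeConfig below
staticPodPath: /etc/kubernetes/manifests  # matches "Adding static pod path" below
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt  # matches the client-ca-bundle controller below
```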
Mar 17 18:36:17.817261 env[1210]: time="2025-03-17T18:36:17.817168667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:17.881871 env[1210]: time="2025-03-17T18:36:17.881771132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:17.897690 env[1210]: time="2025-03-17T18:36:17.897627091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:17.903948 env[1210]: time="2025-03-17T18:36:17.903908151Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:17.904708 env[1210]: time="2025-03-17T18:36:17.904659501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\""
Mar 17 18:36:17.905265 env[1210]: time="2025-03-17T18:36:17.905237204Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\""
Mar 17 18:36:19.671036 env[1210]: time="2025-03-17T18:36:19.670978076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:19.673224 env[1210]: time="2025-03-17T18:36:19.673185300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:19.675033 env[1210]: time="2025-03-17T18:36:19.674988952Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:19.676810 env[1210]: time="2025-03-17T18:36:19.676757583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:19.677514 env[1210]: time="2025-03-17T18:36:19.677477191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\""
Mar 17 18:36:19.677967 env[1210]: time="2025-03-17T18:36:19.677914477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\""
Mar 17 18:36:22.190911 env[1210]: time="2025-03-17T18:36:22.190836784Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:22.192917 env[1210]: time="2025-03-17T18:36:22.192881480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:22.194987 env[1210]: time="2025-03-17T18:36:22.194962913Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:22.196740 env[1210]: time="2025-03-17T18:36:22.196710428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:22.197390 env[1210]: time="2025-03-17T18:36:22.197358802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\""
Mar 17 18:36:22.197900 env[1210]: time="2025-03-17T18:36:22.197825136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 18:36:24.550208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832655959.mount: Deactivated successfully.
Mar 17 18:36:25.004902 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 18:36:25.005077 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:25.006468 systemd[1]: Starting kubelet.service...
Mar 17 18:36:25.090236 systemd[1]: Started kubelet.service.
Mar 17 18:36:25.654612 kubelet[1468]: E0317 18:36:25.654546 1468 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:36:25.656731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:36:25.656906 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:36:25.812925 env[1210]: time="2025-03-17T18:36:25.812837310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:25.815614 env[1210]: time="2025-03-17T18:36:25.815566724Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:25.816961 env[1210]: time="2025-03-17T18:36:25.816926986Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:25.818576 env[1210]: time="2025-03-17T18:36:25.818500340Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:25.819040 env[1210]: time="2025-03-17T18:36:25.818931001Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\""
Mar 17 18:36:25.819570 env[1210]: time="2025-03-17T18:36:25.819516753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 17 18:36:26.380231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106366456.mount: Deactivated successfully.
Mar 17 18:36:27.802511 env[1210]: time="2025-03-17T18:36:27.802453017Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:27.902308 env[1210]: time="2025-03-17T18:36:27.902247527Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:27.968775 env[1210]: time="2025-03-17T18:36:27.968722717Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:28.033324 env[1210]: time="2025-03-17T18:36:28.033280107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:28.034121 env[1210]: time="2025-03-17T18:36:28.034074112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Mar 17 18:36:28.034561 env[1210]: time="2025-03-17T18:36:28.034532326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 18:36:31.194905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711857548.mount: Deactivated successfully.
Mar 17 18:36:31.550033 env[1210]: time="2025-03-17T18:36:31.549867369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:31.564117 env[1210]: time="2025-03-17T18:36:31.564027626Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:31.568054 env[1210]: time="2025-03-17T18:36:31.567827520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:31.570715 env[1210]: time="2025-03-17T18:36:31.570644189Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:31.571425 env[1210]: time="2025-03-17T18:36:31.571382117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 17 18:36:31.572067 env[1210]: time="2025-03-17T18:36:31.572018731Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 17 18:36:33.077843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2829850945.mount: Deactivated successfully.
Mar 17 18:36:35.668924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 17 18:36:35.669131 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:35.670428 systemd[1]: Starting kubelet.service...
Mar 17 18:36:35.756389 systemd[1]: Started kubelet.service.
Mar 17 18:36:35.792267 kubelet[1479]: E0317 18:36:35.792207 1479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:36:35.794085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:36:35.794251 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:36:36.780289 env[1210]: time="2025-03-17T18:36:36.780209661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:36.782405 env[1210]: time="2025-03-17T18:36:36.782356784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:36.784409 env[1210]: time="2025-03-17T18:36:36.784372801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:36.788892 env[1210]: time="2025-03-17T18:36:36.788861328Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:36:36.789751 env[1210]: time="2025-03-17T18:36:36.789712613Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Mar 17 18:36:38.730893 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:38.732919 systemd[1]: Starting kubelet.service...
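[Editor's note] The "Scheduled restart job, restart counter is at N" entries show systemd restarting the failed kubelet roughly every ten seconds (18:36:14, 18:36:25, 18:36:35). That cadence is consistent with unit settings along these lines, an assumed sketch of a typical kubeadm-style kubelet unit rather than the drop-in actually present on this host:

```ini
# Sketch only: the kind of [Service] settings that produce a ~10 s restart loop.
# All values are assumptions; the real unit file was not captured in this log.
[Service]
Restart=always
RestartSec=10
```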
Mar 17 18:36:38.752313 systemd[1]: Reloading.
Mar 17 18:36:38.828283 /usr/lib/systemd/system-generators/torcx-generator[1536]: time="2025-03-17T18:36:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:36:38.828315 /usr/lib/systemd/system-generators/torcx-generator[1536]: time="2025-03-17T18:36:38Z" level=info msg="torcx already run"
Mar 17 18:36:39.880916 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:36:39.880937 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:36:39.898551 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:36:39.979088 systemd[1]: Started kubelet.service.
Mar 17 18:36:39.980666 systemd[1]: Stopping kubelet.service...
Mar 17 18:36:39.984704 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:36:39.984909 systemd[1]: Stopped kubelet.service.
Mar 17 18:36:39.986488 systemd[1]: Starting kubelet.service...
Mar 17 18:36:40.078314 systemd[1]: Started kubelet.service.
Mar 17 18:36:40.129270 kubelet[1582]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:40.129725 kubelet[1582]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:36:40.129725 kubelet[1582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:36:40.129962 kubelet[1582]: I0317 18:36:40.129803 1582 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 18:36:40.501101 kubelet[1582]: I0317 18:36:40.500992 1582 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 17 18:36:40.501101 kubelet[1582]: I0317 18:36:40.501030 1582 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 18:36:40.501367 kubelet[1582]: I0317 18:36:40.501291 1582 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 17 18:36:40.563981 kubelet[1582]: E0317 18:36:40.563908 1582 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError"
Mar 17 18:36:40.566941 kubelet[1582]: I0317 18:36:40.566903 1582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 18:36:40.575485 kubelet[1582]: E0317 18:36:40.575431 1582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 18:36:40.575485 kubelet[1582]: I0317 18:36:40.575480 1582 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 18:36:40.579585 kubelet[1582]: I0317 18:36:40.579542 1582 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 18:36:40.579915 kubelet[1582]: I0317 18:36:40.579871 1582 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 18:36:40.580095 kubelet[1582]: I0317 18:36:40.579912 1582 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 18:36:40.580178 kubelet[1582]: I0317 18:36:40.580096 1582 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 18:36:40.580178 kubelet[1582]: I0317 18:36:40.580105 1582 container_manager_linux.go:304] "Creating device plugin manager"
Mar 17 18:36:40.580896 kubelet[1582]: I0317 18:36:40.580874 1582 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:36:40.585115 kubelet[1582]: I0317 18:36:40.585080 1582 kubelet.go:446] "Attempting to sync node with API server"
Mar 17 18:36:40.585165 kubelet[1582]: I0317 18:36:40.585117 1582 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 18:36:40.585165 kubelet[1582]: I0317 18:36:40.585140 1582 kubelet.go:352] "Adding apiserver pod source"
Mar 17 18:36:40.585165 kubelet[1582]: I0317 18:36:40.585153 1582 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 18:36:40.612410 kubelet[1582]: I0317 18:36:40.612351 1582 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Mar 17 18:36:40.612950 kubelet[1582]: I0317 18:36:40.612905 1582 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 18:36:40.613115 kubelet[1582]: W0317 18:36:40.612973 1582 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
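[Editor's note] The kubelet entries above use the klog header format: a severity letter (I/W/E/F), MMDD, a microsecond timestamp, a thread/PID field, and `file:line]` before the structured message. A small parser sketch for pulling these fields out of a journal line (the field names are ours, not klog's):

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+"
    r"(?P<pid>\d+)\s+(?P<src>[^:]+:\d+)\]\s?(?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Return the klog header fields of `line`, or {} if it is not a klog line."""
    m = KLOG.match(line)
    return m.groupdict() if m else {}
```

This makes it easy to, say, group the repeated `kubelet_node_status.go:467` errors below by source location when triaging a boot like this one.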
Mar 17 18:36:40.613115 kubelet[1582]: W0317 18:36:40.613076 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:40.613174 kubelet[1582]: E0317 18:36:40.613147 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:40.613464 kubelet[1582]: W0317 18:36:40.613433 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:40.613542 kubelet[1582]: E0317 18:36:40.613487 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:40.617278 kubelet[1582]: I0317 18:36:40.617251 1582 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 18:36:40.617356 kubelet[1582]: I0317 18:36:40.617296 1582 server.go:1287] "Started kubelet" Mar 17 18:36:40.617395 kubelet[1582]: I0317 18:36:40.617367 1582 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:36:40.618204 kubelet[1582]: I0317 18:36:40.618177 1582 server.go:490] "Adding debug handlers to kubelet server" Mar 17 18:36:40.620224 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid 
(left unmapped). Mar 17 18:36:40.620288 kubelet[1582]: I0317 18:36:40.620038 1582 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:36:40.620589 kubelet[1582]: I0317 18:36:40.620543 1582 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:36:40.621357 kubelet[1582]: I0317 18:36:40.620789 1582 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:36:40.621357 kubelet[1582]: I0317 18:36:40.621052 1582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:36:40.624469 kubelet[1582]: E0317 18:36:40.624423 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:40.624469 kubelet[1582]: I0317 18:36:40.624470 1582 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 18:36:40.624654 kubelet[1582]: I0317 18:36:40.624637 1582 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:36:40.624709 kubelet[1582]: I0317 18:36:40.624685 1582 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:36:40.625145 kubelet[1582]: W0317 18:36:40.625058 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:40.625145 kubelet[1582]: E0317 18:36:40.625107 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:40.625328 kubelet[1582]: I0317 
18:36:40.625294 1582 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:36:40.625413 kubelet[1582]: I0317 18:36:40.625393 1582 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:36:40.626280 kubelet[1582]: E0317 18:36:40.626189 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Mar 17 18:36:40.626736 kubelet[1582]: I0317 18:36:40.626645 1582 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:36:40.627422 kubelet[1582]: E0317 18:36:40.627395 1582 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:36:40.627552 kubelet[1582]: E0317 18:36:40.625418 1582 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182daaf51e79eb58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:36:40.617266008 +0000 UTC m=+0.534828091,LastTimestamp:2025-03-17 18:36:40.617266008 +0000 UTC m=+0.534828091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:36:40.642922 kubelet[1582]: I0317 18:36:40.642871 1582 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Mar 17 18:36:40.644607 kubelet[1582]: I0317 18:36:40.644579 1582 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:36:40.644607 kubelet[1582]: I0317 18:36:40.644606 1582 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 18:36:40.644672 kubelet[1582]: I0317 18:36:40.644629 1582 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 18:36:40.644672 kubelet[1582]: I0317 18:36:40.644638 1582 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 18:36:40.644741 kubelet[1582]: E0317 18:36:40.644684 1582 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:36:40.645414 kubelet[1582]: W0317 18:36:40.645361 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:40.645469 kubelet[1582]: E0317 18:36:40.645422 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:40.646530 kubelet[1582]: I0317 18:36:40.646509 1582 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 18:36:40.646530 kubelet[1582]: I0317 18:36:40.646527 1582 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 18:36:40.646608 kubelet[1582]: I0317 18:36:40.646544 1582 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:36:40.724931 kubelet[1582]: E0317 18:36:40.724863 1582 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:40.745271 kubelet[1582]: E0317 18:36:40.745220 1582 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:36:40.825675 kubelet[1582]: E0317 18:36:40.825525 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:40.827269 kubelet[1582]: E0317 18:36:40.827227 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Mar 17 18:36:40.925685 kubelet[1582]: E0317 18:36:40.925609 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:40.946064 kubelet[1582]: E0317 18:36:40.946001 1582 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:36:41.026539 kubelet[1582]: E0317 18:36:41.026445 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.127450 kubelet[1582]: E0317 18:36:41.127311 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.227948 kubelet[1582]: E0317 18:36:41.227888 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.228357 kubelet[1582]: E0317 18:36:41.228297 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Mar 17 18:36:41.328092 
kubelet[1582]: E0317 18:36:41.328000 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.346359 kubelet[1582]: E0317 18:36:41.346282 1582 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:36:41.428976 kubelet[1582]: E0317 18:36:41.428899 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.529109 kubelet[1582]: E0317 18:36:41.529014 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.529361 kubelet[1582]: W0317 18:36:41.529293 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:41.529399 kubelet[1582]: E0317 18:36:41.529375 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:41.617541 kubelet[1582]: W0317 18:36:41.617450 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:41.617541 kubelet[1582]: E0317 18:36:41.617530 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:41.629356 kubelet[1582]: E0317 18:36:41.629290 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.730241 kubelet[1582]: E0317 18:36:41.730074 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.830818 kubelet[1582]: E0317 18:36:41.830725 1582 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:36:41.891654 kubelet[1582]: I0317 18:36:41.891575 1582 policy_none.go:49] "None policy: Start" Mar 17 18:36:41.891654 kubelet[1582]: I0317 18:36:41.891622 1582 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 18:36:41.891654 kubelet[1582]: I0317 18:36:41.891638 1582 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:36:41.905750 systemd[1]: Created slice kubepods.slice. Mar 17 18:36:41.909857 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:36:41.912398 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 18:36:41.924523 kubelet[1582]: I0317 18:36:41.924478 1582 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:36:41.924667 kubelet[1582]: I0317 18:36:41.924625 1582 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:36:41.924667 kubelet[1582]: I0317 18:36:41.924637 1582 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:36:41.924834 kubelet[1582]: I0317 18:36:41.924808 1582 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:36:41.925580 kubelet[1582]: E0317 18:36:41.925561 1582 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 17 18:36:41.925648 kubelet[1582]: E0317 18:36:41.925601 1582 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:36:41.985556 kubelet[1582]: W0317 18:36:41.985374 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:41.985556 kubelet[1582]: E0317 18:36:41.985469 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:42.026251 kubelet[1582]: I0317 18:36:42.026203 1582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:42.026655 kubelet[1582]: E0317 18:36:42.026617 1582 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 17 18:36:42.028962 kubelet[1582]: E0317 18:36:42.028932 1582 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Mar 17 18:36:42.153749 systemd[1]: Created slice kubepods-burstable-pod65e598f975fd6cf456c23abb7c6a6ce1.slice. 
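The lease-controller retries above back off exponentially: the log shows `interval="400ms"`, then `"800ms"`, then `"1.6s"`. A minimal sketch of that doubling-with-cap behavior (the cap value here is illustrative, not taken from the kubelet source):

```python
# Illustrative doubling-with-cap retry schedule, matching the intervals
# seen in the log (400ms -> 800ms -> 1.6s). The 7000ms cap is an assumption.

def next_retry_interval(current_ms: int, cap_ms: int = 7000) -> int:
    """Double the retry interval, clamped to a maximum."""
    return min(current_ms * 2, cap_ms)

def retry_schedule(initial_ms: int, steps: int, cap_ms: int = 7000) -> list:
    """Produce the sequence of retry intervals starting from initial_ms."""
    out, cur = [], initial_ms
    for _ in range(steps):
        out.append(cur)
        cur = next_retry_interval(cur, cap_ms)
    return out
```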
Mar 17 18:36:42.160308 kubelet[1582]: E0317 18:36:42.160274 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:42.162661 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 17 18:36:42.164178 kubelet[1582]: E0317 18:36:42.164145 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:42.165538 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 17 18:36:42.166935 kubelet[1582]: E0317 18:36:42.166888 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:42.187682 kubelet[1582]: W0317 18:36:42.187624 1582 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Mar 17 18:36:42.187871 kubelet[1582]: E0317 18:36:42.187696 1582 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:42.228400 kubelet[1582]: I0317 18:36:42.228360 1582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:42.228834 kubelet[1582]: E0317 18:36:42.228722 1582 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 17 18:36:42.232978 kubelet[1582]: 
I0317 18:36:42.232941 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:42.233059 kubelet[1582]: I0317 18:36:42.232981 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:42.233059 kubelet[1582]: I0317 18:36:42.233012 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:42.233059 kubelet[1582]: I0317 18:36:42.233031 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:42.233059 kubelet[1582]: I0317 18:36:42.233048 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:42.233335 kubelet[1582]: 
I0317 18:36:42.233070 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:42.233335 kubelet[1582]: I0317 18:36:42.233090 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:42.233335 kubelet[1582]: I0317 18:36:42.233108 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:42.233335 kubelet[1582]: I0317 18:36:42.233125 1582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:42.461347 kubelet[1582]: E0317 18:36:42.461280 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:42.462100 env[1210]: time="2025-03-17T18:36:42.462036941Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65e598f975fd6cf456c23abb7c6a6ce1,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:42.465212 kubelet[1582]: E0317 18:36:42.465178 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:42.465592 env[1210]: time="2025-03-17T18:36:42.465551719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:42.467753 kubelet[1582]: E0317 18:36:42.467730 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:42.468136 env[1210]: time="2025-03-17T18:36:42.468092797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:42.569659 kubelet[1582]: E0317 18:36:42.569601 1582 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:36:42.630305 kubelet[1582]: I0317 18:36:42.630277 1582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:42.630739 kubelet[1582]: E0317 18:36:42.630681 1582 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 17 18:36:42.969609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3593825788.mount: 
Deactivated successfully. Mar 17 18:36:42.973515 env[1210]: time="2025-03-17T18:36:42.973466485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.977389 env[1210]: time="2025-03-17T18:36:42.977326044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.978738 env[1210]: time="2025-03-17T18:36:42.978711447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.980067 env[1210]: time="2025-03-17T18:36:42.980044753Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.981742 env[1210]: time="2025-03-17T18:36:42.981709889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.982723 env[1210]: time="2025-03-17T18:36:42.982696861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.984565 env[1210]: time="2025-03-17T18:36:42.984535168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.985911 env[1210]: time="2025-03-17T18:36:42.985885141Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.987555 env[1210]: time="2025-03-17T18:36:42.987528277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.988190 env[1210]: time="2025-03-17T18:36:42.988165699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.989673 env[1210]: time="2025-03-17T18:36:42.989640815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:42.990865 env[1210]: time="2025-03-17T18:36:42.990820974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:36:43.018082 env[1210]: time="2025-03-17T18:36:43.018017101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:43.018232 env[1210]: time="2025-03-17T18:36:43.018100919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:43.018232 env[1210]: time="2025-03-17T18:36:43.018137983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:43.018429 env[1210]: time="2025-03-17T18:36:43.018377619Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3fceea0a0fe065d538560f2cfd7ea787da13a7694ba9622bccaa96ac53d61e5 pid=1624 runtime=io.containerd.runc.v2 Mar 17 18:36:43.052535 systemd[1]: Started cri-containerd-c3fceea0a0fe065d538560f2cfd7ea787da13a7694ba9622bccaa96ac53d61e5.scope. Mar 17 18:36:43.056149 env[1210]: time="2025-03-17T18:36:43.056084985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:43.056307 env[1210]: time="2025-03-17T18:36:43.056156797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:43.056307 env[1210]: time="2025-03-17T18:36:43.056178134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:43.056358 env[1210]: time="2025-03-17T18:36:43.056309649Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1b1231c2d8de03c4ba12f588ea2bce3a480c420c9e141cf77923b30e0f05a81 pid=1651 runtime=io.containerd.runc.v2 Mar 17 18:36:43.064274 env[1210]: time="2025-03-17T18:36:43.064055490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:43.064274 env[1210]: time="2025-03-17T18:36:43.064105542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:43.064274 env[1210]: time="2025-03-17T18:36:43.064115535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:43.064616 env[1210]: time="2025-03-17T18:36:43.064554288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edd11a70052e7bca5268a329d7a5a5fae695de855dfec3069ac8c9ff25ce1d5d pid=1653 runtime=io.containerd.runc.v2 Mar 17 18:36:43.081866 systemd[1]: Started cri-containerd-a1b1231c2d8de03c4ba12f588ea2bce3a480c420c9e141cf77923b30e0f05a81.scope. Mar 17 18:36:43.115834 systemd[1]: Started cri-containerd-edd11a70052e7bca5268a329d7a5a5fae695de855dfec3069ac8c9ff25ce1d5d.scope. Mar 17 18:36:43.190904 env[1210]: time="2025-03-17T18:36:43.190838689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3fceea0a0fe065d538560f2cfd7ea787da13a7694ba9622bccaa96ac53d61e5\"" Mar 17 18:36:43.191993 kubelet[1582]: E0317 18:36:43.191963 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:43.194224 env[1210]: time="2025-03-17T18:36:43.194184227Z" level=info msg="CreateContainer within sandbox \"c3fceea0a0fe065d538560f2cfd7ea787da13a7694ba9622bccaa96ac53d61e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:36:43.204212 env[1210]: time="2025-03-17T18:36:43.204172615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"edd11a70052e7bca5268a329d7a5a5fae695de855dfec3069ac8c9ff25ce1d5d\"" Mar 17 18:36:43.205054 kubelet[1582]: E0317 18:36:43.205026 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:43.206624 
env[1210]: time="2025-03-17T18:36:43.206598842Z" level=info msg="CreateContainer within sandbox \"edd11a70052e7bca5268a329d7a5a5fae695de855dfec3069ac8c9ff25ce1d5d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:36:43.209395 env[1210]: time="2025-03-17T18:36:43.209355060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:65e598f975fd6cf456c23abb7c6a6ce1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1b1231c2d8de03c4ba12f588ea2bce3a480c420c9e141cf77923b30e0f05a81\"" Mar 17 18:36:43.209874 kubelet[1582]: E0317 18:36:43.209854 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:43.211178 env[1210]: time="2025-03-17T18:36:43.211137605Z" level=info msg="CreateContainer within sandbox \"a1b1231c2d8de03c4ba12f588ea2bce3a480c420c9e141cf77923b30e0f05a81\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:36:43.235001 env[1210]: time="2025-03-17T18:36:43.234911651Z" level=info msg="CreateContainer within sandbox \"c3fceea0a0fe065d538560f2cfd7ea787da13a7694ba9622bccaa96ac53d61e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf3b261fdd730798372aa6c43a02f44d6e78cdd274fa8b8b9ebd2be41e21dd6c\"" Mar 17 18:36:43.235583 env[1210]: time="2025-03-17T18:36:43.235551415Z" level=info msg="StartContainer for \"bf3b261fdd730798372aa6c43a02f44d6e78cdd274fa8b8b9ebd2be41e21dd6c\"" Mar 17 18:36:43.249487 systemd[1]: Started cri-containerd-bf3b261fdd730798372aa6c43a02f44d6e78cdd274fa8b8b9ebd2be41e21dd6c.scope. 
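The recurring `dns.go:153 "Nameserver limits exceeded"` entries reflect the classic resolver limit of three nameservers: the kubelet keeps the first three from resolv.conf and warns that the rest are omitted, which is why the applied line in the log is exactly `1.1.1.1 1.0.0.1 8.8.8.8`. A toy sketch of that truncation (the fourth nameserver in the test is hypothetical, not from this log):

```python
# Hedged sketch of the nameserver truncation behind the dns.go warning.
# Three is the historical glibc resolv.conf limit; this is an illustration,
# not the kubelet's actual implementation.

MAX_NAMESERVERS = 3

def applied_nameservers(configured: list) -> tuple:
    """Return (kept, omitted) nameserver lists after applying the limit."""
    return configured[:MAX_NAMESERVERS], configured[MAX_NAMESERVERS:]
```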
Mar 17 18:36:43.295460 env[1210]: time="2025-03-17T18:36:43.295401932Z" level=info msg="StartContainer for \"bf3b261fdd730798372aa6c43a02f44d6e78cdd274fa8b8b9ebd2be41e21dd6c\" returns successfully" Mar 17 18:36:43.299004 env[1210]: time="2025-03-17T18:36:43.298938088Z" level=info msg="CreateContainer within sandbox \"edd11a70052e7bca5268a329d7a5a5fae695de855dfec3069ac8c9ff25ce1d5d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"89bf92323fb0566db18a23c68ceed55ceba52e67846840010e0d0ba0cb3f44f0\"" Mar 17 18:36:43.299532 env[1210]: time="2025-03-17T18:36:43.299494696Z" level=info msg="StartContainer for \"89bf92323fb0566db18a23c68ceed55ceba52e67846840010e0d0ba0cb3f44f0\"" Mar 17 18:36:43.299755 env[1210]: time="2025-03-17T18:36:43.299732749Z" level=info msg="CreateContainer within sandbox \"a1b1231c2d8de03c4ba12f588ea2bce3a480c420c9e141cf77923b30e0f05a81\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4386c683a0761ea03db831fa37c9d4fbb18ff215211ad0ae88511ad6b0c622f\"" Mar 17 18:36:43.300176 env[1210]: time="2025-03-17T18:36:43.300155486Z" level=info msg="StartContainer for \"c4386c683a0761ea03db831fa37c9d4fbb18ff215211ad0ae88511ad6b0c622f\"" Mar 17 18:36:43.318252 systemd[1]: Started cri-containerd-c4386c683a0761ea03db831fa37c9d4fbb18ff215211ad0ae88511ad6b0c622f.scope. Mar 17 18:36:43.324348 systemd[1]: Started cri-containerd-89bf92323fb0566db18a23c68ceed55ceba52e67846840010e0d0ba0cb3f44f0.scope. 
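The sandbox and container entries above follow the CRI call order: `RunPodSandbox` returns a 64-hex sandbox id, `CreateContainer` runs within that sandbox and returns a container id, and `StartContainer` then runs it. A toy model of that ordering (not the real CRI gRPC API, just the state transitions visible in the log):

```python
# Toy sketch of the CRI lifecycle seen in the log:
# RunPodSandbox -> CreateContainer (within sandbox) -> StartContainer.
import hashlib

class ToyCRI:
    def __init__(self):
        self.sandboxes = {}
        self.containers = {}

    def _new_id(self, seed: str) -> str:
        # 64-hex ids, like the containerd-generated ids in the log.
        return hashlib.sha256(seed.encode()).hexdigest()

    def run_pod_sandbox(self, name: str, uid: str) -> str:
        sid = self._new_id(f"sandbox/{name}/{uid}")
        self.sandboxes[sid] = name
        return sid

    def create_container(self, sandbox_id: str, name: str) -> str:
        assert sandbox_id in self.sandboxes, "sandbox must exist first"
        cid = self._new_id(f"container/{sandbox_id}/{name}")
        self.containers[cid] = {"sandbox": sandbox_id, "state": "created"}
        return cid

    def start_container(self, container_id: str) -> None:
        assert self.containers[container_id]["state"] == "created"
        self.containers[container_id]["state"] = "running"
```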
Mar 17 18:36:43.396998 env[1210]: time="2025-03-17T18:36:43.396940748Z" level=info msg="StartContainer for \"c4386c683a0761ea03db831fa37c9d4fbb18ff215211ad0ae88511ad6b0c622f\" returns successfully" Mar 17 18:36:43.413113 env[1210]: time="2025-03-17T18:36:43.413072385Z" level=info msg="StartContainer for \"89bf92323fb0566db18a23c68ceed55ceba52e67846840010e0d0ba0cb3f44f0\" returns successfully" Mar 17 18:36:43.432637 kubelet[1582]: I0317 18:36:43.432605 1582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:43.433026 kubelet[1582]: E0317 18:36:43.433001 1582 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Mar 17 18:36:43.652461 kubelet[1582]: E0317 18:36:43.652351 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:43.652461 kubelet[1582]: E0317 18:36:43.652461 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:43.653809 kubelet[1582]: E0317 18:36:43.653792 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:43.653867 kubelet[1582]: E0317 18:36:43.653861 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:43.655024 kubelet[1582]: E0317 18:36:43.655006 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:43.655090 kubelet[1582]: E0317 18:36:43.655073 1582 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:44.661796 kubelet[1582]: E0317 18:36:44.657352 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:44.661796 kubelet[1582]: E0317 18:36:44.657468 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:44.661796 kubelet[1582]: E0317 18:36:44.657636 1582 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 17 18:36:44.661796 kubelet[1582]: E0317 18:36:44.657709 1582 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:44.745008 kubelet[1582]: E0317 18:36:44.744929 1582 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 18:36:45.034777 kubelet[1582]: I0317 18:36:45.034726 1582 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:45.043875 kubelet[1582]: I0317 18:36:45.043842 1582 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 18:36:45.126907 kubelet[1582]: I0317 18:36:45.126850 1582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:45.131589 kubelet[1582]: E0317 18:36:45.131559 1582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:45.131589 
kubelet[1582]: I0317 18:36:45.131582 1582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:45.133422 kubelet[1582]: E0317 18:36:45.133380 1582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:45.133422 kubelet[1582]: I0317 18:36:45.133412 1582 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:45.134832 kubelet[1582]: E0317 18:36:45.134804 1582 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:45.587562 kubelet[1582]: I0317 18:36:45.587488 1582 apiserver.go:52] "Watching apiserver" Mar 17 18:36:45.625363 kubelet[1582]: I0317 18:36:45.625308 1582 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:36:47.045519 systemd[1]: Reloading. Mar 17 18:36:47.110798 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-03-17T18:36:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:36:47.110827 /usr/lib/systemd/system-generators/torcx-generator[1878]: time="2025-03-17T18:36:47Z" level=info msg="torcx already run" Mar 17 18:36:47.184394 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
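The systemd warnings for locksmithd.service concern cgroup v1 directives that have cgroup v2 replacements (`CPUShares=` → `CPUWeight=`, `MemoryLimit=` → `MemoryMax=`). A hypothetical drop-in that would silence them; the specific values are illustrative, not taken from the actual unit file:

```ini
# /etc/systemd/system/locksmithd.service.d/override.conf (hypothetical)
[Service]
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=128M
```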
Mar 17 18:36:47.184412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:36:47.201986 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:36:47.296202 systemd[1]: Stopping kubelet.service... Mar 17 18:36:47.317232 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:36:47.317430 systemd[1]: Stopped kubelet.service. Mar 17 18:36:47.319175 systemd[1]: Starting kubelet.service... Mar 17 18:36:47.406826 systemd[1]: Started kubelet.service. Mar 17 18:36:47.452039 kubelet[1923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:36:47.452039 kubelet[1923]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 17 18:36:47.452039 kubelet[1923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
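The deprecated-flag warnings at kubelet startup point to the KubeletConfiguration file. A sketch of moving `--container-runtime-endpoint` and `--volume-plugin-dir` into it, assuming the common default paths (the values shown are illustrative, not read from this host):

```yaml
# Hypothetical /etc/kubernetes/kubelet-config.yaml replacing the deprecated flags.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```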
Mar 17 18:36:47.452421 kubelet[1923]: I0317 18:36:47.452117 1923 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:36:47.458502 kubelet[1923]: I0317 18:36:47.458466 1923 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 18:36:47.458502 kubelet[1923]: I0317 18:36:47.458500 1923 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:36:47.458832 kubelet[1923]: I0317 18:36:47.458793 1923 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 18:36:47.459951 kubelet[1923]: I0317 18:36:47.459925 1923 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:36:47.461886 kubelet[1923]: I0317 18:36:47.461816 1923 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:36:47.466076 kubelet[1923]: E0317 18:36:47.466045 1923 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:36:47.466076 kubelet[1923]: I0317 18:36:47.466077 1923 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:36:47.469864 kubelet[1923]: I0317 18:36:47.469839 1923 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:36:47.470052 kubelet[1923]: I0317 18:36:47.470014 1923 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:36:47.470200 kubelet[1923]: I0317 18:36:47.470043 1923 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:36:47.470283 kubelet[1923]: I0317 18:36:47.470201 1923 topology_manager.go:138] "Creating topology manager with none policy" 
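The nodeConfig dump above lists the default hard-eviction thresholds: `memory.available < 100Mi`, `nodefs.available < 10%`, `nodefs.inodesFree < 5%`, `imagefs.available < 15%`, `imagefs.inodesFree < 5%`. A toy evaluation of those signals (values copied from the log's HardEvictionThresholds; this is an illustration, not the eviction manager's code):

```python
# Toy check of the hard-eviction thresholds shown in the nodeConfig log line.

MI = 1024 * 1024

def exceeds_threshold(signal: str, available: float, capacity: float) -> bool:
    """Return True when a hard-eviction threshold is crossed."""
    thresholds = {
        "memory.available": ("quantity", 100 * MI),   # absolute quantity
        "nodefs.available": ("percentage", 0.10),     # fraction of capacity
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
        "imagefs.inodesFree": ("percentage", 0.05),
    }
    kind, value = thresholds[signal]
    if kind == "quantity":
        return available < value
    return available < value * capacity
```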
Mar 17 18:36:47.470283 kubelet[1923]: I0317 18:36:47.470211 1923 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 18:36:47.470283 kubelet[1923]: I0317 18:36:47.470247 1923 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:36:47.470389 kubelet[1923]: I0317 18:36:47.470375 1923 kubelet.go:446] "Attempting to sync node with API server" Mar 17 18:36:47.470415 kubelet[1923]: I0317 18:36:47.470391 1923 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:36:47.470415 kubelet[1923]: I0317 18:36:47.470407 1923 kubelet.go:352] "Adding apiserver pod source" Mar 17 18:36:47.470415 kubelet[1923]: I0317 18:36:47.470415 1923 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:36:47.471336 kubelet[1923]: I0317 18:36:47.471313 1923 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.471648 1923 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.472075 1923 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.472098 1923 server.go:1287] "Started kubelet" Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.472284 1923 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.472432 1923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.472679 1923 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:36:47.475811 kubelet[1923]: I0317 18:36:47.473462 1923 server.go:490] "Adding debug handlers to kubelet server" Mar 17 18:36:47.483698 kubelet[1923]: I0317 18:36:47.483450 1923 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:36:47.484279 kubelet[1923]: E0317 18:36:47.484259 1923 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:36:47.484642 kubelet[1923]: I0317 18:36:47.484607 1923 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:36:47.485078 kubelet[1923]: I0317 18:36:47.485040 1923 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 17 18:36:47.485631 kubelet[1923]: I0317 18:36:47.485373 1923 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:36:47.485631 kubelet[1923]: I0317 18:36:47.485479 1923 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:36:47.489333 kubelet[1923]: I0317 18:36:47.489303 1923 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:36:47.489333 kubelet[1923]: I0317 18:36:47.489338 1923 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:36:47.489984 kubelet[1923]: I0317 18:36:47.489447 1923 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:36:47.493796 kubelet[1923]: I0317 18:36:47.493741 1923 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:36:47.494857 kubelet[1923]: I0317 18:36:47.494834 1923 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:36:47.494857 kubelet[1923]: I0317 18:36:47.494857 1923 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 17 18:36:47.494945 kubelet[1923]: I0317 18:36:47.494875 1923 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 17 18:36:47.494945 kubelet[1923]: I0317 18:36:47.494882 1923 kubelet.go:2388] "Starting kubelet main sync loop" Mar 17 18:36:47.494945 kubelet[1923]: E0317 18:36:47.494932 1923 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:36:47.522280 kubelet[1923]: I0317 18:36:47.522240 1923 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 17 18:36:47.522280 kubelet[1923]: I0317 18:36:47.522260 1923 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 17 18:36:47.522280 kubelet[1923]: I0317 18:36:47.522278 1923 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:36:47.522496 kubelet[1923]: I0317 18:36:47.522448 1923 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:36:47.522496 kubelet[1923]: I0317 18:36:47.522457 1923 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:36:47.522496 kubelet[1923]: I0317 18:36:47.522475 1923 policy_none.go:49] "None policy: Start" Mar 17 18:36:47.522496 kubelet[1923]: I0317 18:36:47.522483 1923 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 17 18:36:47.522496 kubelet[1923]: I0317 18:36:47.522491 1923 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:36:47.522602 kubelet[1923]: I0317 18:36:47.522574 1923 state_mem.go:75] "Updated machine memory state" Mar 17 18:36:47.526467 kubelet[1923]: I0317 18:36:47.526444 1923 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:36:47.527096 kubelet[1923]: I0317 
18:36:47.527064 1923 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:36:47.527152 kubelet[1923]: I0317 18:36:47.527095 1923 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:36:47.527838 kubelet[1923]: I0317 18:36:47.527802 1923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:36:47.528780 kubelet[1923]: E0317 18:36:47.528742 1923 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 17 18:36:47.596567 kubelet[1923]: I0317 18:36:47.596424 1923 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.596567 kubelet[1923]: I0317 18:36:47.596469 1923 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:47.596903 kubelet[1923]: I0317 18:36:47.596469 1923 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:47.632664 kubelet[1923]: I0317 18:36:47.632612 1923 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 17 18:36:47.639058 kubelet[1923]: I0317 18:36:47.639026 1923 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 17 18:36:47.639139 kubelet[1923]: I0317 18:36:47.639123 1923 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 17 18:36:47.685859 kubelet[1923]: I0317 18:36:47.685793 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.685859 kubelet[1923]: I0317 18:36:47.685851 1923 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.685859 kubelet[1923]: I0317 18:36:47.685874 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.686125 kubelet[1923]: I0317 18:36:47.685890 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.686125 kubelet[1923]: I0317 18:36:47.685919 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:47.686125 kubelet[1923]: I0317 18:36:47.686004 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:47.686125 kubelet[1923]: I0317 18:36:47.686107 1923 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:36:47.686242 kubelet[1923]: I0317 18:36:47.686165 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:47.686242 kubelet[1923]: I0317 18:36:47.686189 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65e598f975fd6cf456c23abb7c6a6ce1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"65e598f975fd6cf456c23abb7c6a6ce1\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:36:47.903146 kubelet[1923]: E0317 18:36:47.902970 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:47.903315 kubelet[1923]: E0317 18:36:47.903293 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:47.903660 kubelet[1923]: E0317 18:36:47.903639 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:48.023874 update_engine[1202]: I0317 18:36:48.023823 1202 update_attempter.cc:509] Updating boot flags... 
Mar 17 18:36:48.027099 sudo[1957]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:36:48.027364 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:36:48.471518 kubelet[1923]: I0317 18:36:48.471475 1923 apiserver.go:52] "Watching apiserver" Mar 17 18:36:48.486232 kubelet[1923]: I0317 18:36:48.486200 1923 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:36:48.507021 kubelet[1923]: E0317 18:36:48.506992 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:48.507416 kubelet[1923]: I0317 18:36:48.507381 1923 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:48.507589 kubelet[1923]: E0317 18:36:48.507562 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:48.567943 sudo[1957]: pam_unix(sudo:session): session closed for user root Mar 17 18:36:48.685711 kubelet[1923]: E0317 18:36:48.685666 1923 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 17 18:36:48.686098 kubelet[1923]: E0317 18:36:48.686029 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:48.704585 kubelet[1923]: I0317 18:36:48.704495 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7044723670000002 podStartE2EDuration="1.704472367s" podCreationTimestamp="2025-03-17 18:36:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:48.689484605 +0000 UTC m=+1.278313748" watchObservedRunningTime="2025-03-17 18:36:48.704472367 +0000 UTC m=+1.293301510" Mar 17 18:36:48.707615 kubelet[1923]: I0317 18:36:48.704670 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7046627970000001 podStartE2EDuration="1.704662797s" podCreationTimestamp="2025-03-17 18:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:48.704240426 +0000 UTC m=+1.293069579" watchObservedRunningTime="2025-03-17 18:36:48.704662797 +0000 UTC m=+1.293491941" Mar 17 18:36:48.739804 kubelet[1923]: I0317 18:36:48.739159 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7391370670000001 podStartE2EDuration="1.739137067s" podCreationTimestamp="2025-03-17 18:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:48.716653421 +0000 UTC m=+1.305482564" watchObservedRunningTime="2025-03-17 18:36:48.739137067 +0000 UTC m=+1.327966221" Mar 17 18:36:49.508339 kubelet[1923]: E0317 18:36:49.508303 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:49.508727 kubelet[1923]: E0317 18:36:49.508350 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:49.693398 kubelet[1923]: E0317 18:36:49.693369 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:50.562549 kubelet[1923]: E0317 18:36:50.562501 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:50.770701 sudo[1309]: pam_unix(sudo:session): session closed for user root Mar 17 18:36:50.771913 sshd[1304]: pam_unix(sshd:session): session closed for user core Mar 17 18:36:50.774459 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:58196.service: Deactivated successfully. Mar 17 18:36:50.775364 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:36:50.775560 systemd[1]: session-5.scope: Consumed 4.375s CPU time. Mar 17 18:36:50.776125 systemd-logind[1196]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:36:50.776913 systemd-logind[1196]: Removed session 5. Mar 17 18:36:52.347696 kubelet[1923]: I0317 18:36:52.347637 1923 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:36:52.348160 kubelet[1923]: I0317 18:36:52.348105 1923 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:36:52.348206 env[1210]: time="2025-03-17T18:36:52.347973068Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:36:53.974075 systemd[1]: Created slice kubepods-besteffort-pod5826fd10_d042_43a8_8577_6aec18bd6d85.slice. Mar 17 18:36:53.986425 systemd[1]: Created slice kubepods-burstable-pod04b81920_4c44_4347_a538_5c6d5477fa7f.slice. Mar 17 18:36:53.998532 systemd[1]: Created slice kubepods-besteffort-podbaf52f13_97f9_4b2f_a06e_830bfc0f768c.slice. 
Mar 17 18:36:54.028998 kubelet[1923]: I0317 18:36:54.028932 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-bpf-maps\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.028998 kubelet[1923]: I0317 18:36:54.028985 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-hostproc\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029403 kubelet[1923]: I0317 18:36:54.029010 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgksf\" (UniqueName: \"kubernetes.io/projected/5826fd10-d042-43a8-8577-6aec18bd6d85-kube-api-access-xgksf\") pod \"kube-proxy-l2b5c\" (UID: \"5826fd10-d042-43a8-8577-6aec18bd6d85\") " pod="kube-system/kube-proxy-l2b5c" Mar 17 18:36:54.029403 kubelet[1923]: I0317 18:36:54.029057 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-kernel\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029403 kubelet[1923]: I0317 18:36:54.029129 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/baf52f13-97f9-4b2f-a06e-830bfc0f768c-kube-api-access-bb5cz\") pod \"cilium-operator-6c4d7847fc-chxnt\" (UID: \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\") " pod="kube-system/cilium-operator-6c4d7847fc-chxnt" Mar 17 18:36:54.029403 kubelet[1923]: I0317 18:36:54.029193 
1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-net\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029403 kubelet[1923]: I0317 18:36:54.029218 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7snm\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-kube-api-access-m7snm\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029241 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5826fd10-d042-43a8-8577-6aec18bd6d85-kube-proxy\") pod \"kube-proxy-l2b5c\" (UID: \"5826fd10-d042-43a8-8577-6aec18bd6d85\") " pod="kube-system/kube-proxy-l2b5c" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029260 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-run\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029281 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04b81920-4c44-4347-a538-5c6d5477fa7f-clustermesh-secrets\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029301 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5826fd10-d042-43a8-8577-6aec18bd6d85-lib-modules\") pod \"kube-proxy-l2b5c\" (UID: \"5826fd10-d042-43a8-8577-6aec18bd6d85\") " pod="kube-system/kube-proxy-l2b5c" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029322 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-lib-modules\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029547 kubelet[1923]: I0317 18:36:54.029348 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-hubble-tls\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029369 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5826fd10-d042-43a8-8577-6aec18bd6d85-xtables-lock\") pod \"kube-proxy-l2b5c\" (UID: \"5826fd10-d042-43a8-8577-6aec18bd6d85\") " pod="kube-system/kube-proxy-l2b5c" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029399 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-cgroup\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029419 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-etc-cni-netd\") pod \"cilium-c75ds\" 
(UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029437 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cni-path\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029457 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-xtables-lock\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029684 kubelet[1923]: I0317 18:36:54.029478 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-config-path\") pod \"cilium-c75ds\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " pod="kube-system/cilium-c75ds" Mar 17 18:36:54.029875 kubelet[1923]: I0317 18:36:54.029521 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baf52f13-97f9-4b2f-a06e-830bfc0f768c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-chxnt\" (UID: \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\") " pod="kube-system/cilium-operator-6c4d7847fc-chxnt" Mar 17 18:36:54.130375 kubelet[1923]: I0317 18:36:54.130315 1923 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:36:54.584429 kubelet[1923]: E0317 18:36:54.584364 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:54.585144 env[1210]: time="2025-03-17T18:36:54.585085084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2b5c,Uid:5826fd10-d042-43a8-8577-6aec18bd6d85,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:54.588877 kubelet[1923]: E0317 18:36:54.588848 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:54.589383 env[1210]: time="2025-03-17T18:36:54.589346109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c75ds,Uid:04b81920-4c44-4347-a538-5c6d5477fa7f,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:54.601160 kubelet[1923]: E0317 18:36:54.601144 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:54.601470 env[1210]: time="2025-03-17T18:36:54.601438872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-chxnt,Uid:baf52f13-97f9-4b2f-a06e-830bfc0f768c,Namespace:kube-system,Attempt:0,}" Mar 17 18:36:55.849474 env[1210]: time="2025-03-17T18:36:55.849394584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:55.849474 env[1210]: time="2025-03-17T18:36:55.849450851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:55.849474 env[1210]: time="2025-03-17T18:36:55.849468127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:55.849824 env[1210]: time="2025-03-17T18:36:55.849576192Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00a3349aec61b36f5cb84b26719acd0f358c0e2b06e2719b7ced21de17eb1938 pid=2034 runtime=io.containerd.runc.v2 Mar 17 18:36:55.860893 kubelet[1923]: E0317 18:36:55.860847 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:55.863074 systemd[1]: Started cri-containerd-00a3349aec61b36f5cb84b26719acd0f358c0e2b06e2719b7ced21de17eb1938.scope. Mar 17 18:36:55.881669 env[1210]: time="2025-03-17T18:36:55.881169526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2b5c,Uid:5826fd10-d042-43a8-8577-6aec18bd6d85,Namespace:kube-system,Attempt:0,} returns sandbox id \"00a3349aec61b36f5cb84b26719acd0f358c0e2b06e2719b7ced21de17eb1938\"" Mar 17 18:36:55.882048 kubelet[1923]: E0317 18:36:55.882002 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:55.883875 env[1210]: time="2025-03-17T18:36:55.883833011Z" level=info msg="CreateContainer within sandbox \"00a3349aec61b36f5cb84b26719acd0f358c0e2b06e2719b7ced21de17eb1938\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:36:55.947898 env[1210]: time="2025-03-17T18:36:55.947808209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:55.947898 env[1210]: time="2025-03-17T18:36:55.947853092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:55.947898 env[1210]: time="2025-03-17T18:36:55.947865458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:55.948176 env[1210]: time="2025-03-17T18:36:55.948120077Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f pid=2077 runtime=io.containerd.runc.v2 Mar 17 18:36:55.961296 systemd[1]: Started cri-containerd-f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f.scope. Mar 17 18:36:55.974961 env[1210]: time="2025-03-17T18:36:55.974870838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:36:55.975143 env[1210]: time="2025-03-17T18:36:55.974973662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:36:55.975143 env[1210]: time="2025-03-17T18:36:55.975009386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:36:55.975252 env[1210]: time="2025-03-17T18:36:55.975210504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00 pid=2111 runtime=io.containerd.runc.v2 Mar 17 18:36:55.984939 env[1210]: time="2025-03-17T18:36:55.984612863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c75ds,Uid:04b81920-4c44-4347-a538-5c6d5477fa7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\"" Mar 17 18:36:55.985405 kubelet[1923]: E0317 18:36:55.985381 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:55.986696 env[1210]: time="2025-03-17T18:36:55.986667763Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:36:55.992219 systemd[1]: Started cri-containerd-058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00.scope. 
Mar 17 18:36:56.022386 env[1210]: time="2025-03-17T18:36:56.022340046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-chxnt,Uid:baf52f13-97f9-4b2f-a06e-830bfc0f768c,Namespace:kube-system,Attempt:0,} returns sandbox id \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\"" Mar 17 18:36:56.023097 kubelet[1923]: E0317 18:36:56.023064 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:56.520604 kubelet[1923]: E0317 18:36:56.520574 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:36:56.790149 env[1210]: time="2025-03-17T18:36:56.790004522Z" level=info msg="CreateContainer within sandbox \"00a3349aec61b36f5cb84b26719acd0f358c0e2b06e2719b7ced21de17eb1938\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7168cf5be33d24085e85a7f0e63eb2725931ff66a1f22e769ec5126b678847f0\"" Mar 17 18:36:56.790585 env[1210]: time="2025-03-17T18:36:56.790546985Z" level=info msg="StartContainer for \"7168cf5be33d24085e85a7f0e63eb2725931ff66a1f22e769ec5126b678847f0\"" Mar 17 18:36:56.805328 systemd[1]: Started cri-containerd-7168cf5be33d24085e85a7f0e63eb2725931ff66a1f22e769ec5126b678847f0.scope. 
Mar 17 18:36:57.039067 env[1210]: time="2025-03-17T18:36:57.039015248Z" level=info msg="StartContainer for \"7168cf5be33d24085e85a7f0e63eb2725931ff66a1f22e769ec5126b678847f0\" returns successfully"
Mar 17 18:36:57.522853 kubelet[1923]: E0317 18:36:57.522825 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:57.523218 kubelet[1923]: E0317 18:36:57.522941 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:57.758371 kubelet[1923]: I0317 18:36:57.758309 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l2b5c" podStartSLOduration=4.758292879 podStartE2EDuration="4.758292879s" podCreationTimestamp="2025-03-17 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:36:57.758149313 +0000 UTC m=+10.346978456" watchObservedRunningTime="2025-03-17 18:36:57.758292879 +0000 UTC m=+10.347122022"
Mar 17 18:36:58.524166 kubelet[1923]: E0317 18:36:58.524134 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:36:59.698161 kubelet[1923]: E0317 18:36:59.698103 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:00.567660 kubelet[1923]: E0317 18:37:00.567582 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:01.528542 kubelet[1923]: E0317 18:37:01.528510 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:08.173165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310759506.mount: Deactivated successfully.
Mar 17 18:37:15.500813 env[1210]: time="2025-03-17T18:37:15.500725240Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:15.503778 env[1210]: time="2025-03-17T18:37:15.503730943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:15.505777 env[1210]: time="2025-03-17T18:37:15.505726437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:15.506339 env[1210]: time="2025-03-17T18:37:15.506302275Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 17 18:37:15.507605 env[1210]: time="2025-03-17T18:37:15.507554681Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 17 18:37:15.509234 env[1210]: time="2025-03-17T18:37:15.508868639Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:37:15.681862 env[1210]: time="2025-03-17T18:37:15.681810546Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\""
Mar 17 18:37:15.682391 env[1210]: time="2025-03-17T18:37:15.682359270Z" level=info msg="StartContainer for \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\""
Mar 17 18:37:15.699778 systemd[1]: Started cri-containerd-0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd.scope.
Mar 17 18:37:15.752025 systemd[1]: cri-containerd-0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd.scope: Deactivated successfully.
Mar 17 18:37:15.811795 env[1210]: time="2025-03-17T18:37:15.811703940Z" level=info msg="StartContainer for \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\" returns successfully"
Mar 17 18:37:15.831960 env[1210]: time="2025-03-17T18:37:15.831911298Z" level=info msg="shim disconnected" id=0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd
Mar 17 18:37:15.831960 env[1210]: time="2025-03-17T18:37:15.831957589Z" level=warning msg="cleaning up after shim disconnected" id=0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd namespace=k8s.io
Mar 17 18:37:15.831960 env[1210]: time="2025-03-17T18:37:15.831966627Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:15.838428 env[1210]: time="2025-03-17T18:37:15.838382811Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2371 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:16.551851 kubelet[1923]: E0317 18:37:16.551799 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:16.553748 env[1210]: time="2025-03-17T18:37:16.553700140Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:37:16.675575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd-rootfs.mount: Deactivated successfully.
Mar 17 18:37:16.882693 env[1210]: time="2025-03-17T18:37:16.882567023Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\""
Mar 17 18:37:16.883255 env[1210]: time="2025-03-17T18:37:16.883221766Z" level=info msg="StartContainer for \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\""
Mar 17 18:37:16.900668 systemd[1]: Started cri-containerd-201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136.scope.
Mar 17 18:37:16.928836 env[1210]: time="2025-03-17T18:37:16.928748679Z" level=info msg="StartContainer for \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\" returns successfully"
Mar 17 18:37:16.938337 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 18:37:16.938604 systemd[1]: Stopped systemd-sysctl.service.
Mar 17 18:37:16.938840 systemd[1]: Stopping systemd-sysctl.service...
Mar 17 18:37:16.940583 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:37:16.942687 systemd[1]: cri-containerd-201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136.scope: Deactivated successfully.
Mar 17 18:37:16.949275 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:37:16.987871 env[1210]: time="2025-03-17T18:37:16.987793592Z" level=info msg="shim disconnected" id=201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136
Mar 17 18:37:16.987871 env[1210]: time="2025-03-17T18:37:16.987864722Z" level=warning msg="cleaning up after shim disconnected" id=201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136 namespace=k8s.io
Mar 17 18:37:16.987871 env[1210]: time="2025-03-17T18:37:16.987875924Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:16.995523 env[1210]: time="2025-03-17T18:37:16.995466331Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2436 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:17.555535 kubelet[1923]: E0317 18:37:17.555493 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:17.557993 env[1210]: time="2025-03-17T18:37:17.557950013Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:37:17.675812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136-rootfs.mount: Deactivated successfully.
Mar 17 18:37:18.283721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3708073510.mount: Deactivated successfully.
Mar 17 18:37:20.633715 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:45816.service.
Mar 17 18:37:20.874126 sshd[2449]: Accepted publickey for core from 10.0.0.1 port 45816 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:20.875898 sshd[2449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:20.881575 systemd-logind[1196]: New session 6 of user core.
Mar 17 18:37:20.882614 systemd[1]: Started session-6.scope.
Mar 17 18:37:20.933975 env[1210]: time="2025-03-17T18:37:20.933884940Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\""
Mar 17 18:37:20.935274 env[1210]: time="2025-03-17T18:37:20.935193432Z" level=info msg="StartContainer for \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\""
Mar 17 18:37:20.957071 systemd[1]: Started cri-containerd-b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08.scope.
Mar 17 18:37:21.230317 systemd[1]: cri-containerd-b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08.scope: Deactivated successfully.
Mar 17 18:37:21.432858 sshd[2449]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:21.435009 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:45816.service: Deactivated successfully.
Mar 17 18:37:21.435679 systemd[1]: session-6.scope: Deactivated successfully.
Mar 17 18:37:21.436215 systemd-logind[1196]: Session 6 logged out. Waiting for processes to exit.
Mar 17 18:37:21.437034 systemd-logind[1196]: Removed session 6.
Mar 17 18:37:21.483865 env[1210]: time="2025-03-17T18:37:21.483743897Z" level=info msg="StartContainer for \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\" returns successfully"
Mar 17 18:37:21.499054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08-rootfs.mount: Deactivated successfully.
Mar 17 18:37:21.568481 kubelet[1923]: E0317 18:37:21.561928 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:21.866572 env[1210]: time="2025-03-17T18:37:21.866434372Z" level=info msg="shim disconnected" id=b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08
Mar 17 18:37:21.866572 env[1210]: time="2025-03-17T18:37:21.866507516Z" level=warning msg="cleaning up after shim disconnected" id=b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08 namespace=k8s.io
Mar 17 18:37:21.866572 env[1210]: time="2025-03-17T18:37:21.866521494Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:21.872962 env[1210]: time="2025-03-17T18:37:21.872922820Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2505 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:22.565269 kubelet[1923]: E0317 18:37:22.565242 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:22.567683 env[1210]: time="2025-03-17T18:37:22.567428219Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:37:23.023091 env[1210]: time="2025-03-17T18:37:23.023023465Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\""
Mar 17 18:37:23.023577 env[1210]: time="2025-03-17T18:37:23.023547734Z" level=info msg="StartContainer for \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\""
Mar 17 18:37:23.035378 env[1210]: time="2025-03-17T18:37:23.035339442Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:23.039037 env[1210]: time="2025-03-17T18:37:23.039007577Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:23.040182 systemd[1]: Started cri-containerd-17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59.scope.
Mar 17 18:37:23.044736 env[1210]: time="2025-03-17T18:37:23.043804014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:37:23.044736 env[1210]: time="2025-03-17T18:37:23.043981823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 17 18:37:23.049799 env[1210]: time="2025-03-17T18:37:23.046973400Z" level=info msg="CreateContainer within sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 17 18:37:23.067110 systemd[1]: cri-containerd-17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59.scope: Deactivated successfully.
Mar 17 18:37:23.270625 env[1210]: time="2025-03-17T18:37:23.270535654Z" level=info msg="CreateContainer within sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\""
Mar 17 18:37:23.271172 env[1210]: time="2025-03-17T18:37:23.271144138Z" level=info msg="StartContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\""
Mar 17 18:37:23.274781 env[1210]: time="2025-03-17T18:37:23.274675455Z" level=info msg="StartContainer for \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\" returns successfully"
Mar 17 18:37:23.287150 systemd[1]: Started cri-containerd-06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa.scope.
Mar 17 18:37:23.434048 env[1210]: time="2025-03-17T18:37:23.433972018Z" level=info msg="StartContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" returns successfully"
Mar 17 18:37:23.434834 env[1210]: time="2025-03-17T18:37:23.434795053Z" level=info msg="shim disconnected" id=17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59
Mar 17 18:37:23.435000 env[1210]: time="2025-03-17T18:37:23.434983512Z" level=warning msg="cleaning up after shim disconnected" id=17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59 namespace=k8s.io
Mar 17 18:37:23.435076 env[1210]: time="2025-03-17T18:37:23.435057868Z" level=info msg="cleaning up dead shim"
Mar 17 18:37:23.444295 env[1210]: time="2025-03-17T18:37:23.444227874Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:37:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2598 runtime=io.containerd.runc.v2\n"
Mar 17 18:37:23.568929 kubelet[1923]: E0317 18:37:23.568814 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:23.571997 kubelet[1923]: E0317 18:37:23.571945 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:23.574068 env[1210]: time="2025-03-17T18:37:23.574032381Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:37:24.003292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59-rootfs.mount: Deactivated successfully.
Mar 17 18:37:24.192869 env[1210]: time="2025-03-17T18:37:24.192791269Z" level=info msg="CreateContainer within sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\""
Mar 17 18:37:24.193319 env[1210]: time="2025-03-17T18:37:24.193278495Z" level=info msg="StartContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\""
Mar 17 18:37:24.212265 systemd[1]: run-containerd-runc-k8s.io-13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228-runc.8Bk8RH.mount: Deactivated successfully.
Mar 17 18:37:24.214557 systemd[1]: Started cri-containerd-13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228.scope.
Mar 17 18:37:24.230789 kubelet[1923]: I0317 18:37:24.230696 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-chxnt" podStartSLOduration=4.209341297 podStartE2EDuration="31.230670972s" podCreationTimestamp="2025-03-17 18:36:53 +0000 UTC" firstStartedPulling="2025-03-17 18:36:56.023916729 +0000 UTC m=+8.612745872" lastFinishedPulling="2025-03-17 18:37:23.045246414 +0000 UTC m=+35.634075547" observedRunningTime="2025-03-17 18:37:23.98206742 +0000 UTC m=+36.570896563" watchObservedRunningTime="2025-03-17 18:37:24.230670972 +0000 UTC m=+36.819500115"
Mar 17 18:37:24.382909 env[1210]: time="2025-03-17T18:37:24.382752236Z" level=info msg="StartContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" returns successfully"
Mar 17 18:37:24.553582 kubelet[1923]: I0317 18:37:24.553547 1923 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Mar 17 18:37:24.576642 kubelet[1923]: E0317 18:37:24.576614 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:24.576999 kubelet[1923]: E0317 18:37:24.576929 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:24.815738 systemd[1]: Created slice kubepods-burstable-pod69c175fd_470b_4855_8623_bbc31819fe03.slice.
Mar 17 18:37:24.823226 systemd[1]: Created slice kubepods-burstable-podae31c7c3_bea8_41cf_8a60_49e6b9541d74.slice.
Mar 17 18:37:24.853671 kubelet[1923]: I0317 18:37:24.853601 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae31c7c3-bea8-41cf-8a60-49e6b9541d74-config-volume\") pod \"coredns-668d6bf9bc-f5nlt\" (UID: \"ae31c7c3-bea8-41cf-8a60-49e6b9541d74\") " pod="kube-system/coredns-668d6bf9bc-f5nlt"
Mar 17 18:37:24.853671 kubelet[1923]: I0317 18:37:24.853655 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnk4s\" (UniqueName: \"kubernetes.io/projected/69c175fd-470b-4855-8623-bbc31819fe03-kube-api-access-tnk4s\") pod \"coredns-668d6bf9bc-grcd4\" (UID: \"69c175fd-470b-4855-8623-bbc31819fe03\") " pod="kube-system/coredns-668d6bf9bc-grcd4"
Mar 17 18:37:24.853671 kubelet[1923]: I0317 18:37:24.853685 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l7qg\" (UniqueName: \"kubernetes.io/projected/ae31c7c3-bea8-41cf-8a60-49e6b9541d74-kube-api-access-9l7qg\") pod \"coredns-668d6bf9bc-f5nlt\" (UID: \"ae31c7c3-bea8-41cf-8a60-49e6b9541d74\") " pod="kube-system/coredns-668d6bf9bc-f5nlt"
Mar 17 18:37:24.853991 kubelet[1923]: I0317 18:37:24.853711 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69c175fd-470b-4855-8623-bbc31819fe03-config-volume\") pod \"coredns-668d6bf9bc-grcd4\" (UID: \"69c175fd-470b-4855-8623-bbc31819fe03\") " pod="kube-system/coredns-668d6bf9bc-grcd4"
Mar 17 18:37:24.884505 kubelet[1923]: I0317 18:37:24.884434 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c75ds" podStartSLOduration=12.3632838 podStartE2EDuration="31.884414696s" podCreationTimestamp="2025-03-17 18:36:53 +0000 UTC" firstStartedPulling="2025-03-17 18:36:55.986224111 +0000 UTC m=+8.575053254" lastFinishedPulling="2025-03-17 18:37:15.507355017 +0000 UTC m=+28.096184150" observedRunningTime="2025-03-17 18:37:24.811903215 +0000 UTC m=+37.400732378" watchObservedRunningTime="2025-03-17 18:37:24.884414696 +0000 UTC m=+37.473243839"
Mar 17 18:37:25.121117 kubelet[1923]: E0317 18:37:25.120998 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:25.126180 kubelet[1923]: E0317 18:37:25.126150 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:25.132863 env[1210]: time="2025-03-17T18:37:25.132812449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f5nlt,Uid:ae31c7c3-bea8-41cf-8a60-49e6b9541d74,Namespace:kube-system,Attempt:0,}"
Mar 17 18:37:25.132863 env[1210]: time="2025-03-17T18:37:25.132817889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grcd4,Uid:69c175fd-470b-4855-8623-bbc31819fe03,Namespace:kube-system,Attempt:0,}"
Mar 17 18:37:25.579210 kubelet[1923]: E0317 18:37:25.579177 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:26.438395 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:43358.service.
Mar 17 18:37:26.470204 sshd[2785]: Accepted publickey for core from 10.0.0.1 port 43358 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:26.472014 sshd[2785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:26.477776 systemd[1]: Started session-7.scope.
Mar 17 18:37:26.478817 systemd-logind[1196]: New session 7 of user core.
Mar 17 18:37:26.486197 systemd-networkd[1033]: cilium_host: Link UP
Mar 17 18:37:26.487109 systemd-networkd[1033]: cilium_net: Link UP
Mar 17 18:37:26.491815 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Mar 17 18:37:26.491879 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Mar 17 18:37:26.490496 systemd-networkd[1033]: cilium_net: Gained carrier
Mar 17 18:37:26.490661 systemd-networkd[1033]: cilium_host: Gained carrier
Mar 17 18:37:26.490780 systemd-networkd[1033]: cilium_net: Gained IPv6LL
Mar 17 18:37:26.490910 systemd-networkd[1033]: cilium_host: Gained IPv6LL
Mar 17 18:37:26.583647 kubelet[1923]: E0317 18:37:26.583622 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:26.584272 systemd-networkd[1033]: cilium_vxlan: Link UP
Mar 17 18:37:26.584277 systemd-networkd[1033]: cilium_vxlan: Gained carrier
Mar 17 18:37:26.611070 sshd[2785]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:26.613357 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:43358.service: Deactivated successfully.
Mar 17 18:37:26.614004 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 18:37:26.614549 systemd-logind[1196]: Session 7 logged out. Waiting for processes to exit.
Mar 17 18:37:26.615238 systemd-logind[1196]: Removed session 7.
Mar 17 18:37:26.772799 kernel: NET: Registered PF_ALG protocol family
Mar 17 18:37:27.331159 systemd-networkd[1033]: lxc_health: Link UP
Mar 17 18:37:27.342462 systemd-networkd[1033]: lxc_health: Gained carrier
Mar 17 18:37:27.342788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:37:27.775973 systemd-networkd[1033]: cilium_vxlan: Gained IPv6LL
Mar 17 18:37:27.808934 systemd-networkd[1033]: lxcb6c15f845460: Link UP
Mar 17 18:37:27.819018 kernel: eth0: renamed from tmpca3f0
Mar 17 18:37:27.831841 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Mar 17 18:37:27.832308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb6c15f845460: link becomes ready
Mar 17 18:37:27.832876 systemd-networkd[1033]: lxcb6c15f845460: Gained carrier
Mar 17 18:37:27.834337 systemd-networkd[1033]: lxc426a9f286e39: Link UP
Mar 17 18:37:27.843810 kernel: eth0: renamed from tmpfdf56
Mar 17 18:37:27.850670 systemd-networkd[1033]: lxc426a9f286e39: Gained carrier
Mar 17 18:37:27.850850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc426a9f286e39: link becomes ready
Mar 17 18:37:28.550883 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Mar 17 18:37:28.590825 kubelet[1923]: E0317 18:37:28.590789 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:28.992017 systemd-networkd[1033]: lxc426a9f286e39: Gained IPv6LL
Mar 17 18:37:29.592236 kubelet[1923]: E0317 18:37:29.592195 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:29.824330 systemd-networkd[1033]: lxcb6c15f845460: Gained IPv6LL
Mar 17 18:37:30.598818 kubelet[1923]: E0317 18:37:30.598753 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:31.622795 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:43370.service.
Mar 17 18:37:31.701244 sshd[3184]: Accepted publickey for core from 10.0.0.1 port 43370 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:31.703383 sshd[3184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:31.715235 systemd[1]: Started session-8.scope.
Mar 17 18:37:31.717298 systemd-logind[1196]: New session 8 of user core.
Mar 17 18:37:31.906540 sshd[3184]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:31.910206 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:43370.service: Deactivated successfully.
Mar 17 18:37:31.911113 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:37:31.912309 systemd-logind[1196]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:37:31.913260 systemd-logind[1196]: Removed session 8.
Mar 17 18:37:32.172086 env[1210]: time="2025-03-17T18:37:32.171748973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:32.172086 env[1210]: time="2025-03-17T18:37:32.171856292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:32.172086 env[1210]: time="2025-03-17T18:37:32.171878586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:32.172086 env[1210]: time="2025-03-17T18:37:32.172029300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea pid=3212 runtime=io.containerd.runc.v2
Mar 17 18:37:32.181611 env[1210]: time="2025-03-17T18:37:32.181455323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:37:32.181611 env[1210]: time="2025-03-17T18:37:32.181577060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:37:32.181611 env[1210]: time="2025-03-17T18:37:32.181611367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:37:32.182402 env[1210]: time="2025-03-17T18:37:32.182109038Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdf568592576b685d1700f1ab4a7b4f52e31b246ef1f43b1909c7838f5d2d563 pid=3226 runtime=io.containerd.runc.v2
Mar 17 18:37:32.198130 systemd[1]: run-containerd-runc-k8s.io-ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea-runc.aqoxYE.mount: Deactivated successfully.
Mar 17 18:37:32.203373 systemd[1]: Started cri-containerd-ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea.scope.
Mar 17 18:37:32.205264 systemd[1]: Started cri-containerd-fdf568592576b685d1700f1ab4a7b4f52e31b246ef1f43b1909c7838f5d2d563.scope.
Mar 17 18:37:32.220513 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:37:32.222553 systemd-resolved[1149]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 17 18:37:32.256314 env[1210]: time="2025-03-17T18:37:32.256261121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-f5nlt,Uid:ae31c7c3-bea8-41cf-8a60-49e6b9541d74,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea\""
Mar 17 18:37:32.257813 kubelet[1923]: E0317 18:37:32.257682 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:32.262857 env[1210]: time="2025-03-17T18:37:32.262786283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-grcd4,Uid:69c175fd-470b-4855-8623-bbc31819fe03,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf568592576b685d1700f1ab4a7b4f52e31b246ef1f43b1909c7838f5d2d563\""
Mar 17 18:37:32.264988 kubelet[1923]: E0317 18:37:32.264930 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:32.268090 env[1210]: time="2025-03-17T18:37:32.268040777Z" level=info msg="CreateContainer within sandbox \"fdf568592576b685d1700f1ab4a7b4f52e31b246ef1f43b1909c7838f5d2d563\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:37:32.268561 env[1210]: time="2025-03-17T18:37:32.268536194Z" level=info msg="CreateContainer within sandbox \"ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 17 18:37:32.318506 env[1210]: time="2025-03-17T18:37:32.318431915Z" level=info msg="CreateContainer within sandbox \"fdf568592576b685d1700f1ab4a7b4f52e31b246ef1f43b1909c7838f5d2d563\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a50de06017d93de502412b764fcda6c2a08deabaf454ddf3d8b0661b3ec4619a\""
Mar 17 18:37:32.319587 env[1210]: time="2025-03-17T18:37:32.319516852Z" level=info msg="StartContainer for \"a50de06017d93de502412b764fcda6c2a08deabaf454ddf3d8b0661b3ec4619a\""
Mar 17 18:37:32.324720 env[1210]: time="2025-03-17T18:37:32.324656019Z" level=info msg="CreateContainer within sandbox \"ca3f089bb31e0583b8208bf204cefb94101979742769223bfca77219569aa6ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ba053433aaeb9993e23b977af091613bb84a145cffdc5b301a2ad7964173af2\""
Mar 17 18:37:32.326237 env[1210]: time="2025-03-17T18:37:32.325590543Z" level=info msg="StartContainer for \"0ba053433aaeb9993e23b977af091613bb84a145cffdc5b301a2ad7964173af2\""
Mar 17 18:37:32.339533 systemd[1]: Started cri-containerd-a50de06017d93de502412b764fcda6c2a08deabaf454ddf3d8b0661b3ec4619a.scope.
Mar 17 18:37:32.346186 systemd[1]: Started cri-containerd-0ba053433aaeb9993e23b977af091613bb84a145cffdc5b301a2ad7964173af2.scope.
Mar 17 18:37:32.382320 env[1210]: time="2025-03-17T18:37:32.382251305Z" level=info msg="StartContainer for \"a50de06017d93de502412b764fcda6c2a08deabaf454ddf3d8b0661b3ec4619a\" returns successfully"
Mar 17 18:37:32.394241 env[1210]: time="2025-03-17T18:37:32.394165564Z" level=info msg="StartContainer for \"0ba053433aaeb9993e23b977af091613bb84a145cffdc5b301a2ad7964173af2\" returns successfully"
Mar 17 18:37:32.606294 kubelet[1923]: E0317 18:37:32.606136 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:32.610989 kubelet[1923]: E0317 18:37:32.610934 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:32.809517 kubelet[1923]: I0317 18:37:32.809438 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-grcd4" podStartSLOduration=39.809419137 podStartE2EDuration="39.809419137s" podCreationTimestamp="2025-03-17 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:37:32.629386008 +0000 UTC m=+45.218215181" watchObservedRunningTime="2025-03-17 18:37:32.809419137 +0000 UTC m=+45.398248270"
Mar 17 18:37:33.613315 kubelet[1923]: E0317 18:37:33.613278 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:33.613878 kubelet[1923]: E0317 18:37:33.613361 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:33.627202 kubelet[1923]: I0317 18:37:33.627117 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-f5nlt" podStartSLOduration=40.627093709 podStartE2EDuration="40.627093709s" podCreationTimestamp="2025-03-17 18:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:37:32.810658104 +0000 UTC m=+45.399487248" watchObservedRunningTime="2025-03-17 18:37:33.627093709 +0000 UTC m=+46.215922872"
Mar 17 18:37:34.619913 kubelet[1923]: E0317 18:37:34.619423 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:34.622247 kubelet[1923]: E0317 18:37:34.622002 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:37:36.911842 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:47890.service.
Mar 17 18:37:36.943092 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 47890 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:36.944373 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:36.948193 systemd-logind[1196]: New session 9 of user core.
Mar 17 18:37:36.949128 systemd[1]: Started session-9.scope.
Mar 17 18:37:37.061054 sshd[3368]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:37.063117 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:47890.service: Deactivated successfully.
Mar 17 18:37:37.063951 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:37:37.064687 systemd-logind[1196]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:37:37.065523 systemd-logind[1196]: Removed session 9.
Mar 17 18:37:42.067482 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:47906.service.
Mar 17 18:37:42.102981 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:42.104661 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:42.109357 systemd-logind[1196]: New session 10 of user core.
Mar 17 18:37:42.110462 systemd[1]: Started session-10.scope.
Mar 17 18:37:42.234894 sshd[3382]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:42.237260 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:47906.service: Deactivated successfully.
Mar 17 18:37:42.238237 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:37:42.239285 systemd-logind[1196]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:37:42.240137 systemd-logind[1196]: Removed session 10.
Mar 17 18:37:47.240508 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:55482.service.
Mar 17 18:37:47.273984 sshd[3397]: Accepted publickey for core from 10.0.0.1 port 55482 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:47.275537 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:47.279678 systemd-logind[1196]: New session 11 of user core.
Mar 17 18:37:47.280994 systemd[1]: Started session-11.scope.
Mar 17 18:37:47.395534 sshd[3397]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:47.399187 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:55482.service: Deactivated successfully.
Mar 17 18:37:47.399968 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:37:47.401140 systemd-logind[1196]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:37:47.402617 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:55490.service.
Mar 17 18:37:47.404213 systemd-logind[1196]: Removed session 11.
Mar 17 18:37:47.434311 sshd[3411]: Accepted publickey for core from 10.0.0.1 port 55490 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:47.435582 sshd[3411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:47.439881 systemd-logind[1196]: New session 12 of user core.
Mar 17 18:37:47.440959 systemd[1]: Started session-12.scope.
Mar 17 18:37:47.615553 sshd[3411]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:47.617923 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:55498.service.
Mar 17 18:37:47.621078 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:55490.service: Deactivated successfully.
Mar 17 18:37:47.622116 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:37:47.624731 systemd-logind[1196]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:37:47.627283 systemd-logind[1196]: Removed session 12.
Mar 17 18:37:47.669211 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 55498 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:47.670794 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:47.675639 systemd-logind[1196]: New session 13 of user core.
Mar 17 18:37:47.676885 systemd[1]: Started session-13.scope.
Mar 17 18:37:47.797130 sshd[3423]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:47.799966 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:55498.service: Deactivated successfully.
Mar 17 18:37:47.800932 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:37:47.801551 systemd-logind[1196]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:37:47.802305 systemd-logind[1196]: Removed session 13.
Mar 17 18:37:52.801662 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:55502.service.
Mar 17 18:37:52.834500 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 55502 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:52.835719 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:52.839032 systemd-logind[1196]: New session 14 of user core.
Mar 17 18:37:52.839868 systemd[1]: Started session-14.scope.
Mar 17 18:37:52.944723 sshd[3437]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:52.947187 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:55502.service: Deactivated successfully.
Mar 17 18:37:52.947955 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:37:52.948649 systemd-logind[1196]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:37:52.949402 systemd-logind[1196]: Removed session 14.
Mar 17 18:37:57.948594 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:34768.service.
Mar 17 18:37:58.078346 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 34768 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:37:58.079773 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:37:58.083438 systemd-logind[1196]: New session 15 of user core.
Mar 17 18:37:58.084371 systemd[1]: Started session-15.scope.
Mar 17 18:37:58.192937 sshd[3453]: pam_unix(sshd:session): session closed for user core
Mar 17 18:37:58.195515 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:34768.service: Deactivated successfully.
Mar 17 18:37:58.196212 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:37:58.196726 systemd-logind[1196]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:37:58.197383 systemd-logind[1196]: Removed session 15.
Mar 17 18:38:02.495722 kubelet[1923]: E0317 18:38:02.495651 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:03.199607 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:34784.service.
Mar 17 18:38:03.231969 sshd[3467]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:03.233552 sshd[3467]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:03.237901 systemd-logind[1196]: New session 16 of user core.
Mar 17 18:38:03.239077 systemd[1]: Started session-16.scope.
Mar 17 18:38:03.360537 sshd[3467]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:03.363782 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:34784.service: Deactivated successfully.
Mar 17 18:38:03.364315 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:38:03.365053 systemd-logind[1196]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:38:03.366213 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:34794.service.
Mar 17 18:38:03.367054 systemd-logind[1196]: Removed session 16.
Mar 17 18:38:03.405204 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:03.406603 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:03.410580 systemd-logind[1196]: New session 17 of user core.
Mar 17 18:38:03.411389 systemd[1]: Started session-17.scope.
Mar 17 18:38:03.742448 sshd[3480]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:03.745995 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:34794.service: Deactivated successfully.
Mar 17 18:38:03.746726 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:38:03.747352 systemd-logind[1196]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:38:03.749053 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:34802.service.
Mar 17 18:38:03.750008 systemd-logind[1196]: Removed session 17.
Mar 17 18:38:03.783117 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 34802 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:03.785330 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:03.790393 systemd-logind[1196]: New session 18 of user core.
Mar 17 18:38:03.791503 systemd[1]: Started session-18.scope.
Mar 17 18:38:04.496241 kubelet[1923]: E0317 18:38:04.496193 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:05.372632 sshd[3491]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:05.375616 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:34816.service.
Mar 17 18:38:05.376134 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:34802.service: Deactivated successfully.
Mar 17 18:38:05.376869 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:38:05.377518 systemd-logind[1196]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:38:05.378514 systemd-logind[1196]: Removed session 18.
Mar 17 18:38:05.408602 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:05.410140 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:05.416648 systemd-logind[1196]: New session 19 of user core.
Mar 17 18:38:05.417405 systemd[1]: Started session-19.scope.
Mar 17 18:38:05.654882 sshd[3512]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:05.659092 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:46678.service.
Mar 17 18:38:05.659568 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:34816.service: Deactivated successfully.
Mar 17 18:38:05.662517 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:38:05.663184 systemd-logind[1196]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:38:05.665424 systemd-logind[1196]: Removed session 19.
Mar 17 18:38:05.689840 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:05.691348 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:05.695328 systemd-logind[1196]: New session 20 of user core.
Mar 17 18:38:05.696081 systemd[1]: Started session-20.scope.
Mar 17 18:38:05.822166 sshd[3525]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:05.824806 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:46678.service: Deactivated successfully.
Mar 17 18:38:05.825675 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:38:05.826558 systemd-logind[1196]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:38:05.827341 systemd-logind[1196]: Removed session 20.
Mar 17 18:38:10.827830 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:46680.service.
Mar 17 18:38:10.859471 sshd[3539]: Accepted publickey for core from 10.0.0.1 port 46680 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:10.860834 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:10.864474 systemd-logind[1196]: New session 21 of user core.
Mar 17 18:38:10.865433 systemd[1]: Started session-21.scope.
Mar 17 18:38:10.972033 sshd[3539]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:10.974994 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:46680.service: Deactivated successfully.
Mar 17 18:38:10.975924 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 18:38:10.976482 systemd-logind[1196]: Session 21 logged out. Waiting for processes to exit.
Mar 17 18:38:10.977290 systemd-logind[1196]: Removed session 21.
Mar 17 18:38:13.495921 kubelet[1923]: E0317 18:38:13.495862 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:15.976648 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:56098.service.
Mar 17 18:38:16.008244 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 56098 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:16.009440 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:16.012621 systemd-logind[1196]: New session 22 of user core.
Mar 17 18:38:16.013407 systemd[1]: Started session-22.scope.
Mar 17 18:38:16.112543 sshd[3554]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:16.115365 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:56098.service: Deactivated successfully.
Mar 17 18:38:16.116245 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 18:38:16.116929 systemd-logind[1196]: Session 22 logged out. Waiting for processes to exit.
Mar 17 18:38:16.117623 systemd-logind[1196]: Removed session 22.
Mar 17 18:38:16.495718 kubelet[1923]: E0317 18:38:16.495667 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:21.116908 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:56108.service.
Mar 17 18:38:21.152018 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 56108 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:21.153249 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:21.157393 systemd-logind[1196]: New session 23 of user core.
Mar 17 18:38:21.158520 systemd[1]: Started session-23.scope.
Mar 17 18:38:21.275046 sshd[3568]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:21.277092 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:56108.service: Deactivated successfully.
Mar 17 18:38:21.277866 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 18:38:21.278474 systemd-logind[1196]: Session 23 logged out. Waiting for processes to exit.
Mar 17 18:38:21.279314 systemd-logind[1196]: Removed session 23.
Mar 17 18:38:26.281944 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:44810.service.
Mar 17 18:38:26.315716 sshd[3581]: Accepted publickey for core from 10.0.0.1 port 44810 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:26.317145 sshd[3581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:26.321447 systemd-logind[1196]: New session 24 of user core.
Mar 17 18:38:26.322321 systemd[1]: Started session-24.scope.
Mar 17 18:38:26.451956 sshd[3581]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:26.456336 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:44810.service: Deactivated successfully.
Mar 17 18:38:26.456984 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:38:26.457815 systemd-logind[1196]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:38:26.459058 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:44812.service.
Mar 17 18:38:26.460277 systemd-logind[1196]: Removed session 24.
Mar 17 18:38:26.496945 sshd[3594]: Accepted publickey for core from 10.0.0.1 port 44812 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:26.498679 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:26.503809 systemd-logind[1196]: New session 25 of user core.
Mar 17 18:38:26.504955 systemd[1]: Started session-25.scope.
Mar 17 18:38:28.261001 env[1210]: time="2025-03-17T18:38:28.260925424Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:38:28.266088 env[1210]: time="2025-03-17T18:38:28.266059223Z" level=info msg="StopContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" with timeout 2 (s)"
Mar 17 18:38:28.266313 env[1210]: time="2025-03-17T18:38:28.266278742Z" level=info msg="Stop container \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" with signal terminated"
Mar 17 18:38:28.271542 systemd-networkd[1033]: lxc_health: Link DOWN
Mar 17 18:38:28.271550 systemd-networkd[1033]: lxc_health: Lost carrier
Mar 17 18:38:28.311138 systemd[1]: cri-containerd-13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228.scope: Deactivated successfully.
Mar 17 18:38:28.311693 systemd[1]: cri-containerd-13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228.scope: Consumed 7.192s CPU time.
Mar 17 18:38:28.329183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228-rootfs.mount: Deactivated successfully.
Mar 17 18:38:28.352540 env[1210]: time="2025-03-17T18:38:28.352425126Z" level=info msg="StopContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" with timeout 30 (s)"
Mar 17 18:38:28.352877 env[1210]: time="2025-03-17T18:38:28.352812547Z" level=info msg="Stop container \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" with signal terminated"
Mar 17 18:38:28.361270 systemd[1]: cri-containerd-06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa.scope: Deactivated successfully.
Mar 17 18:38:28.379329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa-rootfs.mount: Deactivated successfully.
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569699404Z" level=info msg="shim disconnected" id=06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569778926Z" level=warning msg="cleaning up after shim disconnected" id=06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa namespace=k8s.io
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569792852Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569782673Z" level=info msg="shim disconnected" id=13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569834462Z" level=warning msg="cleaning up after shim disconnected" id=13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228 namespace=k8s.io
Mar 17 18:38:28.570148 env[1210]: time="2025-03-17T18:38:28.569846575Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:28.578079 env[1210]: time="2025-03-17T18:38:28.578023945Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3664 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:28.578969 env[1210]: time="2025-03-17T18:38:28.578918927Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3663 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:28.617165 env[1210]: time="2025-03-17T18:38:28.617093930Z" level=info msg="StopContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" returns successfully"
Mar 17 18:38:28.617845 env[1210]: time="2025-03-17T18:38:28.617808898Z" level=info msg="StopPodSandbox for \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\""
Mar 17 18:38:28.617913 env[1210]: time="2025-03-17T18:38:28.617889522Z" level=info msg="Container to stop \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.617948 env[1210]: time="2025-03-17T18:38:28.617915582Z" level=info msg="Container to stop \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.617948 env[1210]: time="2025-03-17T18:38:28.617932544Z" level=info msg="Container to stop \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.617996 env[1210]: time="2025-03-17T18:38:28.617947743Z" level=info msg="Container to stop \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.617996 env[1210]: time="2025-03-17T18:38:28.617962672Z" level=info msg="Container to stop \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.620078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f-shm.mount: Deactivated successfully.
Mar 17 18:38:28.621437 env[1210]: time="2025-03-17T18:38:28.621392522Z" level=info msg="StopContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" returns successfully"
Mar 17 18:38:28.621885 env[1210]: time="2025-03-17T18:38:28.621848754Z" level=info msg="StopPodSandbox for \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\""
Mar 17 18:38:28.622028 env[1210]: time="2025-03-17T18:38:28.621902477Z" level=info msg="Container to stop \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:38:28.623501 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00-shm.mount: Deactivated successfully.
Mar 17 18:38:28.626016 systemd[1]: cri-containerd-f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f.scope: Deactivated successfully.
Mar 17 18:38:28.629276 systemd[1]: cri-containerd-058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00.scope: Deactivated successfully.
Mar 17 18:38:28.646594 env[1210]: time="2025-03-17T18:38:28.646540770Z" level=info msg="shim disconnected" id=f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f
Mar 17 18:38:28.647616 env[1210]: time="2025-03-17T18:38:28.647593664Z" level=warning msg="cleaning up after shim disconnected" id=f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f namespace=k8s.io
Mar 17 18:38:28.647709 env[1210]: time="2025-03-17T18:38:28.647687453Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:28.651161 env[1210]: time="2025-03-17T18:38:28.651118365Z" level=info msg="shim disconnected" id=058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00
Mar 17 18:38:28.651402 env[1210]: time="2025-03-17T18:38:28.651356321Z" level=warning msg="cleaning up after shim disconnected" id=058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00 namespace=k8s.io
Mar 17 18:38:28.651402 env[1210]: time="2025-03-17T18:38:28.651377921Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:28.655573 env[1210]: time="2025-03-17T18:38:28.655532017Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:28.655960 env[1210]: time="2025-03-17T18:38:28.655933334Z" level=info msg="TearDown network for sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" successfully"
Mar 17 18:38:28.656011 env[1210]: time="2025-03-17T18:38:28.655960646Z" level=info msg="StopPodSandbox for \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" returns successfully"
Mar 17 18:38:28.665160 env[1210]: time="2025-03-17T18:38:28.665096590Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3734 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:28.665530 env[1210]: time="2025-03-17T18:38:28.665493669Z" level=info msg="TearDown network for sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" successfully"
Mar 17 18:38:28.665592 env[1210]: time="2025-03-17T18:38:28.665529317Z" level=info msg="StopPodSandbox for \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" returns successfully"
Mar 17 18:38:28.724468 kubelet[1923]: I0317 18:38:28.724433 1923 scope.go:117] "RemoveContainer" containerID="06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa"
Mar 17 18:38:28.725747 env[1210]: time="2025-03-17T18:38:28.725716993Z" level=info msg="RemoveContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\""
Mar 17 18:38:28.729219 env[1210]: time="2025-03-17T18:38:28.729190455Z" level=info msg="RemoveContainer for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" returns successfully"
Mar 17 18:38:28.729462 kubelet[1923]: I0317 18:38:28.729439 1923 scope.go:117] "RemoveContainer" containerID="06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa"
Mar 17 18:38:28.730297 env[1210]: time="2025-03-17T18:38:28.730220165Z" level=error msg="ContainerStatus for \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\": not found"
Mar 17 18:38:28.730418 kubelet[1923]: E0317 18:38:28.730392 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\": not found" containerID="06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa"
Mar 17 18:38:28.730499 kubelet[1923]: I0317 18:38:28.730428 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa"} err="failed to get container status \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"06fd755f6fa6a8e196f8c5ff9842bbced969161c10c1c8bb4617c26193118aaa\": not found"
Mar 17 18:38:28.730538 kubelet[1923]: I0317 18:38:28.730501 1923 scope.go:117] "RemoveContainer" containerID="13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228"
Mar 17 18:38:28.731364 env[1210]: time="2025-03-17T18:38:28.731339906Z" level=info msg="RemoveContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\""
Mar 17 18:38:28.734831 env[1210]: time="2025-03-17T18:38:28.734803601Z" level=info msg="RemoveContainer for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" returns successfully"
Mar 17 18:38:28.735038 kubelet[1923]: I0317 18:38:28.735001 1923 scope.go:117] "RemoveContainer" containerID="17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59"
Mar 17 18:38:28.736000 env[1210]: time="2025-03-17T18:38:28.735979400Z" level=info msg="RemoveContainer for \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\""
Mar 17 18:38:28.738918 env[1210]: time="2025-03-17T18:38:28.738886991Z" level=info msg="RemoveContainer for \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\" returns successfully"
Mar 17 18:38:28.739065 kubelet[1923]: I0317 18:38:28.739037 1923 scope.go:117] "RemoveContainer" containerID="b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08"
Mar 17 18:38:28.740010 env[1210]: time="2025-03-17T18:38:28.739987927Z" level=info msg="RemoveContainer for \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\""
Mar 17 18:38:28.743601 env[1210]: time="2025-03-17T18:38:28.743549588Z" level=info msg="RemoveContainer for \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\" returns successfully"
Mar 17 18:38:28.743820 kubelet[1923]: I0317 18:38:28.743797 1923 scope.go:117] "RemoveContainer" containerID="201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136"
Mar 17 18:38:28.744966 env[1210]: time="2025-03-17T18:38:28.744942062Z" level=info msg="RemoveContainer for \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\""
Mar 17 18:38:28.748021 env[1210]: time="2025-03-17T18:38:28.747990563Z" level=info msg="RemoveContainer for \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\" returns successfully"
Mar 17 18:38:28.748164 kubelet[1923]: I0317 18:38:28.748137 1923 scope.go:117] "RemoveContainer" containerID="0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd"
Mar 17 18:38:28.749059 env[1210]: time="2025-03-17T18:38:28.749033177Z" level=info msg="RemoveContainer for \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\""
Mar 17 18:38:28.751827 env[1210]: time="2025-03-17T18:38:28.751792284Z" level=info msg="RemoveContainer for \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\" returns successfully"
Mar 17 18:38:28.751986 kubelet[1923]: I0317 18:38:28.751960 1923 scope.go:117] "RemoveContainer" containerID="13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228"
Mar 17 18:38:28.752244 env[1210]: time="2025-03-17T18:38:28.752185356Z" level=error msg="ContainerStatus for \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\": not found"
Mar 17 18:38:28.752390 kubelet[1923]: E0317 18:38:28.752360 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\": not found" containerID="13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228"
Mar 17 18:38:28.752424 kubelet[1923]: I0317 18:38:28.752397 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228"} err="failed to get container status \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\": rpc error: code = NotFound desc = an error occurred when try to find container \"13f66c84e183bc18c4c5c05378b457dd4bc4f18c83dcc981f0a331250c64c228\": not found"
Mar 17 18:38:28.752424 kubelet[1923]: I0317 18:38:28.752417 1923 scope.go:117] "RemoveContainer" containerID="17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59"
Mar 17 18:38:28.752619 env[1210]: time="2025-03-17T18:38:28.752570512Z" level=error msg="ContainerStatus for \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\": not found"
Mar 17 18:38:28.752774 kubelet[1923]: E0317 18:38:28.752730 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\": not found" containerID="17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59"
Mar 17 18:38:28.752829 kubelet[1923]: I0317 18:38:28.752774 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59"} err="failed to get container status \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\": rpc error: code = NotFound desc = an error occurred when try to find container \"17a141c1b205d1407dd7ef7baf58f20558e80b9606f4f44f362149235cc7be59\": not found"
Mar 17 18:38:28.752829 kubelet[1923]: I0317 18:38:28.752791 1923 scope.go:117] "RemoveContainer" containerID="b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08"
Mar 17 18:38:28.753076
env[1210]: time="2025-03-17T18:38:28.753008490Z" level=error msg="ContainerStatus for \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\": not found" Mar 17 18:38:28.753184 kubelet[1923]: E0317 18:38:28.753165 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\": not found" containerID="b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08" Mar 17 18:38:28.753217 kubelet[1923]: I0317 18:38:28.753184 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08"} err="failed to get container status \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\": rpc error: code = NotFound desc = an error occurred when try to find container \"b261681836202f1541efd92d9aedc61252a4aa7dda8c066fd789fa5f83f43b08\": not found" Mar 17 18:38:28.753217 kubelet[1923]: I0317 18:38:28.753197 1923 scope.go:117] "RemoveContainer" containerID="201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136" Mar 17 18:38:28.753435 env[1210]: time="2025-03-17T18:38:28.753374880Z" level=error msg="ContainerStatus for \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\": not found" Mar 17 18:38:28.753599 kubelet[1923]: E0317 18:38:28.753529 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\": 
not found" containerID="201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136" Mar 17 18:38:28.753599 kubelet[1923]: I0317 18:38:28.753560 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136"} err="failed to get container status \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\": rpc error: code = NotFound desc = an error occurred when try to find container \"201003498967093403b7a227fb2c4d3b77155c4d1ddbb826e92635e11e5ee136\": not found" Mar 17 18:38:28.753599 kubelet[1923]: I0317 18:38:28.753586 1923 scope.go:117] "RemoveContainer" containerID="0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd" Mar 17 18:38:28.753775 env[1210]: time="2025-03-17T18:38:28.753723366Z" level=error msg="ContainerStatus for \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\": not found" Mar 17 18:38:28.753885 kubelet[1923]: E0317 18:38:28.753862 1923 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\": not found" containerID="0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd" Mar 17 18:38:28.753946 kubelet[1923]: I0317 18:38:28.753885 1923 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd"} err="failed to get container status \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0da890067c44ee830f0ecd7948d1cb3d0042a34bea536364e8b97e30c1f88dcd\": not found" Mar 17 
18:38:28.787254 kubelet[1923]: I0317 18:38:28.787175 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-etc-cni-netd\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787254 kubelet[1923]: I0317 18:38:28.787243 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-hostproc\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787275 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/baf52f13-97f9-4b2f-a06e-830bfc0f768c-kube-api-access-bb5cz\") pod \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\" (UID: \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787302 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-kernel\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787320 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-net\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787340 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-lib-modules\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787362 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-hubble-tls\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787450 kubelet[1923]: I0317 18:38:28.787352 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787610 kubelet[1923]: I0317 18:38:28.787407 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787610 kubelet[1923]: I0317 18:38:28.787380 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-cgroup\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787610 kubelet[1923]: I0317 18:38:28.787441 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787610 kubelet[1923]: I0317 18:38:28.787461 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787610 kubelet[1923]: I0317 18:38:28.787478 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787494 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7snm\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-kube-api-access-m7snm\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787545 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-run\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787572 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cni-path\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787592 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-bpf-maps\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787618 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04b81920-4c44-4347-a538-5c6d5477fa7f-clustermesh-secrets\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787759 kubelet[1923]: I0317 18:38:28.787659 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-xtables-lock\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787659 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-hostproc" (OuterVolumeSpecName: "hostproc") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787686 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/baf52f13-97f9-4b2f-a06e-830bfc0f768c-cilium-config-path\") pod \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\" (UID: \"baf52f13-97f9-4b2f-a06e-830bfc0f768c\") " Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787695 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cni-path" (OuterVolumeSpecName: "cni-path") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787722 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-config-path\") pod \"04b81920-4c44-4347-a538-5c6d5477fa7f\" (UID: \"04b81920-4c44-4347-a538-5c6d5477fa7f\") " Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787842 1923 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.787937 kubelet[1923]: I0317 18:38:28.787858 1923 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.788092 kubelet[1923]: I0317 18:38:28.787889 1923 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.788092 kubelet[1923]: I0317 18:38:28.787901 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.788092 kubelet[1923]: I0317 18:38:28.787912 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.788092 kubelet[1923]: I0317 18:38:28.787924 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-host-proc-sys-net\") on node 
\"localhost\" DevicePath \"\"" Mar 17 18:38:28.788092 kubelet[1923]: I0317 18:38:28.787934 1923 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.790478 kubelet[1923]: I0317 18:38:28.790443 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.790552 kubelet[1923]: I0317 18:38:28.790480 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.790552 kubelet[1923]: I0317 18:38:28.790497 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:38:28.790552 kubelet[1923]: I0317 18:38:28.790506 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:38:28.791209 kubelet[1923]: I0317 18:38:28.791153 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-kube-api-access-m7snm" (OuterVolumeSpecName: "kube-api-access-m7snm") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "kube-api-access-m7snm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:38:28.791368 kubelet[1923]: I0317 18:38:28.791330 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:38:28.791596 kubelet[1923]: I0317 18:38:28.791570 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf52f13-97f9-4b2f-a06e-830bfc0f768c-kube-api-access-bb5cz" (OuterVolumeSpecName: "kube-api-access-bb5cz") pod "baf52f13-97f9-4b2f-a06e-830bfc0f768c" (UID: "baf52f13-97f9-4b2f-a06e-830bfc0f768c"). InnerVolumeSpecName "kube-api-access-bb5cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:38:28.792832 kubelet[1923]: I0317 18:38:28.792796 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baf52f13-97f9-4b2f-a06e-830bfc0f768c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "baf52f13-97f9-4b2f-a06e-830bfc0f768c" (UID: "baf52f13-97f9-4b2f-a06e-830bfc0f768c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:38:28.793063 kubelet[1923]: I0317 18:38:28.793036 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b81920-4c44-4347-a538-5c6d5477fa7f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "04b81920-4c44-4347-a538-5c6d5477fa7f" (UID: "04b81920-4c44-4347-a538-5c6d5477fa7f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888504 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m7snm\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-kube-api-access-m7snm\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888559 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888567 1923 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888574 1923 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/04b81920-4c44-4347-a538-5c6d5477fa7f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888582 1923 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04b81920-4c44-4347-a538-5c6d5477fa7f-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888588 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/baf52f13-97f9-4b2f-a06e-830bfc0f768c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888595 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/04b81920-4c44-4347-a538-5c6d5477fa7f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.888656 kubelet[1923]: I0317 18:38:28.888602 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bb5cz\" (UniqueName: \"kubernetes.io/projected/baf52f13-97f9-4b2f-a06e-830bfc0f768c-kube-api-access-bb5cz\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:28.889031 kubelet[1923]: I0317 18:38:28.888608 1923 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/04b81920-4c44-4347-a538-5c6d5477fa7f-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:38:29.028029 systemd[1]: Removed slice kubepods-besteffort-podbaf52f13_97f9_4b2f_a06e_830bfc0f768c.slice. Mar 17 18:38:29.031317 systemd[1]: Removed slice kubepods-burstable-pod04b81920_4c44_4347_a538_5c6d5477fa7f.slice. Mar 17 18:38:29.031386 systemd[1]: kubepods-burstable-pod04b81920_4c44_4347_a538_5c6d5477fa7f.slice: Consumed 7.290s CPU time. Mar 17 18:38:29.244432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00-rootfs.mount: Deactivated successfully. Mar 17 18:38:29.244573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f-rootfs.mount: Deactivated successfully. Mar 17 18:38:29.244644 systemd[1]: var-lib-kubelet-pods-04b81920\x2d4c44\x2d4347\x2da538\x2d5c6d5477fa7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm7snm.mount: Deactivated successfully. 
Mar 17 18:38:29.244724 systemd[1]: var-lib-kubelet-pods-baf52f13\x2d97f9\x2d4b2f\x2da06e\x2d830bfc0f768c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbb5cz.mount: Deactivated successfully. Mar 17 18:38:29.244813 systemd[1]: var-lib-kubelet-pods-04b81920\x2d4c44\x2d4347\x2da538\x2d5c6d5477fa7f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:38:29.244892 systemd[1]: var-lib-kubelet-pods-04b81920\x2d4c44\x2d4347\x2da538\x2d5c6d5477fa7f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:38:29.498017 kubelet[1923]: I0317 18:38:29.497893 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b81920-4c44-4347-a538-5c6d5477fa7f" path="/var/lib/kubelet/pods/04b81920-4c44-4347-a538-5c6d5477fa7f/volumes" Mar 17 18:38:29.498492 kubelet[1923]: I0317 18:38:29.498405 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf52f13-97f9-4b2f-a06e-830bfc0f768c" path="/var/lib/kubelet/pods/baf52f13-97f9-4b2f-a06e-830bfc0f768c/volumes" Mar 17 18:38:29.879776 sshd[3594]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:29.883190 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:44812.service: Deactivated successfully. Mar 17 18:38:29.883829 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:38:29.884369 systemd-logind[1196]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:38:29.885600 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:44818.service. Mar 17 18:38:29.886779 systemd-logind[1196]: Removed session 25. Mar 17 18:38:29.917259 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 44818 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:38:29.918651 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:38:29.922457 systemd-logind[1196]: New session 26 of user core. Mar 17 18:38:29.923393 systemd[1]: Started session-26.scope. 
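Editor's note (not part of the log): the kubelet entries above show a benign pattern — each `RemoveContainer` succeeds, then a follow-up `ContainerStatus` lookup for the same ID returns `NotFound`, which the kubelet logs as an error but treats as "already gone". A small sketch for summarizing that pattern from journal text like this excerpt; the helper name and the event labels are illustrative, not from any real tool:

```python
import re

# Containerd container IDs are 64 hex digits; in these kubelet log lines they
# appear either as \"<id>\" (escaped inside a structured message) or "<id>".
CID = re.compile(r'\\?"([0-9a-f]{64})\\?"')

def summarize_removals(journal_text):
    """Map container ID -> ordered list of observed events.

    'remove_ok'        -> a "RemoveContainer for ... returns successfully" line
    'status_not_found' -> a ContainerStatus lookup that returned NotFound
    """
    events = {}
    for line in journal_text.splitlines():
        m = CID.search(line)
        if not m:
            continue
        cid = m.group(1)
        if "RemoveContainer for" in line and "returns successfully" in line:
            events.setdefault(cid, []).append("remove_ok")
        elif "ContainerStatus" in line and "not found" in line:
            events.setdefault(cid, []).append("status_not_found")
    return events
```

On the excerpt above, every removed ID ends up with `remove_ok` followed by `status_not_found` — i.e. the delete raced its own status check, not a real failure.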
Mar 17 18:38:30.578373 sshd[3758]: pam_unix(sshd:session): session closed for user core Mar 17 18:38:30.583115 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:44818.service: Deactivated successfully. Mar 17 18:38:30.583924 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 18:38:30.585150 systemd-logind[1196]: Session 26 logged out. Waiting for processes to exit. Mar 17 18:38:30.587246 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:44826.service. Mar 17 18:38:30.594305 systemd-logind[1196]: Removed session 26. Mar 17 18:38:30.618344 kubelet[1923]: I0317 18:38:30.618301 1923 memory_manager.go:355] "RemoveStaleState removing state" podUID="04b81920-4c44-4347-a538-5c6d5477fa7f" containerName="cilium-agent" Mar 17 18:38:30.618884 kubelet[1923]: I0317 18:38:30.618869 1923 memory_manager.go:355] "RemoveStaleState removing state" podUID="baf52f13-97f9-4b2f-a06e-830bfc0f768c" containerName="cilium-operator" Mar 17 18:38:30.624234 systemd[1]: Created slice kubepods-burstable-podaa9822d3_4703_45ff_9267_18ea23e9c6a1.slice. Mar 17 18:38:30.636504 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 44826 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo Mar 17 18:38:30.637746 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:38:30.644395 systemd[1]: Started session-27.scope. Mar 17 18:38:30.645315 systemd-logind[1196]: New session 27 of user core. 
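Editor's note (not part of the log): the `Removed slice kubepods-burstable-pod04b81920_4c44_4347_a538_5c6d5477fa7f.slice` and `Created slice kubepods-burstable-podaa9822d3_...` entries use the kubelet's systemd cgroup naming, where the pod UID's dashes are escaped to underscores and guaranteed-QoS pods omit the QoS segment. A sketch (function name illustrative) that recovers the QoS class and pod UID from such a slice name:

```python
import re

# kubepods-<qos>-pod<uid-with-underscores>.slice, where the <qos>- segment is
# absent for guaranteed pods (they sit directly under kubepods.slice).
SLICE = re.compile(r"kubepods-(?:(besteffort|burstable)-)?pod([0-9a-f_]+)\.slice")

def slice_to_pod(slice_name):
    """Return (qos_class, pod_uid) for a kubepods slice name, else None."""
    m = SLICE.fullmatch(slice_name)
    if not m:
        return None
    qos = m.group(1) or "guaranteed"
    uid = m.group(2).replace("_", "-")  # undo systemd's '-' escaping
    return qos, uid
```

Applied to the slice removed above, this yields the same pod UID (`04b81920-4c44-4347-a538-5c6d5477fa7f`) that the volume-reconciler entries reference.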
Mar 17 18:38:30.702090 kubelet[1923]: I0317 18:38:30.702047 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-etc-cni-netd\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702369 kubelet[1923]: I0317 18:38:30.702343 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-kernel\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702448 kubelet[1923]: I0317 18:38:30.702378 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hostproc\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702448 kubelet[1923]: I0317 18:38:30.702412 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-lib-modules\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702448 kubelet[1923]: I0317 18:38:30.702439 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-clustermesh-secrets\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702551 kubelet[1923]: I0317 18:38:30.702459 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-ipsec-secrets\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702551 kubelet[1923]: I0317 18:38:30.702485 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ks9p\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-kube-api-access-6ks9p\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702551 kubelet[1923]: I0317 18:38:30.702503 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-bpf-maps\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702551 kubelet[1923]: I0317 18:38:30.702519 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cni-path\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702551 kubelet[1923]: I0317 18:38:30.702535 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-xtables-lock\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702660 kubelet[1923]: I0317 18:38:30.702561 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-config-path\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702660 kubelet[1923]: I0317 18:38:30.702582 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hubble-tls\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702660 kubelet[1923]: I0317 18:38:30.702599 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-net\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702660 kubelet[1923]: I0317 18:38:30.702618 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-cgroup\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.702660 kubelet[1923]: I0317 18:38:30.702636 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-run\") pod \"cilium-9v6xk\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") " pod="kube-system/cilium-9v6xk"
Mar 17 18:38:30.781874 sshd[3770]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:30.785089 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:44826.service: Deactivated successfully.
Mar 17 18:38:30.785586 systemd[1]: session-27.scope: Deactivated successfully.
Mar 17 18:38:30.787673 systemd[1]: Started sshd@27-10.0.0.35:22-10.0.0.1:44828.service.
Mar 17 18:38:30.788324 systemd-logind[1196]: Session 27 logged out. Waiting for processes to exit.
Mar 17 18:38:30.789556 systemd-logind[1196]: Removed session 27.
Mar 17 18:38:30.797920 kubelet[1923]: E0317 18:38:30.797863 1923 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6ks9p lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-9v6xk" podUID="aa9822d3-4703-45ff-9267-18ea23e9c6a1"
Mar 17 18:38:30.834458 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 44828 ssh2: RSA SHA256:EcJpbXadXymLrINQtrmLSqTXC2wy0UoSwO9MmZb5CTo
Mar 17 18:38:30.836087 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:38:30.840191 systemd-logind[1196]: New session 28 of user core.
Mar 17 18:38:30.841355 systemd[1]: Started session-28.scope.
Mar 17 18:38:31.910781 kubelet[1923]: I0317 18:38:31.910715 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-lib-modules\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910808 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-config-path\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910843 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hostproc\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910838 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910868 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-run\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910890 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-kernel\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911233 kubelet[1923]: I0317 18:38:31.910914 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ks9p\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-kube-api-access-6ks9p\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911455 kubelet[1923]: I0317 18:38:31.910932 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-cgroup\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911455 kubelet[1923]: I0317 18:38:31.910932 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hostproc" (OuterVolumeSpecName: "hostproc") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.911455 kubelet[1923]: I0317 18:38:31.910954 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-clustermesh-secrets\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911455 kubelet[1923]: I0317 18:38:31.910969 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.911455 kubelet[1923]: I0317 18:38:31.910974 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-ipsec-secrets\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911009 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-xtables-lock\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911034 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cni-path\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911053 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-etc-cni-netd\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911071 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-bpf-maps\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911096 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hubble-tls\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911658 kubelet[1923]: I0317 18:38:31.911114 1923 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-net\") pod \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\" (UID: \"aa9822d3-4703-45ff-9267-18ea23e9c6a1\") "
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911193 1923 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911208 1923 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911218 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911243 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911265 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.911908 kubelet[1923]: I0317 18:38:31.911281 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.912267 kubelet[1923]: I0317 18:38:31.911296 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cni-path" (OuterVolumeSpecName: "cni-path") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.912267 kubelet[1923]: I0317 18:38:31.911313 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.912267 kubelet[1923]: I0317 18:38:31.911329 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.912267 kubelet[1923]: I0317 18:38:31.911582 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 17 18:38:31.912527 kubelet[1923]: I0317 18:38:31.912496 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 17 18:38:31.915578 systemd[1]: var-lib-kubelet-pods-aa9822d3\x2d4703\x2d45ff\x2d9267\x2d18ea23e9c6a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6ks9p.mount: Deactivated successfully.
Mar 17 18:38:31.915682 systemd[1]: var-lib-kubelet-pods-aa9822d3\x2d4703\x2d45ff\x2d9267\x2d18ea23e9c6a1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:38:31.918097 kubelet[1923]: I0317 18:38:31.918070 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 18:38:31.918207 kubelet[1923]: I0317 18:38:31.918186 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 18:38:31.918234 systemd[1]: var-lib-kubelet-pods-aa9822d3\x2d4703\x2d45ff\x2d9267\x2d18ea23e9c6a1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:38:31.918326 systemd[1]: var-lib-kubelet-pods-aa9822d3\x2d4703\x2d45ff\x2d9267\x2d18ea23e9c6a1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:38:31.918430 kubelet[1923]: I0317 18:38:31.918097 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 17 18:38:31.918885 kubelet[1923]: I0317 18:38:31.918857 1923 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-kube-api-access-6ks9p" (OuterVolumeSpecName: "kube-api-access-6ks9p") pod "aa9822d3-4703-45ff-9267-18ea23e9c6a1" (UID: "aa9822d3-4703-45ff-9267-18ea23e9c6a1"). InnerVolumeSpecName "kube-api-access-6ks9p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012168 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012204 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012213 1923 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ks9p\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-kube-api-access-6ks9p\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012222 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012229 1923 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012235 1923 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012221 kubelet[1923]: I0317 18:38:32.012242 1923 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012546 kubelet[1923]: I0317 18:38:32.012250 1923 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012546 kubelet[1923]: I0317 18:38:32.012258 1923 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012546 kubelet[1923]: I0317 18:38:32.012264 1923 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa9822d3-4703-45ff-9267-18ea23e9c6a1-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012546 kubelet[1923]: I0317 18:38:32.012270 1923 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.012546 kubelet[1923]: I0317 18:38:32.012279 1923 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa9822d3-4703-45ff-9267-18ea23e9c6a1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:38:32.553166 kubelet[1923]: E0317 18:38:32.553120 1923 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:38:32.740218 systemd[1]: Removed slice kubepods-burstable-podaa9822d3_4703_45ff_9267_18ea23e9c6a1.slice.
Mar 17 18:38:32.776002 systemd[1]: Created slice kubepods-burstable-pode7b62c00_1c48_406f_a2bb_609b28016b7a.slice.
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917152 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-bpf-maps\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917192 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-lib-modules\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917209 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-host-proc-sys-net\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917224 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-cilium-run\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917239 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-hostproc\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917280 kubelet[1923]: I0317 18:38:32.917251 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7b62c00-1c48-406f-a2bb-609b28016b7a-hubble-tls\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917850 kubelet[1923]: I0317 18:38:32.917266 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-xtables-lock\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917850 kubelet[1923]: I0317 18:38:32.917335 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7b62c00-1c48-406f-a2bb-609b28016b7a-clustermesh-secrets\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917850 kubelet[1923]: I0317 18:38:32.917366 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-etc-cni-netd\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917850 kubelet[1923]: I0317 18:38:32.917398 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7b62c00-1c48-406f-a2bb-609b28016b7a-cilium-config-path\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917850 kubelet[1923]: I0317 18:38:32.917411 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e7b62c00-1c48-406f-a2bb-609b28016b7a-cilium-ipsec-secrets\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917978 kubelet[1923]: I0317 18:38:32.917425 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-host-proc-sys-kernel\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917978 kubelet[1923]: I0317 18:38:32.917454 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgmx\" (UniqueName: \"kubernetes.io/projected/e7b62c00-1c48-406f-a2bb-609b28016b7a-kube-api-access-ktgmx\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917978 kubelet[1923]: I0317 18:38:32.917472 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-cilium-cgroup\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:32.917978 kubelet[1923]: I0317 18:38:32.917487 1923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7b62c00-1c48-406f-a2bb-609b28016b7a-cni-path\") pod \"cilium-4hknw\" (UID: \"e7b62c00-1c48-406f-a2bb-609b28016b7a\") " pod="kube-system/cilium-4hknw"
Mar 17 18:38:33.078919 kubelet[1923]: E0317 18:38:33.078856 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:33.079634 env[1210]: time="2025-03-17T18:38:33.079497490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hknw,Uid:e7b62c00-1c48-406f-a2bb-609b28016b7a,Namespace:kube-system,Attempt:0,}"
Mar 17 18:38:33.095621 env[1210]: time="2025-03-17T18:38:33.095521905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:38:33.095621 env[1210]: time="2025-03-17T18:38:33.095591930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:38:33.095621 env[1210]: time="2025-03-17T18:38:33.095618581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:38:33.095943 env[1210]: time="2025-03-17T18:38:33.095883087Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61 pid=3812 runtime=io.containerd.runc.v2
Mar 17 18:38:33.107895 systemd[1]: Started cri-containerd-a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61.scope.
Mar 17 18:38:33.132201 env[1210]: time="2025-03-17T18:38:33.132138476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hknw,Uid:e7b62c00-1c48-406f-a2bb-609b28016b7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\""
Mar 17 18:38:33.132961 kubelet[1923]: E0317 18:38:33.132928 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:33.135966 env[1210]: time="2025-03-17T18:38:33.135907884Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:38:33.152390 env[1210]: time="2025-03-17T18:38:33.152315875Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e\""
Mar 17 18:38:33.153184 env[1210]: time="2025-03-17T18:38:33.153133681Z" level=info msg="StartContainer for \"20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e\""
Mar 17 18:38:33.170150 systemd[1]: Started cri-containerd-20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e.scope.
Mar 17 18:38:33.196487 env[1210]: time="2025-03-17T18:38:33.196440784Z" level=info msg="StartContainer for \"20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e\" returns successfully"
Mar 17 18:38:33.205326 systemd[1]: cri-containerd-20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e.scope: Deactivated successfully.
Mar 17 18:38:33.259776 env[1210]: time="2025-03-17T18:38:33.259715824Z" level=info msg="shim disconnected" id=20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e
Mar 17 18:38:33.260002 env[1210]: time="2025-03-17T18:38:33.259809153Z" level=warning msg="cleaning up after shim disconnected" id=20c98e94a1d3f518fdd09e2321468c0b0b1d8f69ba5485e3f6e21f3b7dae364e namespace=k8s.io
Mar 17 18:38:33.260002 env[1210]: time="2025-03-17T18:38:33.259821917Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:33.266478 env[1210]: time="2025-03-17T18:38:33.266433719Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3896 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:33.497920 kubelet[1923]: I0317 18:38:33.497780 1923 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9822d3-4703-45ff-9267-18ea23e9c6a1" path="/var/lib/kubelet/pods/aa9822d3-4703-45ff-9267-18ea23e9c6a1/volumes"
Mar 17 18:38:33.739832 kubelet[1923]: E0317 18:38:33.739801 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:33.742394 env[1210]: time="2025-03-17T18:38:33.742350049Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:38:33.767181 env[1210]: time="2025-03-17T18:38:33.767042635Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779\""
Mar 17 18:38:33.767693 env[1210]: time="2025-03-17T18:38:33.767651862Z" level=info msg="StartContainer for \"d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779\""
Mar 17 18:38:33.781991 systemd[1]: Started cri-containerd-d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779.scope.
Mar 17 18:38:33.808300 env[1210]: time="2025-03-17T18:38:33.808246781Z" level=info msg="StartContainer for \"d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779\" returns successfully"
Mar 17 18:38:33.811969 systemd[1]: cri-containerd-d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779.scope: Deactivated successfully.
Mar 17 18:38:33.834681 env[1210]: time="2025-03-17T18:38:33.834617380Z" level=info msg="shim disconnected" id=d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779
Mar 17 18:38:33.834681 env[1210]: time="2025-03-17T18:38:33.834674278Z" level=warning msg="cleaning up after shim disconnected" id=d59255ef06d06e15563b271b1e99542fe6d9c6366c8bab8f33254d3cc8c0d779 namespace=k8s.io
Mar 17 18:38:33.834925 env[1210]: time="2025-03-17T18:38:33.834690199Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:33.842444 env[1210]: time="2025-03-17T18:38:33.842387599Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3958 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:34.742927 kubelet[1923]: E0317 18:38:34.742898 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:34.744291 env[1210]: time="2025-03-17T18:38:34.744258425Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:38:35.048538 env[1210]: time="2025-03-17T18:38:35.048426403Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc\""
Mar 17 18:38:35.048914 env[1210]: time="2025-03-17T18:38:35.048892216Z" level=info msg="StartContainer for \"771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc\""
Mar 17 18:38:35.064204 systemd[1]: Started cri-containerd-771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc.scope.
Mar 17 18:38:35.118908 systemd[1]: cri-containerd-771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc.scope: Deactivated successfully.
Mar 17 18:38:35.126122 env[1210]: time="2025-03-17T18:38:35.126080315Z" level=info msg="StartContainer for \"771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc\" returns successfully"
Mar 17 18:38:35.140372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc-rootfs.mount: Deactivated successfully.
Mar 17 18:38:35.254349 env[1210]: time="2025-03-17T18:38:35.254295665Z" level=info msg="shim disconnected" id=771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc
Mar 17 18:38:35.254475 env[1210]: time="2025-03-17T18:38:35.254348075Z" level=warning msg="cleaning up after shim disconnected" id=771688ee6463cec38a23ca7b73d99b9e9f74db7c2fc865af79dd29a2eedd35bc namespace=k8s.io
Mar 17 18:38:35.254475 env[1210]: time="2025-03-17T18:38:35.254360218Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:35.260804 env[1210]: time="2025-03-17T18:38:35.260753177Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4014 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:35.747306 kubelet[1923]: E0317 18:38:35.747279 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:35.749048 env[1210]: time="2025-03-17T18:38:35.748988905Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:38:35.961398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2303557715.mount: Deactivated successfully.
Mar 17 18:38:36.015274 env[1210]: time="2025-03-17T18:38:36.015144440Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f\""
Mar 17 18:38:36.015925 env[1210]: time="2025-03-17T18:38:36.015888776Z" level=info msg="StartContainer for \"ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f\""
Mar 17 18:38:36.031158 systemd[1]: Started cri-containerd-ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f.scope.
Mar 17 18:38:36.056014 systemd[1]: cri-containerd-ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f.scope: Deactivated successfully.
Mar 17 18:38:36.057491 env[1210]: time="2025-03-17T18:38:36.057408287Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b62c00_1c48_406f_a2bb_609b28016b7a.slice/cri-containerd-ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f.scope/memory.events\": no such file or directory"
Mar 17 18:38:36.121241 env[1210]: time="2025-03-17T18:38:36.121161333Z" level=info msg="StartContainer for \"ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f\" returns successfully"
Mar 17 18:38:36.136688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f-rootfs.mount: Deactivated successfully.
Mar 17 18:38:36.160567 env[1210]: time="2025-03-17T18:38:36.160518892Z" level=info msg="shim disconnected" id=ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f
Mar 17 18:38:36.160705 env[1210]: time="2025-03-17T18:38:36.160569669Z" level=warning msg="cleaning up after shim disconnected" id=ed35a4dbcf54dcd5874b70461721a2881797a9dbc50c5c3cfe8a1729823c715f namespace=k8s.io
Mar 17 18:38:36.160705 env[1210]: time="2025-03-17T18:38:36.160582153Z" level=info msg="cleaning up dead shim"
Mar 17 18:38:36.167012 env[1210]: time="2025-03-17T18:38:36.166967439Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:38:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4068 runtime=io.containerd.runc.v2\n"
Mar 17 18:38:36.750912 kubelet[1923]: E0317 18:38:36.750842 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:36.753373 env[1210]: time="2025-03-17T18:38:36.753336063Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:38:36.825292 env[1210]: time="2025-03-17T18:38:36.825209930Z" level=info msg="CreateContainer within sandbox \"a4f87aedf794e63f0375a35960a85554d7c03ac5418377efe3894de20b9e9f61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4\""
Mar 17 18:38:36.825853 env[1210]: time="2025-03-17T18:38:36.825820540Z" level=info msg="StartContainer for \"e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4\""
Mar 17 18:38:36.844530 systemd[1]: Started cri-containerd-e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4.scope.
Mar 17 18:38:36.884373 env[1210]: time="2025-03-17T18:38:36.884302195Z" level=info msg="StartContainer for \"e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4\" returns successfully"
Mar 17 18:38:37.134792 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 17 18:38:37.756090 kubelet[1923]: E0317 18:38:37.756061 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:39.079734 kubelet[1923]: E0317 18:38:39.079699 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:39.376533 systemd[1]: run-containerd-runc-k8s.io-e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4-runc.lbiqs2.mount: Deactivated successfully.
Mar 17 18:38:39.708656 systemd-networkd[1033]: lxc_health: Link UP
Mar 17 18:38:39.716040 systemd-networkd[1033]: lxc_health: Gained carrier
Mar 17 18:38:39.716826 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:38:41.055958 systemd-networkd[1033]: lxc_health: Gained IPv6LL
Mar 17 18:38:41.080610 kubelet[1923]: E0317 18:38:41.080580 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:41.177549 kubelet[1923]: I0317 18:38:41.177480 1923 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4hknw" podStartSLOduration=9.177465956 podStartE2EDuration="9.177465956s" podCreationTimestamp="2025-03-17 18:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:38:37.815171647 +0000 UTC m=+110.404000810" watchObservedRunningTime="2025-03-17 18:38:41.177465956 +0000 UTC m=+113.766295099"
Mar 17 18:38:41.458474 systemd[1]: run-containerd-runc-k8s.io-e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4-runc.VRbePb.mount: Deactivated successfully.
Mar 17 18:38:41.763233 kubelet[1923]: E0317 18:38:41.763114 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:42.764797 kubelet[1923]: E0317 18:38:42.764753 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:43.497650 kubelet[1923]: E0317 18:38:43.497622 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:43.552771 systemd[1]: run-containerd-runc-k8s.io-e64ac6f0729569768158c24ae3f92aea44ffdf7b29253db24ba6a015347c37d4-runc.PPsy4Y.mount: Deactivated successfully.
Mar 17 18:38:44.496080 kubelet[1923]: E0317 18:38:44.496042 1923 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:38:45.676271 sshd[3783]: pam_unix(sshd:session): session closed for user core
Mar 17 18:38:45.678690 systemd[1]: sshd@27-10.0.0.35:22-10.0.0.1:44828.service: Deactivated successfully.
Mar 17 18:38:45.679376 systemd[1]: session-28.scope: Deactivated successfully.
Mar 17 18:38:45.679912 systemd-logind[1196]: Session 28 logged out. Waiting for processes to exit.
Mar 17 18:38:45.680566 systemd-logind[1196]: Removed session 28.
Mar 17 18:38:47.484892 env[1210]: time="2025-03-17T18:38:47.484841815Z" level=info msg="StopPodSandbox for \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\""
Mar 17 18:38:47.485344 env[1210]: time="2025-03-17T18:38:47.484927459Z" level=info msg="TearDown network for sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" successfully"
Mar 17 18:38:47.485344 env[1210]: time="2025-03-17T18:38:47.484956906Z" level=info msg="StopPodSandbox for \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" returns successfully"
Mar 17 18:38:47.485901 env[1210]: time="2025-03-17T18:38:47.485846294Z" level=info msg="RemovePodSandbox for \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\""
Mar 17 18:38:47.486083 env[1210]: time="2025-03-17T18:38:47.485899896Z" level=info msg="Forcibly stopping sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\""
Mar 17 18:38:47.486083 env[1210]: time="2025-03-17T18:38:47.486008545Z" level=info msg="TearDown network for sandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" successfully"
Mar 17 18:38:47.491635 env[1210]: time="2025-03-17T18:38:47.491591274Z" level=info msg="RemovePodSandbox \"f0c20ce1eba723c2734593b70069beccd131d62fa4caa7cb10c347b6533ece2f\" returns successfully"
Mar 17 18:38:47.492032 env[1210]: time="2025-03-17T18:38:47.492007182Z" level=info msg="StopPodSandbox for \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\""
Mar 17 18:38:47.492110 env[1210]: time="2025-03-17T18:38:47.492075764Z" level=info msg="TearDown network for sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" successfully"
Mar 17 18:38:47.492110 env[1210]: time="2025-03-17T18:38:47.492105962Z" level=info msg="StopPodSandbox for \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" returns successfully"
Mar 17 18:38:47.492310 env[1210]: time="2025-03-17T18:38:47.492288242Z" level=info msg="RemovePodSandbox for \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\""
Mar 17 18:38:47.492384 env[1210]: time="2025-03-17T18:38:47.492309082Z" level=info msg="Forcibly stopping sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\""
Mar 17 18:38:47.492384 env[1210]: time="2025-03-17T18:38:47.492353367Z" level=info msg="TearDown network for sandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" successfully"
Mar 17 18:38:47.495313 env[1210]: time="2025-03-17T18:38:47.495288382Z" level=info msg="RemovePodSandbox \"058e31accdf8bce6a021fbcfe960870790f61366172d7101b45a29bcb7b46c00\" returns successfully"